Jul 11 00:35:13.817837 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 22:18:23 -00 2025 Jul 11 00:35:13.817858 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bb76c73bf3935f7fa0665d7beff518d75bfa5b173769c8a2e5d3c0cf9e54372 Jul 11 00:35:13.817869 kernel: BIOS-provided physical RAM map: Jul 11 00:35:13.817876 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 11 00:35:13.817882 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 11 00:35:13.817889 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 11 00:35:13.817896 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 11 00:35:13.817903 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 11 00:35:13.817924 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 11 00:35:13.817931 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 11 00:35:13.817937 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jul 11 00:35:13.817944 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 11 00:35:13.817950 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 11 00:35:13.817957 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 11 00:35:13.817967 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 11 00:35:13.817974 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 11 00:35:13.817981 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 11 00:35:13.817988 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 11 00:35:13.817995 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 11 00:35:13.818002 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 11 00:35:13.818009 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 11 00:35:13.818016 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 11 00:35:13.818023 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 11 00:35:13.818030 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 11 00:35:13.818037 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 11 00:35:13.818046 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 11 00:35:13.818053 kernel: NX (Execute Disable) protection: active Jul 11 00:35:13.818060 kernel: APIC: Static calls initialized Jul 11 00:35:13.818067 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jul 11 00:35:13.818074 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jul 11 00:35:13.818081 kernel: extended physical RAM map: Jul 11 00:35:13.818088 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 11 00:35:13.818095 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 11 00:35:13.818102 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 11 00:35:13.818109 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jul 11 00:35:13.818116 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 11 00:35:13.818125 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 11 00:35:13.818132 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 11 00:35:13.818139 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jul 11 00:35:13.818147 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jul 11 00:35:13.818157 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jul 11 00:35:13.818164 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jul 11 00:35:13.818173 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jul 11 00:35:13.818181 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 11 00:35:13.818188 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 11 00:35:13.818195 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 11 00:35:13.818203 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 11 00:35:13.818210 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 11 00:35:13.818217 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 11 00:35:13.818225 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 11 00:35:13.818232 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 11 00:35:13.818241 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 11 00:35:13.818249 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 11 00:35:13.818256 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 11 00:35:13.818263 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 11 00:35:13.818270 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 11 00:35:13.818278 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 11 00:35:13.818285 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 11 00:35:13.818292 kernel: efi: EFI v2.7 by EDK II Jul 11 00:35:13.818300 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jul 11 00:35:13.818307 kernel: random: crng init done Jul 11 00:35:13.818314 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jul 11 00:35:13.818322 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jul 11 00:35:13.818331 kernel: secureboot: Secure boot disabled Jul 11 00:35:13.818338 kernel: SMBIOS 2.8 present. 
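The BIOS-e820 and extended memory-map entries above describe which physical address ranges the firmware marked usable, reserved, or ACPI NVS. On a running system the same map can be read back from sysfs; a minimal Python sketch, assuming a kernel built with CONFIG_FIRMWARE_MEMMAP so that /sys/firmware/memmap is populated:

    import os

    MEMMAP = "/sys/firmware/memmap"  # one numbered directory per firmware map entry

    for entry in sorted(os.listdir(MEMMAP), key=int):
        base = os.path.join(MEMMAP, entry)
        with open(os.path.join(base, "start")) as f:
            start = int(f.read().strip(), 16)
        with open(os.path.join(base, "end")) as f:
            end = int(f.read().strip(), 16)
        with open(os.path.join(base, "type")) as f:
            kind = f.read().strip()
        print(f"[mem {start:#018x}-{end:#018x}] {kind}")

The output mirrors the "BIOS-e820" lines printed at boot, which makes it easy to diff the firmware map against what the kernel later reserves.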
Jul 11 00:35:13.818346 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jul 11 00:35:13.818353 kernel: DMI: Memory slots populated: 1/1 Jul 11 00:35:13.818360 kernel: Hypervisor detected: KVM Jul 11 00:35:13.818368 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 11 00:35:13.818375 kernel: kvm-clock: using sched offset of 3742070126 cycles Jul 11 00:35:13.818382 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 11 00:35:13.818390 kernel: tsc: Detected 2794.746 MHz processor Jul 11 00:35:13.818398 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 11 00:35:13.818405 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 11 00:35:13.818415 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jul 11 00:35:13.818422 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 11 00:35:13.818430 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 11 00:35:13.818437 kernel: Using GB pages for direct mapping Jul 11 00:35:13.818445 kernel: ACPI: Early table checksum verification disabled Jul 11 00:35:13.818452 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 11 00:35:13.818460 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 11 00:35:13.818468 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:35:13.818475 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:35:13.818484 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 11 00:35:13.818492 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:35:13.818499 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:35:13.818507 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:35:13.818514 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:35:13.818522 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 11 00:35:13.818529 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jul 11 00:35:13.818537 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jul 11 00:35:13.818546 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 11 00:35:13.818554 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jul 11 00:35:13.818561 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jul 11 00:35:13.818568 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jul 11 00:35:13.818576 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jul 11 00:35:13.818583 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jul 11 00:35:13.818590 kernel: No NUMA configuration found Jul 11 00:35:13.818598 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jul 11 00:35:13.818605 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jul 11 00:35:13.818613 kernel: Zone ranges: Jul 11 00:35:13.818622 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 11 00:35:13.818630 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jul 11 00:35:13.818637 kernel: Normal empty Jul 11 00:35:13.818644 kernel: Device empty Jul 11 00:35:13.818652 kernel: Movable zone start for each node Jul 11 00:35:13.818659 
kernel: Early memory node ranges Jul 11 00:35:13.818677 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 11 00:35:13.818687 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 11 00:35:13.818696 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 11 00:35:13.818707 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jul 11 00:35:13.818714 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jul 11 00:35:13.818722 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jul 11 00:35:13.818729 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jul 11 00:35:13.818745 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jul 11 00:35:13.818753 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jul 11 00:35:13.818773 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 11 00:35:13.818781 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 11 00:35:13.818798 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 11 00:35:13.818806 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 11 00:35:13.818814 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jul 11 00:35:13.818822 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jul 11 00:35:13.818831 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 11 00:35:13.818843 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jul 11 00:35:13.818851 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jul 11 00:35:13.818859 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 11 00:35:13.818866 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 11 00:35:13.818876 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 11 00:35:13.818884 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 11 00:35:13.818892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 11 00:35:13.818900 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 11 00:35:13.819932 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 11 00:35:13.819943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 11 00:35:13.819951 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 11 00:35:13.819959 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 11 00:35:13.819967 kernel: TSC deadline timer available Jul 11 00:35:13.819978 kernel: CPU topo: Max. logical packages: 1 Jul 11 00:35:13.819986 kernel: CPU topo: Max. logical dies: 1 Jul 11 00:35:13.819994 kernel: CPU topo: Max. dies per package: 1 Jul 11 00:35:13.820001 kernel: CPU topo: Max. threads per core: 1 Jul 11 00:35:13.820009 kernel: CPU topo: Num. cores per package: 4 Jul 11 00:35:13.820017 kernel: CPU topo: Num. 
threads per package: 4 Jul 11 00:35:13.820025 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jul 11 00:35:13.820032 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 11 00:35:13.820040 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 11 00:35:13.820048 kernel: kvm-guest: setup PV sched yield Jul 11 00:35:13.820058 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jul 11 00:35:13.820065 kernel: Booting paravirtualized kernel on KVM Jul 11 00:35:13.820073 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 11 00:35:13.820082 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 11 00:35:13.820090 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jul 11 00:35:13.820097 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jul 11 00:35:13.820105 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 11 00:35:13.820113 kernel: kvm-guest: PV spinlocks enabled Jul 11 00:35:13.820121 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 11 00:35:13.820132 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bb76c73bf3935f7fa0665d7beff518d75bfa5b173769c8a2e5d3c0cf9e54372 Jul 11 00:35:13.820140 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 11 00:35:13.820148 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 11 00:35:13.820156 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 11 00:35:13.820163 kernel: Fallback order for Node 0: 0 Jul 11 00:35:13.820171 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jul 11 00:35:13.820179 kernel: Policy zone: DMA32 Jul 11 00:35:13.820187 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 11 00:35:13.820197 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 11 00:35:13.820205 kernel: ftrace: allocating 40095 entries in 157 pages Jul 11 00:35:13.820213 kernel: ftrace: allocated 157 pages with 5 groups Jul 11 00:35:13.820221 kernel: Dynamic Preempt: voluntary Jul 11 00:35:13.820228 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 11 00:35:13.820240 kernel: rcu: RCU event tracing is enabled. Jul 11 00:35:13.820248 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 11 00:35:13.820256 kernel: Trampoline variant of Tasks RCU enabled. Jul 11 00:35:13.820264 kernel: Rude variant of Tasks RCU enabled. Jul 11 00:35:13.820274 kernel: Tracing variant of Tasks RCU enabled. Jul 11 00:35:13.820282 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 11 00:35:13.820289 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 11 00:35:13.820297 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 11 00:35:13.820305 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 11 00:35:13.820313 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
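The "Kernel command line:" entry above carries the Flatcar-specific parameters (root=LABEL=ROOT, mount.usr, verity.usrhash, flatcar.first_boot). Individual parameters can be pulled out of /proc/cmdline on the live system; a minimal sketch, looking up only keys that appear in this log:

    import shlex

    with open("/proc/cmdline") as f:
        params = shlex.split(f.read())

    # flag-only parameters map to True, key=value parameters keep their value
    opts = {}
    for p in params:
        key, _, value = p.partition("=")
        opts[key] = value if value else True

    print(opts.get("root"))            # e.g. LABEL=ROOT
    print(opts.get("verity.usrhash"))  # root hash consumed by verity-setup later in this log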
Jul 11 00:35:13.820323 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 11 00:35:13.820331 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 11 00:35:13.820339 kernel: Console: colour dummy device 80x25 Jul 11 00:35:13.820351 kernel: printk: legacy console [ttyS0] enabled Jul 11 00:35:13.820360 kernel: ACPI: Core revision 20240827 Jul 11 00:35:13.820369 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 11 00:35:13.820378 kernel: APIC: Switch to symmetric I/O mode setup Jul 11 00:35:13.820386 kernel: x2apic enabled Jul 11 00:35:13.820394 kernel: APIC: Switched APIC routing to: physical x2apic Jul 11 00:35:13.820402 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 11 00:35:13.820410 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 11 00:35:13.820417 kernel: kvm-guest: setup PV IPIs Jul 11 00:35:13.820427 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 11 00:35:13.820435 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns Jul 11 00:35:13.820444 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746) Jul 11 00:35:13.820452 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 11 00:35:13.820459 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 11 00:35:13.820467 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 11 00:35:13.820475 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 11 00:35:13.820483 kernel: Spectre V2 : Mitigation: Retpolines Jul 11 00:35:13.820491 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 11 00:35:13.820501 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 11 00:35:13.820508 kernel: RETBleed: Mitigation: untrained return thunk Jul 11 00:35:13.820516 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 11 00:35:13.820524 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 11 00:35:13.820532 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 11 00:35:13.820541 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 11 00:35:13.820549 kernel: x86/bugs: return thunk changed Jul 11 00:35:13.820556 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 11 00:35:13.820566 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 11 00:35:13.820574 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 11 00:35:13.820582 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 11 00:35:13.820590 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 11 00:35:13.820597 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 11 00:35:13.820605 kernel: Freeing SMP alternatives memory: 32K Jul 11 00:35:13.820613 kernel: pid_max: default: 32768 minimum: 301 Jul 11 00:35:13.820621 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 11 00:35:13.820628 kernel: landlock: Up and running. Jul 11 00:35:13.820638 kernel: SELinux: Initializing. 
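The Spectre V1/V2, RETBleed and Speculative Return Stack Overflow lines above are the kernel's mitigation decisions for this vCPU model. On kernels recent enough to expose them, the same verdicts can be read back from sysfs; a minimal sketch:

    import os

    VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

    for name in sorted(os.listdir(VULN_DIR)):
        with open(os.path.join(VULN_DIR, name)) as f:
            print(f"{name}: {f.read().strip()}")

Each file contains a one-line status such as "Mitigation: Retpolines" or "Vulnerable: Safe RET, no microcode", matching the boot-time messages.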
Jul 11 00:35:13.820646 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 11 00:35:13.820654 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 11 00:35:13.820670 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 11 00:35:13.820681 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 11 00:35:13.820690 kernel: ... version: 0 Jul 11 00:35:13.820700 kernel: ... bit width: 48 Jul 11 00:35:13.820710 kernel: ... generic registers: 6 Jul 11 00:35:13.820719 kernel: ... value mask: 0000ffffffffffff Jul 11 00:35:13.820730 kernel: ... max period: 00007fffffffffff Jul 11 00:35:13.820737 kernel: ... fixed-purpose events: 0 Jul 11 00:35:13.820745 kernel: ... event mask: 000000000000003f Jul 11 00:35:13.820753 kernel: signal: max sigframe size: 1776 Jul 11 00:35:13.820761 kernel: rcu: Hierarchical SRCU implementation. Jul 11 00:35:13.820769 kernel: rcu: Max phase no-delay instances is 400. Jul 11 00:35:13.820785 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 11 00:35:13.820793 kernel: smp: Bringing up secondary CPUs ... Jul 11 00:35:13.820801 kernel: smpboot: x86: Booting SMP configuration: Jul 11 00:35:13.820809 kernel: .... node #0, CPUs: #1 #2 #3 Jul 11 00:35:13.820819 kernel: smp: Brought up 1 node, 4 CPUs Jul 11 00:35:13.820827 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) Jul 11 00:35:13.820835 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 137196K reserved, 0K cma-reserved) Jul 11 00:35:13.820843 kernel: devtmpfs: initialized Jul 11 00:35:13.820850 kernel: x86/mm: Memory block size: 128MB Jul 11 00:35:13.820858 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 11 00:35:13.820866 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 11 00:35:13.820874 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jul 11 00:35:13.820884 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 11 00:35:13.820892 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jul 11 00:35:13.820900 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 11 00:35:13.820924 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 11 00:35:13.820932 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 11 00:35:13.820940 kernel: pinctrl core: initialized pinctrl subsystem Jul 11 00:35:13.820948 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 11 00:35:13.820956 kernel: audit: initializing netlink subsys (disabled) Jul 11 00:35:13.820963 kernel: audit: type=2000 audit(1752194111.852:1): state=initialized audit_enabled=0 res=1 Jul 11 00:35:13.820974 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 11 00:35:13.820982 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 11 00:35:13.820989 kernel: cpuidle: using governor menu Jul 11 00:35:13.820997 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 11 00:35:13.821005 kernel: dca service started, version 1.12.1 Jul 11 00:35:13.821013 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 11 00:35:13.821021 kernel: PCI: Using 
configuration type 1 for base access Jul 11 00:35:13.821028 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 11 00:35:13.821036 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 11 00:35:13.821046 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 11 00:35:13.821054 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 11 00:35:13.821061 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 11 00:35:13.821069 kernel: ACPI: Added _OSI(Module Device) Jul 11 00:35:13.821077 kernel: ACPI: Added _OSI(Processor Device) Jul 11 00:35:13.821085 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 11 00:35:13.821093 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 11 00:35:13.821100 kernel: ACPI: Interpreter enabled Jul 11 00:35:13.821108 kernel: ACPI: PM: (supports S0 S3 S5) Jul 11 00:35:13.821118 kernel: ACPI: Using IOAPIC for interrupt routing Jul 11 00:35:13.821125 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 11 00:35:13.821133 kernel: PCI: Using E820 reservations for host bridge windows Jul 11 00:35:13.821141 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 11 00:35:13.821149 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 11 00:35:13.821344 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 11 00:35:13.821518 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 11 00:35:13.821635 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 11 00:35:13.821649 kernel: PCI host bridge to bus 0000:00 Jul 11 00:35:13.821787 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 11 00:35:13.821896 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 11 00:35:13.822029 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 11 00:35:13.822135 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jul 11 00:35:13.822238 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jul 11 00:35:13.822342 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jul 11 00:35:13.822450 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 11 00:35:13.822583 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jul 11 00:35:13.822726 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jul 11 00:35:13.822844 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jul 11 00:35:13.822978 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jul 11 00:35:13.823094 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 11 00:35:13.823212 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 11 00:35:13.823346 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 11 00:35:13.823461 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jul 11 00:35:13.823576 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jul 11 00:35:13.823702 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jul 11 00:35:13.823827 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 11 00:35:13.823963 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jul 
11 00:35:13.824086 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jul 11 00:35:13.824224 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jul 11 00:35:13.824382 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 11 00:35:13.824518 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jul 11 00:35:13.824655 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jul 11 00:35:13.824805 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jul 11 00:35:13.824996 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jul 11 00:35:13.825143 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jul 11 00:35:13.825281 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 11 00:35:13.825427 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jul 11 00:35:13.825562 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jul 11 00:35:13.825707 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jul 11 00:35:13.825852 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jul 11 00:35:13.826012 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jul 11 00:35:13.826027 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 11 00:35:13.826038 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 11 00:35:13.826049 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 11 00:35:13.826059 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 11 00:35:13.826069 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 11 00:35:13.826079 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 11 00:35:13.826089 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 11 00:35:13.826102 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 11 00:35:13.826112 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 11 00:35:13.826122 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 11 00:35:13.826132 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 11 00:35:13.826142 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 11 00:35:13.826151 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 11 00:35:13.826161 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 11 00:35:13.826171 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 11 00:35:13.826180 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 11 00:35:13.826193 kernel: iommu: Default domain type: Translated Jul 11 00:35:13.826203 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 11 00:35:13.826212 kernel: efivars: Registered efivars operations Jul 11 00:35:13.826222 kernel: PCI: Using ACPI for IRQ routing Jul 11 00:35:13.826232 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 11 00:35:13.826242 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 11 00:35:13.826252 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jul 11 00:35:13.826262 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jul 11 00:35:13.826272 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jul 11 00:35:13.826283 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jul 11 00:35:13.826295 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jul 11 00:35:13.826306 kernel: e820: reserve 
RAM buffer [mem 0x9ce91000-0x9fffffff] Jul 11 00:35:13.826316 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jul 11 00:35:13.826445 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 11 00:35:13.826562 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 11 00:35:13.826689 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 11 00:35:13.826701 kernel: vgaarb: loaded Jul 11 00:35:13.826712 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 11 00:35:13.826720 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 11 00:35:13.826728 kernel: clocksource: Switched to clocksource kvm-clock Jul 11 00:35:13.826736 kernel: VFS: Disk quotas dquot_6.6.0 Jul 11 00:35:13.826744 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 11 00:35:13.826752 kernel: pnp: PnP ACPI init Jul 11 00:35:13.826879 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jul 11 00:35:13.826921 kernel: pnp: PnP ACPI: found 6 devices Jul 11 00:35:13.826949 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 11 00:35:13.826959 kernel: NET: Registered PF_INET protocol family Jul 11 00:35:13.826967 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 11 00:35:13.826975 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 11 00:35:13.826983 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 11 00:35:13.826991 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 11 00:35:13.827000 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 11 00:35:13.827008 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 11 00:35:13.827016 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 11 00:35:13.827026 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 11 00:35:13.827035 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 11 00:35:13.827043 kernel: NET: Registered PF_XDP protocol family Jul 11 00:35:13.827166 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jul 11 00:35:13.827283 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jul 11 00:35:13.827389 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 11 00:35:13.827493 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 11 00:35:13.827598 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 11 00:35:13.827724 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jul 11 00:35:13.827832 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jul 11 00:35:13.827952 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jul 11 00:35:13.827964 kernel: PCI: CLS 0 bytes, default 64 Jul 11 00:35:13.827972 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns Jul 11 00:35:13.827981 kernel: Initialise system trusted keyrings Jul 11 00:35:13.827989 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 11 00:35:13.827997 kernel: Key type asymmetric registered Jul 11 00:35:13.828005 kernel: Asymmetric key parser 'x509' registered Jul 11 00:35:13.828016 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 11 
00:35:13.828025 kernel: io scheduler mq-deadline registered Jul 11 00:35:13.828035 kernel: io scheduler kyber registered Jul 11 00:35:13.828043 kernel: io scheduler bfq registered Jul 11 00:35:13.828051 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 11 00:35:13.828062 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 11 00:35:13.828070 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 11 00:35:13.828078 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 11 00:35:13.828086 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 11 00:35:13.828095 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 11 00:35:13.828104 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 11 00:35:13.828112 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 11 00:35:13.828120 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 11 00:35:13.828250 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 11 00:35:13.828391 kernel: rtc_cmos 00:04: registered as rtc0 Jul 11 00:35:13.828402 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 11 00:35:13.828508 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T00:35:13 UTC (1752194113) Jul 11 00:35:13.828616 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jul 11 00:35:13.828626 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 11 00:35:13.828634 kernel: efifb: probing for efifb Jul 11 00:35:13.828643 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jul 11 00:35:13.828651 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jul 11 00:35:13.828674 kernel: efifb: scrolling: redraw Jul 11 00:35:13.828685 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 11 00:35:13.828696 kernel: Console: switching to colour frame buffer device 160x50 Jul 11 00:35:13.828706 kernel: fb0: EFI VGA frame buffer device Jul 11 00:35:13.828717 kernel: pstore: Using crash dump compression: deflate Jul 11 00:35:13.828728 kernel: pstore: Registered efi_pstore as persistent store backend Jul 11 00:35:13.828738 kernel: NET: Registered PF_INET6 protocol family Jul 11 00:35:13.828745 kernel: Segment Routing with IPv6 Jul 11 00:35:13.828754 kernel: In-situ OAM (IOAM) with IPv6 Jul 11 00:35:13.828762 kernel: NET: Registered PF_PACKET protocol family Jul 11 00:35:13.828772 kernel: Key type dns_resolver registered Jul 11 00:35:13.828780 kernel: IPI shorthand broadcast: enabled Jul 11 00:35:13.828788 kernel: sched_clock: Marking stable (3452002866, 171641027)->(3640534493, -16890600) Jul 11 00:35:13.828796 kernel: registered taskstats version 1 Jul 11 00:35:13.828804 kernel: Loading compiled-in X.509 certificates Jul 11 00:35:13.828813 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: e2778f992738e32ced6c6a485d2ed31f29141742' Jul 11 00:35:13.828820 kernel: Demotion targets for Node 0: null Jul 11 00:35:13.828828 kernel: Key type .fscrypt registered Jul 11 00:35:13.828836 kernel: Key type fscrypt-provisioning registered Jul 11 00:35:13.828846 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 11 00:35:13.828854 kernel: ima: Allocated hash algorithm: sha1 Jul 11 00:35:13.828862 kernel: ima: No architecture policies found Jul 11 00:35:13.828870 kernel: clk: Disabling unused clocks Jul 11 00:35:13.828878 kernel: Warning: unable to open an initial console. 
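The rtc_cmos entry above seeds the system clock with 2025-07-11T00:35:13 UTC and prints the matching Unix timestamp 1752194113. The conversion is easy to verify:

    from datetime import datetime, timezone

    # timestamp printed by rtc_cmos when it set the system clock
    print(datetime.fromtimestamp(1752194113, tz=timezone.utc).isoformat())
    # -> 2025-07-11T00:35:13+00:00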
Jul 11 00:35:13.828886 kernel: Freeing unused kernel image (initmem) memory: 54432K Jul 11 00:35:13.828894 kernel: Write protecting the kernel read-only data: 24576k Jul 11 00:35:13.828902 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 11 00:35:13.828929 kernel: Run /init as init process Jul 11 00:35:13.828937 kernel: with arguments: Jul 11 00:35:13.828945 kernel: /init Jul 11 00:35:13.828953 kernel: with environment: Jul 11 00:35:13.828961 kernel: HOME=/ Jul 11 00:35:13.828968 kernel: TERM=linux Jul 11 00:35:13.828976 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 11 00:35:13.828985 systemd[1]: Successfully made /usr/ read-only. Jul 11 00:35:13.828997 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 11 00:35:13.829008 systemd[1]: Detected virtualization kvm. Jul 11 00:35:13.829017 systemd[1]: Detected architecture x86-64. Jul 11 00:35:13.829025 systemd[1]: Running in initrd. Jul 11 00:35:13.829034 systemd[1]: No hostname configured, using default hostname. Jul 11 00:35:13.829042 systemd[1]: Hostname set to . Jul 11 00:35:13.829051 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:35:13.829059 systemd[1]: Queued start job for default target initrd.target. Jul 11 00:35:13.829070 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:35:13.829079 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:35:13.829088 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 11 00:35:13.829097 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:35:13.829106 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 11 00:35:13.829115 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 11 00:35:13.829125 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 11 00:35:13.829136 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 11 00:35:13.829144 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:35:13.829153 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:35:13.829161 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:35:13.829170 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:35:13.829178 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:35:13.829187 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:35:13.829195 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:35:13.829206 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:35:13.829214 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 11 00:35:13.829223 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 11 00:35:13.829232 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
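The "Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device" lines show how systemd escapes device paths into unit names: "/" becomes "-" and other special characters become \xNN. A rough sketch of that rule for the simple cases seen here (the canonical tool is systemd-escape --path, which also handles ".", "..", empty and non-ASCII paths):

    def systemd_escape_path(path: str) -> str:
        trimmed = path.strip("/") or "-"
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")                      # path separators become dashes
            elif ch.isalnum() or ch == "_" or (ch == "." and i != 0):
                out.append(ch)                       # kept verbatim
            else:
                out.append("\\x%02x" % ord(ch))      # everything else is hex-escaped
        return "".join(out)

    print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device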
Jul 11 00:35:13.829241 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:35:13.829270 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:35:13.829279 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:35:13.829287 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 11 00:35:13.829300 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:35:13.829314 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 11 00:35:13.829325 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 11 00:35:13.829336 systemd[1]: Starting systemd-fsck-usr.service... Jul 11 00:35:13.829347 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:35:13.829357 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:35:13.829368 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:35:13.829379 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 11 00:35:13.829393 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:35:13.829404 systemd[1]: Finished systemd-fsck-usr.service. Jul 11 00:35:13.829446 systemd-journald[219]: Collecting audit messages is disabled. Jul 11 00:35:13.829475 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 00:35:13.829486 systemd-journald[219]: Journal started Jul 11 00:35:13.829510 systemd-journald[219]: Runtime Journal (/run/log/journal/1062ff5577be4bf7ac4d54d762bd4edf) is 6M, max 48.5M, 42.4M free. Jul 11 00:35:13.817606 systemd-modules-load[220]: Inserted module 'overlay' Jul 11 00:35:13.831941 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:35:13.834937 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:35:13.839104 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:35:13.842140 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:35:13.843768 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:35:13.850931 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 11 00:35:13.853351 systemd-modules-load[220]: Inserted module 'br_netfilter' Jul 11 00:35:13.854437 kernel: Bridge firewalling registered Jul 11 00:35:13.857565 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:35:13.861003 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:35:13.864840 systemd-tmpfiles[238]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 11 00:35:13.867131 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:35:13.870020 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:35:13.877046 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
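systemd-modules-load reports inserting 'overlay' and 'br_netfilter' above (the latter triggers the bridge-filtering notice). Whether those modules are actually resident can be confirmed from /proc/modules; a minimal sketch:

    # modules named in the "Inserted module ..." messages above
    wanted = {"overlay", "br_netfilter"}

    with open("/proc/modules") as f:
        loaded = {line.split()[0] for line in f}

    for mod in sorted(wanted):
        print(mod, "loaded" if mod in loaded else "missing")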
Jul 11 00:35:13.880371 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:35:13.883654 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 11 00:35:13.887190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:35:13.888993 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:35:13.909239 dracut-cmdline[257]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5bb76c73bf3935f7fa0665d7beff518d75bfa5b173769c8a2e5d3c0cf9e54372 Jul 11 00:35:13.927593 systemd-resolved[260]: Positive Trust Anchors: Jul 11 00:35:13.927607 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:35:13.927637 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:35:13.930196 systemd-resolved[260]: Defaulting to hostname 'linux'. Jul 11 00:35:13.931214 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:35:13.938743 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:35:14.027943 kernel: SCSI subsystem initialized Jul 11 00:35:14.036935 kernel: Loading iSCSI transport class v2.0-870. Jul 11 00:35:14.046930 kernel: iscsi: registered transport (tcp) Jul 11 00:35:14.068930 kernel: iscsi: registered transport (qla4xxx) Jul 11 00:35:14.068956 kernel: QLogic iSCSI HBA Driver Jul 11 00:35:14.089547 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 00:35:14.107034 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 00:35:14.110633 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 00:35:14.165310 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 11 00:35:14.168686 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 11 00:35:14.233937 kernel: raid6: avx2x4 gen() 29230 MB/s Jul 11 00:35:14.250930 kernel: raid6: avx2x2 gen() 30276 MB/s Jul 11 00:35:14.267987 kernel: raid6: avx2x1 gen() 25047 MB/s Jul 11 00:35:14.268006 kernel: raid6: using algorithm avx2x2 gen() 30276 MB/s Jul 11 00:35:14.285993 kernel: raid6: .... xor() 19122 MB/s, rmw enabled Jul 11 00:35:14.286021 kernel: raid6: using avx2x2 recovery algorithm Jul 11 00:35:14.306933 kernel: xor: automatically using best checksumming function avx Jul 11 00:35:14.470947 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 11 00:35:14.478062 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:35:14.481499 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
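The raid6 gen() and xor benchmark lines above are the kernel's raid library timing its SIMD parity routines as part of the Btrfs module load visible just before them. The underlying idea is XOR parity across data blocks; a toy illustration only (the kernel's real routines work on pages with AVX2, and RAID6's second syndrome uses Galois-field math rather than plain XOR):

    # toy XOR parity over two 16-byte "blocks": the parity recovers either block
    d0 = bytes(range(16))
    d1 = bytes(reversed(range(16)))

    parity = bytes(a ^ b for a, b in zip(d0, d1))
    recovered = bytes(a ^ b for a, b in zip(d0, parity))

    assert recovered == d1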
Jul 11 00:35:14.517288 systemd-udevd[473]: Using default interface naming scheme 'v255'. Jul 11 00:35:14.522867 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:35:14.527160 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 11 00:35:14.555532 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation Jul 11 00:35:14.580685 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:35:14.584524 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:35:14.673482 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:35:14.677557 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 11 00:35:14.706941 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 11 00:35:14.709428 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 11 00:35:14.713059 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 11 00:35:14.713078 kernel: GPT:9289727 != 19775487 Jul 11 00:35:14.713092 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 11 00:35:14.713106 kernel: GPT:9289727 != 19775487 Jul 11 00:35:14.713552 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 11 00:35:14.715018 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:35:14.728941 kernel: cryptd: max_cpu_qlen set to 1000 Jul 11 00:35:14.745938 kernel: AES CTR mode by8 optimization enabled Jul 11 00:35:14.762766 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:35:14.765558 kernel: libata version 3.00 loaded. Jul 11 00:35:14.765581 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 11 00:35:14.766526 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:35:14.771112 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:35:14.775256 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
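The GPT warnings above ("Alternate GPT header not at the end of the disk", 9289727 != 19775487) are what appears when a disk image built for a smaller disk is copied onto a larger virtual disk: the backup header still sits at the old last LBA. The arithmetic, using only figures from this log:

    SECTOR = 512
    disk_sectors = 19775488   # "[vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)"
    old_alt_lba = 9289727     # where the primary header still expects the backup header

    print(disk_sectors * SECTOR / 2**30)       # ~9.43 GiB, matching the log
    print((old_alt_lba + 1) * SECTOR / 2**30)  # ~4.43 GiB, the size the image was built for

The disk-uuid.service entries further down show the primary and secondary GPT headers being rewritten on this first boot.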
Jul 11 00:35:14.790954 kernel: ahci 0000:00:1f.2: version 3.0 Jul 11 00:35:14.791932 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 11 00:35:14.794016 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 11 00:35:14.794225 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 11 00:35:14.794405 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 11 00:35:14.797070 kernel: scsi host0: ahci Jul 11 00:35:14.797268 kernel: scsi host1: ahci Jul 11 00:35:14.797924 kernel: scsi host2: ahci Jul 11 00:35:14.798090 kernel: scsi host3: ahci Jul 11 00:35:14.798954 kernel: scsi host4: ahci Jul 11 00:35:14.800678 kernel: scsi host5: ahci Jul 11 00:35:14.800850 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jul 11 00:35:14.800862 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jul 11 00:35:14.802626 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jul 11 00:35:14.802662 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jul 11 00:35:14.804825 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jul 11 00:35:14.804852 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jul 11 00:35:14.811762 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 11 00:35:14.829723 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 11 00:35:14.831226 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 11 00:35:14.844047 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 11 00:35:14.854425 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 00:35:14.857316 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 11 00:35:14.858664 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:35:14.858725 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:35:14.863678 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:35:14.874496 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:35:14.877322 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 11 00:35:14.882888 disk-uuid[638]: Primary Header is updated. Jul 11 00:35:14.882888 disk-uuid[638]: Secondary Entries is updated. Jul 11 00:35:14.882888 disk-uuid[638]: Secondary Header is updated. Jul 11 00:35:14.886939 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:35:14.890954 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:35:14.897309 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 11 00:35:15.111949 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 11 00:35:15.112009 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 11 00:35:15.112945 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 11 00:35:15.113934 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 11 00:35:15.113960 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 11 00:35:15.114934 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 11 00:35:15.115933 kernel: ata3.00: applying bridge limits Jul 11 00:35:15.115949 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 11 00:35:15.116950 kernel: ata3.00: configured for UDMA/100 Jul 11 00:35:15.117968 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 11 00:35:15.189471 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 11 00:35:15.189753 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 11 00:35:15.210246 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 11 00:35:15.639097 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 11 00:35:15.639764 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 00:35:15.640124 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:35:15.640519 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:35:15.641839 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 11 00:35:15.676055 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:35:15.894740 disk-uuid[640]: The operation has completed successfully. Jul 11 00:35:15.896085 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:35:15.926166 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 11 00:35:15.926345 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 11 00:35:15.967714 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 11 00:35:15.998240 sh[672]: Success Jul 11 00:35:16.017796 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 11 00:35:16.017860 kernel: device-mapper: uevent: version 1.0.3 Jul 11 00:35:16.017882 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 11 00:35:16.028981 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 11 00:35:16.061572 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 11 00:35:16.065415 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 11 00:35:16.081671 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 11 00:35:16.089549 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 11 00:35:16.089580 kernel: BTRFS: device fsid 3f9b7830-c6a3-4ecb-9c03-fbe92ab5c328 devid 1 transid 42 /dev/mapper/usr (253:0) scanned by mount (684) Jul 11 00:35:16.090819 kernel: BTRFS info (device dm-0): first mount of filesystem 3f9b7830-c6a3-4ecb-9c03-fbe92ab5c328 Jul 11 00:35:16.090840 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:35:16.091683 kernel: BTRFS info (device dm-0): using free-space-tree Jul 11 00:35:16.096476 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 11 00:35:16.097987 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
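verity-setup.service above brings up /dev/mapper/usr against the verity.usrhash value from the kernel command line, with device-mapper reporting sha256 via the "sha256-ni" shash. dm-verity's root hash is the top of a Merkle tree of block hashes. A simplified sketch of that construction, assuming 4 KiB blocks and ignoring the salt and on-disk superblock that veritysetup adds (so it is not byte-compatible with a real tree; it only shows the shape of the computation):

    import hashlib

    BLOCK = 4096  # dm-verity's default data and hash block size

    def toy_verity_root(data: bytes) -> str:
        # level 0: hash every data block
        level = [hashlib.sha256(data[i:i + BLOCK]).digest()
                 for i in range(0, len(data), BLOCK)]
        # pack digests into hash blocks and hash again until one digest remains
        while len(level) > 1:
            packed = b"".join(level)
            packed += b"\x00" * (-len(packed) % BLOCK)
            level = [hashlib.sha256(packed[i:i + BLOCK]).digest()
                     for i in range(0, len(packed), BLOCK)]
        return level[0].hex()

    print(toy_verity_root(b"\x00" * (8 * BLOCK)))

Any change to a single data block changes the root, which is why a mismatch against verity.usrhash makes reads from /dev/mapper/usr fail.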
Jul 11 00:35:16.099681 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 11 00:35:16.100574 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 11 00:35:16.102623 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 11 00:35:16.132994 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (720) Jul 11 00:35:16.135198 kernel: BTRFS info (device vda6): first mount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0 Jul 11 00:35:16.135227 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:35:16.135244 kernel: BTRFS info (device vda6): using free-space-tree Jul 11 00:35:16.142950 kernel: BTRFS info (device vda6): last unmount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0 Jul 11 00:35:16.143801 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 11 00:35:16.147102 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 11 00:35:16.241002 ignition[762]: Ignition 2.21.0 Jul 11 00:35:16.241015 ignition[762]: Stage: fetch-offline Jul 11 00:35:16.241054 ignition[762]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:35:16.241063 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:35:16.241144 ignition[762]: parsed url from cmdline: "" Jul 11 00:35:16.241148 ignition[762]: no config URL provided Jul 11 00:35:16.241153 ignition[762]: reading system config file "/usr/lib/ignition/user.ign" Jul 11 00:35:16.241161 ignition[762]: no config at "/usr/lib/ignition/user.ign" Jul 11 00:35:16.241184 ignition[762]: op(1): [started] loading QEMU firmware config module Jul 11 00:35:16.241189 ignition[762]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 11 00:35:16.249851 ignition[762]: op(1): [finished] loading QEMU firmware config module Jul 11 00:35:16.250508 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:35:16.254009 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 00:35:16.292164 ignition[762]: parsing config with SHA512: 9214a2bfef4b5878f0f81f7ae969fbb2ef77aec82a4aa1f5ec745b5099c15e1cfef668ee9f0c414117d386622fb43187319a4084b266d3208785ca10fa9ee6c2 Jul 11 00:35:16.295427 unknown[762]: fetched base config from "system" Jul 11 00:35:16.295637 unknown[762]: fetched user config from "qemu" Jul 11 00:35:16.295967 ignition[762]: fetch-offline: fetch-offline passed Jul 11 00:35:16.296018 ignition[762]: Ignition finished successfully Jul 11 00:35:16.301064 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:35:16.301442 systemd-networkd[861]: lo: Link UP Jul 11 00:35:16.301452 systemd-networkd[861]: lo: Gained carrier Jul 11 00:35:16.303236 systemd-networkd[861]: Enumeration completed Jul 11 00:35:16.303465 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 00:35:16.303640 systemd-networkd[861]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:35:16.303644 systemd-networkd[861]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
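Ignition's fetch-offline stage above loads the user config from QEMU's fw_cfg (hence the qemu_fw_cfg modprobe) and logs "parsing config with SHA512: ...". If the config file handed to the VM is still at hand, the digest can be recomputed to confirm the guest saw the same bytes; a minimal sketch (the filename is a placeholder):

    import hashlib
    import sys

    # usage: python3 check_ignition.py <config.ign>
    with open(sys.argv[1], "rb") as f:
        print(hashlib.sha512(f.read()).hexdigest())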
Jul 11 00:35:16.304677 systemd-networkd[861]: eth0: Link UP Jul 11 00:35:16.304681 systemd-networkd[861]: eth0: Gained carrier Jul 11 00:35:16.304689 systemd-networkd[861]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:35:16.305558 systemd[1]: Reached target network.target - Network. Jul 11 00:35:16.307201 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 11 00:35:16.311151 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 11 00:35:16.323949 systemd-networkd[861]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:35:16.349042 ignition[865]: Ignition 2.21.0 Jul 11 00:35:16.349056 ignition[865]: Stage: kargs Jul 11 00:35:16.349182 ignition[865]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:35:16.349192 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:35:16.350299 ignition[865]: kargs: kargs passed Jul 11 00:35:16.350358 ignition[865]: Ignition finished successfully Jul 11 00:35:16.355505 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 11 00:35:16.358603 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 11 00:35:16.394169 ignition[874]: Ignition 2.21.0 Jul 11 00:35:16.394183 ignition[874]: Stage: disks Jul 11 00:35:16.394336 ignition[874]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:35:16.394348 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:35:16.394997 ignition[874]: disks: disks passed Jul 11 00:35:16.395040 ignition[874]: Ignition finished successfully Jul 11 00:35:16.398805 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 11 00:35:16.400364 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 11 00:35:16.402133 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 11 00:35:16.402369 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:35:16.402700 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:35:16.403178 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:35:16.404439 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 11 00:35:16.438538 systemd-fsck[883]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 11 00:35:16.446281 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 11 00:35:16.447672 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 11 00:35:16.559940 kernel: EXT4-fs (vda9): mounted filesystem b9a26173-6c72-4a5b-b1cb-ad71b806f75e r/w with ordered data mode. Quota mode: none. Jul 11 00:35:16.560370 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 11 00:35:16.561020 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 11 00:35:16.564556 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 00:35:16.566299 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 11 00:35:16.566659 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 11 00:35:16.566704 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jul 11 00:35:16.566728 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 00:35:16.597083 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 11 00:35:16.599472 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 11 00:35:16.604936 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (891) Jul 11 00:35:16.606981 kernel: BTRFS info (device vda6): first mount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0 Jul 11 00:35:16.607008 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:35:16.608937 kernel: BTRFS info (device vda6): using free-space-tree Jul 11 00:35:16.613731 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 11 00:35:16.639400 initrd-setup-root[915]: cut: /sysroot/etc/passwd: No such file or directory Jul 11 00:35:16.643831 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory Jul 11 00:35:16.648606 initrd-setup-root[929]: cut: /sysroot/etc/shadow: No such file or directory Jul 11 00:35:16.652341 initrd-setup-root[936]: cut: /sysroot/etc/gshadow: No such file or directory Jul 11 00:35:16.736594 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 11 00:35:16.739813 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 11 00:35:16.742478 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 11 00:35:16.759943 kernel: BTRFS info (device vda6): last unmount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0 Jul 11 00:35:16.773081 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 11 00:35:16.787796 ignition[1005]: INFO : Ignition 2.21.0 Jul 11 00:35:16.787796 ignition[1005]: INFO : Stage: mount Jul 11 00:35:16.789692 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:35:16.789692 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:35:16.793717 ignition[1005]: INFO : mount: mount passed Jul 11 00:35:16.794499 ignition[1005]: INFO : Ignition finished successfully Jul 11 00:35:16.797991 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 11 00:35:16.800258 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 11 00:35:17.088765 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 11 00:35:17.090335 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 00:35:17.113360 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1017) Jul 11 00:35:17.113397 kernel: BTRFS info (device vda6): first mount of filesystem 047d5cfa-d847-4e53-8f92-c8766cefdad0 Jul 11 00:35:17.114427 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:35:17.114454 kernel: BTRFS info (device vda6): using free-space-tree Jul 11 00:35:17.118158 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 11 00:35:17.149775 ignition[1034]: INFO : Ignition 2.21.0 Jul 11 00:35:17.149775 ignition[1034]: INFO : Stage: files Jul 11 00:35:17.151740 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:35:17.151740 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:35:17.154975 ignition[1034]: DEBUG : files: compiled without relabeling support, skipping Jul 11 00:35:17.156304 ignition[1034]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 11 00:35:17.156304 ignition[1034]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 11 00:35:17.159532 ignition[1034]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 11 00:35:17.159532 ignition[1034]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 11 00:35:17.159532 ignition[1034]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 11 00:35:17.158537 unknown[1034]: wrote ssh authorized keys file for user: core Jul 11 00:35:17.164485 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 11 00:35:17.164485 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 11 00:35:17.221724 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 11 00:35:17.482672 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 11 00:35:17.482672 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 11 00:35:17.486621 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 11 00:35:17.486621 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:35:17.486621 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:35:17.486621 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:35:17.486621 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:35:17.486621 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:35:17.486621 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:35:17.498494 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:35:17.498494 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:35:17.498494 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:35:17.498494 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:35:17.498494 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:35:17.498494 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 11 00:35:18.137137 systemd-networkd[861]: eth0: Gained IPv6LL Jul 11 00:35:19.967304 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: Service Unavailable Jul 11 00:35:20.167832 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #2 Jul 11 00:35:22.665694 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: Service Unavailable Jul 11 00:35:23.066315 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #3 Jul 11 00:35:25.562750 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: Service Unavailable Jul 11 00:35:26.363851 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #4 Jul 11 00:35:28.966615 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: Service Unavailable Jul 11 00:35:30.566802 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #5 Jul 11 00:35:32.882643 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: Service Unavailable Jul 11 00:35:36.082932 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #6 Jul 11 00:35:38.510864 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: Service Unavailable Jul 11 00:35:43.515080 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #7 Jul 11 00:35:45.928054 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: Service Unavailable Jul 11 00:35:50.932335 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #8 Jul 11 00:35:53.442251 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: Service Unavailable Jul 11 00:35:58.442540 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #9 Jul 11 00:36:00.866387 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: Service Unavailable Jul 11 00:36:05.870518 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #10 Jul 11 00:36:08.632930 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 11 00:36:08.977125 ignition[1034]: INFO : 
files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 00:36:08.977125 ignition[1034]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 11 00:36:08.980941 ignition[1034]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:36:09.062010 ignition[1034]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:36:09.062010 ignition[1034]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 11 00:36:09.062010 ignition[1034]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 11 00:36:09.066905 ignition[1034]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:36:09.066905 ignition[1034]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:36:09.066905 ignition[1034]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 11 00:36:09.066905 ignition[1034]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 00:36:09.088070 ignition[1034]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:36:09.091993 ignition[1034]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:36:09.093595 ignition[1034]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 00:36:09.093595 ignition[1034]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 11 00:36:09.093595 ignition[1034]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 11 00:36:09.093595 ignition[1034]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:36:09.093595 ignition[1034]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:36:09.093595 ignition[1034]: INFO : files: files passed Jul 11 00:36:09.093595 ignition[1034]: INFO : Ignition finished successfully Jul 11 00:36:09.103120 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 11 00:36:09.128594 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 11 00:36:09.130818 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 11 00:36:09.147080 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 11 00:36:09.147217 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
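The kubernetes sysext download in the files stage above failed with "Service Unavailable" nine times before succeeding on attempt #10, and the spacing between attempts roughly doubles (about 0.2 s, 0.4 s, 0.8 s, 1.6 s, 3.2 s) before levelling off near 5 s. The sketch below is a generic retry-with-capped-backoff loop that reproduces that shape; the schedule is inferred from the timestamps and the helper itself is an illustrative assumption, not Ignition's implementation.

```python
# Minimal sketch of the retry-with-capped-backoff pattern visible in the
# "attempt #1 ... attempt #10" sequence above. Doubling delays with a 5 s cap
# is inferred from the log timestamps, not a documented Ignition policy.
import time
import urllib.request
from urllib.error import HTTPError, URLError

def fetch_with_backoff(url: str, max_attempts: int = 10,
                       first_delay: float = 0.2, cap: float = 5.0) -> bytes:
    delay = first_delay
    for attempt in range(1, max_attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                return resp.read()
        except (HTTPError, URLError) as err:
            print(f"attempt #{attempt}: {err}")
            if attempt == max_attempts:
                raise
            time.sleep(delay)
            delay = min(delay * 2, cap)  # grow the wait, but cap it
    raise RuntimeError("unreachable")

# Example call (URL taken from the log above):
# data = fetch_with_backoff("https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw")
```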
Jul 11 00:36:09.149552 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory Jul 11 00:36:09.155676 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:36:09.155676 initrd-setup-root-after-ignition[1065]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:36:09.159418 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:36:09.162382 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:36:09.162620 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 11 00:36:09.166015 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 11 00:36:09.203552 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 11 00:36:09.203679 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 11 00:36:09.205074 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 11 00:36:09.207671 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 11 00:36:09.209987 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 11 00:36:09.210733 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 11 00:36:09.229826 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:36:09.232469 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 11 00:36:09.252554 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:36:09.252699 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:36:09.255300 systemd[1]: Stopped target timers.target - Timer Units. Jul 11 00:36:09.257678 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 11 00:36:09.257785 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:36:09.261835 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 11 00:36:09.263173 systemd[1]: Stopped target basic.target - Basic System. Jul 11 00:36:09.265364 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 11 00:36:09.267209 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 00:36:09.267544 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 11 00:36:09.267863 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 11 00:36:09.268361 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 11 00:36:09.268685 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 00:36:09.269187 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 11 00:36:09.269508 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 11 00:36:09.269822 systemd[1]: Stopped target swap.target - Swaps. Jul 11 00:36:09.270288 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 11 00:36:09.270391 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:36:09.287057 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jul 11 00:36:09.287202 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:36:09.289357 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 11 00:36:09.291458 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:36:09.292416 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:36:09.292521 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 11 00:36:09.295229 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:36:09.295336 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:36:09.295652 systemd[1]: Stopped target paths.target - Path Units. Jul 11 00:36:09.295893 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:36:09.299958 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:36:09.301142 systemd[1]: Stopped target slices.target - Slice Units. Jul 11 00:36:09.301455 systemd[1]: Stopped target sockets.target - Socket Units. Jul 11 00:36:09.301774 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:36:09.301860 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:36:09.307703 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 00:36:09.307788 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:36:09.309593 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 00:36:09.309705 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:36:09.311308 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:36:09.311407 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 11 00:36:09.316678 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 11 00:36:09.318625 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 00:36:09.318769 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:36:09.321210 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 11 00:36:09.322353 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:36:09.322465 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:36:09.325961 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:36:09.326068 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:36:09.330923 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:36:09.331032 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 11 00:36:09.345412 ignition[1090]: INFO : Ignition 2.21.0 Jul 11 00:36:09.345412 ignition[1090]: INFO : Stage: umount Jul 11 00:36:09.347263 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:36:09.347263 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:36:09.349776 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:36:09.350777 ignition[1090]: INFO : umount: umount passed Jul 11 00:36:09.351546 ignition[1090]: INFO : Ignition finished successfully Jul 11 00:36:09.355230 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:36:09.355357 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jul 11 00:36:09.356590 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 11 00:36:09.356711 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 11 00:36:09.358626 systemd[1]: Stopped target network.target - Network. Jul 11 00:36:09.359924 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 00:36:09.359994 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 11 00:36:09.361673 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:36:09.361723 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 11 00:36:09.365591 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:36:09.365696 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 11 00:36:09.367699 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 11 00:36:09.367759 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 11 00:36:09.368729 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:36:09.368794 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 11 00:36:09.369514 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 11 00:36:09.373695 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 11 00:36:09.382599 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 00:36:09.382745 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 11 00:36:09.387710 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 11 00:36:09.387993 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:36:09.388143 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 11 00:36:09.393473 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 11 00:36:09.394645 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 11 00:36:09.394992 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:36:09.395063 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:36:09.399351 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 11 00:36:09.400409 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:36:09.400478 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:36:09.402423 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:36:09.402469 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:36:09.407233 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:36:09.407289 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 11 00:36:09.408308 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 11 00:36:09.408366 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:36:09.412194 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:36:09.414275 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 11 00:36:09.414349 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 11 00:36:09.428499 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jul 11 00:36:09.428633 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 11 00:36:09.430569 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 11 00:36:09.430750 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:36:09.432744 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:36:09.432791 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 11 00:36:09.433666 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 00:36:09.433705 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:36:09.434127 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:36:09.434181 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:36:09.434793 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:36:09.434840 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 11 00:36:09.435578 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:36:09.435631 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:36:09.445593 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 11 00:36:09.446546 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 11 00:36:09.446615 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 00:36:09.450757 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 11 00:36:09.450825 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:36:09.454076 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:36:09.454155 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:36:09.459166 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 11 00:36:09.459240 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 11 00:36:09.459306 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 11 00:36:09.473537 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:36:09.473697 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 11 00:36:09.475046 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 11 00:36:09.477932 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 11 00:36:09.511145 systemd[1]: Switching root. Jul 11 00:36:09.557070 systemd-journald[219]: Journal stopped Jul 11 00:36:10.903680 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). 
Jul 11 00:36:10.903751 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 00:36:10.903769 kernel: SELinux: policy capability open_perms=1 Jul 11 00:36:10.903780 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 00:36:10.903791 kernel: SELinux: policy capability always_check_network=0 Jul 11 00:36:10.903802 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 00:36:10.903813 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 00:36:10.903827 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 00:36:10.903838 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 00:36:10.903849 kernel: SELinux: policy capability userspace_initial_context=0 Jul 11 00:36:10.903860 kernel: audit: type=1403 audit(1752194170.157:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 11 00:36:10.903872 systemd[1]: Successfully loaded SELinux policy in 47.158ms. Jul 11 00:36:10.903895 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.545ms. Jul 11 00:36:10.903924 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 11 00:36:10.903939 systemd[1]: Detected virtualization kvm. Jul 11 00:36:10.903954 systemd[1]: Detected architecture x86-64. Jul 11 00:36:10.903966 systemd[1]: Detected first boot. Jul 11 00:36:10.903978 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:36:10.903990 zram_generator::config[1136]: No configuration found. Jul 11 00:36:10.904003 kernel: Guest personality initialized and is inactive Jul 11 00:36:10.904014 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 11 00:36:10.904025 kernel: Initialized host personality Jul 11 00:36:10.904037 kernel: NET: Registered PF_VSOCK protocol family Jul 11 00:36:10.904056 systemd[1]: Populated /etc with preset unit settings. Jul 11 00:36:10.904071 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 11 00:36:10.904084 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 11 00:36:10.904096 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 11 00:36:10.904108 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 11 00:36:10.904120 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 11 00:36:10.904132 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 11 00:36:10.904148 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 11 00:36:10.904160 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 11 00:36:10.904179 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 11 00:36:10.904193 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 11 00:36:10.904205 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 11 00:36:10.904217 systemd[1]: Created slice user.slice - User and Session Slice. Jul 11 00:36:10.904228 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 11 00:36:10.904240 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:36:10.904253 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 11 00:36:10.904264 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 11 00:36:10.904277 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 11 00:36:10.904291 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:36:10.904308 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 11 00:36:10.904321 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:36:10.904333 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:36:10.904345 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 11 00:36:10.904356 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 11 00:36:10.904368 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 11 00:36:10.904380 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 11 00:36:10.904395 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:36:10.904407 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:36:10.904418 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:36:10.904430 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:36:10.904442 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 11 00:36:10.904454 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 11 00:36:10.904466 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 11 00:36:10.904478 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:36:10.904489 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:36:10.904503 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:36:10.904515 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 11 00:36:10.904527 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 11 00:36:10.904538 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 11 00:36:10.904550 systemd[1]: Mounting media.mount - External Media Directory... Jul 11 00:36:10.904562 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:36:10.904574 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 11 00:36:10.904586 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 11 00:36:10.904598 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 11 00:36:10.904612 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 11 00:36:10.904624 systemd[1]: Reached target machines.target - Containers. Jul 11 00:36:10.904636 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 11 00:36:10.904648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:36:10.904660 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:36:10.904672 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 11 00:36:10.904684 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:36:10.904695 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:36:10.904708 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:36:10.904720 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 11 00:36:10.904732 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:36:10.904744 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 00:36:10.904755 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 11 00:36:10.904767 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 11 00:36:10.904779 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 11 00:36:10.904790 systemd[1]: Stopped systemd-fsck-usr.service. Jul 11 00:36:10.904803 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 11 00:36:10.904816 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:36:10.904830 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:36:10.904841 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 00:36:10.904853 kernel: fuse: init (API version 7.41) Jul 11 00:36:10.904866 kernel: loop: module loaded Jul 11 00:36:10.904877 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 11 00:36:10.904889 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 11 00:36:10.904901 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:36:10.904944 systemd[1]: verity-setup.service: Deactivated successfully. Jul 11 00:36:10.904957 systemd[1]: Stopped verity-setup.service. Jul 11 00:36:10.904973 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:36:10.904985 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 11 00:36:10.905016 systemd-journald[1207]: Collecting audit messages is disabled. Jul 11 00:36:10.905039 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 11 00:36:10.905060 systemd-journald[1207]: Journal started Jul 11 00:36:10.905082 systemd-journald[1207]: Runtime Journal (/run/log/journal/1062ff5577be4bf7ac4d54d762bd4edf) is 6M, max 48.5M, 42.4M free. Jul 11 00:36:10.671969 systemd[1]: Queued start job for default target multi-user.target. Jul 11 00:36:10.694295 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 11 00:36:10.694767 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 11 00:36:10.906970 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:36:10.907858 systemd[1]: Mounted media.mount - External Media Directory. Jul 11 00:36:10.910424 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 11 00:36:10.911736 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 11 00:36:10.913091 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 11 00:36:10.922987 kernel: ACPI: bus type drm_connector registered Jul 11 00:36:10.927094 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 11 00:36:10.928762 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:36:10.930323 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 00:36:10.930563 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 11 00:36:10.932059 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:36:10.932313 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:36:10.933735 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:36:10.933995 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:36:10.935437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:36:10.935663 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:36:10.937199 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 00:36:10.937430 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 11 00:36:10.938781 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:36:10.939031 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:36:10.940442 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:36:10.941846 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 00:36:10.943403 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 11 00:36:10.945131 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 11 00:36:10.960244 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 00:36:10.962812 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 11 00:36:10.965282 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 11 00:36:10.966495 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 11 00:36:10.966576 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:36:10.968744 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 11 00:36:10.977013 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 11 00:36:10.978302 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:36:10.980407 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 11 00:36:10.984011 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jul 11 00:36:10.985584 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:36:10.987459 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 11 00:36:10.988667 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:36:10.989708 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:36:10.992153 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 11 00:36:10.998423 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 11 00:36:11.002704 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 11 00:36:11.004207 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 11 00:36:11.021144 systemd-journald[1207]: Time spent on flushing to /var/log/journal/1062ff5577be4bf7ac4d54d762bd4edf is 21.827ms for 1086 entries. Jul 11 00:36:11.021144 systemd-journald[1207]: System Journal (/var/log/journal/1062ff5577be4bf7ac4d54d762bd4edf) is 8M, max 195.6M, 187.6M free. Jul 11 00:36:11.059561 systemd-journald[1207]: Received client request to flush runtime journal. Jul 11 00:36:11.059608 kernel: loop0: detected capacity change from 0 to 146240 Jul 11 00:36:11.059681 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 00:36:11.026832 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:36:11.030541 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 11 00:36:11.033467 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:36:11.036635 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 11 00:36:11.039380 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 11 00:36:11.054985 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 11 00:36:11.058121 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:36:11.062565 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 11 00:36:11.077245 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 11 00:36:11.085941 kernel: loop1: detected capacity change from 0 to 113872 Jul 11 00:36:11.087470 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Jul 11 00:36:11.087486 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Jul 11 00:36:11.094263 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:36:11.112973 kernel: loop2: detected capacity change from 0 to 224512 Jul 11 00:36:11.144973 kernel: loop3: detected capacity change from 0 to 146240 Jul 11 00:36:11.161055 kernel: loop4: detected capacity change from 0 to 113872 Jul 11 00:36:11.171946 kernel: loop5: detected capacity change from 0 to 224512 Jul 11 00:36:11.183604 (sd-merge)[1278]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 11 00:36:11.184205 (sd-merge)[1278]: Merged extensions into '/usr'. Jul 11 00:36:11.188388 systemd[1]: Reload requested from client PID 1255 ('systemd-sysext') (unit systemd-sysext.service)... Jul 11 00:36:11.188403 systemd[1]: Reloading... 
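The (sd-merge) lines above layer the containerd-flatcar, docker-flatcar and kubernetes system extensions into /usr. Under the usual systemd-sysext convention, each merged image contributes an extension-release.<NAME> file below /usr/lib/extension-release.d/; the hypothetical helper below just lists those files and a couple of their fields as one way to see what ended up merged. The directory layout is an assumption from the sysext convention, not something stated in this log.

```python
# Illustrative sketch: after systemd-sysext has merged images into /usr (the
# "Merged extensions into '/usr'" line above), list the extension-release
# metadata each extension leaves behind. Layout per the common sysext
# convention; not taken from this log.
from pathlib import Path

RELEASE_DIR = Path("/usr/lib/extension-release.d")

def merged_extensions() -> dict[str, dict[str, str]]:
    out: dict[str, dict[str, str]] = {}
    for release in sorted(RELEASE_DIR.glob("extension-release.*")):
        name = release.name.removeprefix("extension-release.")
        fields: dict[str, str] = {}
        for line in release.read_text().splitlines():
            if "=" in line and not line.startswith("#"):
                key, _, value = line.partition("=")
                fields[key.strip()] = value.strip().strip('"')
        out[name] = fields
    return out

if __name__ == "__main__":
    for name, fields in merged_extensions().items():
        print(name, fields.get("ID", "?"), fields.get("SYSEXT_LEVEL", ""))
```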
Jul 11 00:36:11.225949 zram_generator::config[1302]: No configuration found. Jul 11 00:36:11.307475 ldconfig[1250]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 00:36:11.349522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:36:11.430109 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 00:36:11.430681 systemd[1]: Reloading finished in 241 ms. Jul 11 00:36:11.458325 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 11 00:36:11.459946 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 11 00:36:11.477234 systemd[1]: Starting ensure-sysext.service... Jul 11 00:36:11.479042 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:36:11.488172 systemd[1]: Reload requested from client PID 1341 ('systemctl') (unit ensure-sysext.service)... Jul 11 00:36:11.488184 systemd[1]: Reloading... Jul 11 00:36:11.524214 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 11 00:36:11.524255 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 11 00:36:11.524556 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 11 00:36:11.524834 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 11 00:36:11.527219 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 11 00:36:11.527552 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Jul 11 00:36:11.527669 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Jul 11 00:36:11.533765 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:36:11.533845 systemd-tmpfiles[1342]: Skipping /boot Jul 11 00:36:11.541954 zram_generator::config[1372]: No configuration found. Jul 11 00:36:11.547210 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:36:11.547225 systemd-tmpfiles[1342]: Skipping /boot Jul 11 00:36:11.622698 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:36:11.702666 systemd[1]: Reloading finished in 214 ms. Jul 11 00:36:11.728311 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 11 00:36:11.753476 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:36:11.762060 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 11 00:36:11.764403 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 11 00:36:11.779356 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 11 00:36:11.782503 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:36:11.786070 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
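The "Duplicate line for path ..." warnings above come from several tmpfiles.d fragments declaring the same paths (/var/lib/nfs/sm, /root, /var/log/journal, /var/lib/systemd). A rough way to see which fragments collide is sketched below; the parsing is deliberately naive (whitespace-split, comments skipped) and ignores systemd-tmpfiles' real precedence rules, so treat it purely as an illustration.

```python
# Sketch of how one might reproduce the "Duplicate line for path ..." warnings
# above: scan the standard tmpfiles.d directories and report paths declared by
# more than one line. Naive parsing; not how systemd-tmpfiles itself works.
from collections import defaultdict
from pathlib import Path

TMPFILES_DIRS = [Path("/etc/tmpfiles.d"), Path("/run/tmpfiles.d"), Path("/usr/lib/tmpfiles.d")]

def duplicate_paths() -> dict[str, list[str]]:
    seen: dict[str, list[str]] = defaultdict(list)
    for d in TMPFILES_DIRS:
        if not d.is_dir():
            continue
        for conf in sorted(d.glob("*.conf")):
            for lineno, line in enumerate(conf.read_text().splitlines(), 1):
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    seen[fields[1]].append(f"{conf}:{lineno}")  # fields[1] is the path column
    return {path: refs for path, refs in seen.items() if len(refs) > 1}

if __name__ == "__main__":
    for path, refs in duplicate_paths().items():
        print(path, "->", ", ".join(refs))
```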
Jul 11 00:36:11.789220 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 11 00:36:11.794843 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:36:11.795063 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:36:11.797271 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:36:11.799630 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:36:11.803082 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:36:11.804298 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:36:11.804408 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 11 00:36:11.807096 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 11 00:36:11.808180 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:36:11.809722 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:36:11.809988 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:36:11.811678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:36:11.811887 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:36:11.813723 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:36:11.813945 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:36:11.815835 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 11 00:36:11.826567 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:36:11.826793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:36:11.828392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:36:11.832090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:36:11.835266 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:36:11.836510 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:36:11.836627 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 11 00:36:11.841009 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 11 00:36:11.843427 systemd-udevd[1413]: Using default interface naming scheme 'v255'. Jul 11 00:36:11.843773 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:36:11.845111 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jul 11 00:36:11.846849 augenrules[1445]: No rules Jul 11 00:36:11.847126 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:36:11.849240 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:36:11.851303 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:36:11.851538 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 11 00:36:11.853279 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:36:11.853501 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:36:11.855724 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:36:11.856272 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:36:11.867589 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 11 00:36:11.869991 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 11 00:36:11.871889 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 11 00:36:11.882303 systemd[1]: Finished ensure-sysext.service. Jul 11 00:36:11.884687 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:36:11.888247 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:36:11.891045 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 11 00:36:11.892102 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:36:11.894059 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:36:11.900074 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:36:11.906086 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:36:11.908976 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:36:11.910103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:36:11.910150 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 11 00:36:11.913112 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 00:36:11.919189 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 11 00:36:11.920479 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:36:11.920510 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:36:11.925649 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:36:11.926871 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:36:11.928851 augenrules[1471]: /sbin/augenrules: No change Jul 11 00:36:11.928651 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:36:11.929158 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jul 11 00:36:11.931254 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:36:11.931491 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:36:11.934828 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:36:11.935573 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:36:11.940247 augenrules[1513]: No rules Jul 11 00:36:11.943124 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:36:11.943659 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 11 00:36:11.953647 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:36:11.953709 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:36:11.974767 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 11 00:36:11.994001 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 00:36:11.998089 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 11 00:36:12.031935 kernel: mousedev: PS/2 mouse device common for all mice Jul 11 00:36:12.034938 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 11 00:36:12.037265 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 11 00:36:12.044966 kernel: ACPI: button: Power Button [PWRF] Jul 11 00:36:12.073964 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 11 00:36:12.074224 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 11 00:36:12.075584 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 11 00:36:12.094738 systemd-resolved[1411]: Positive Trust Anchors: Jul 11 00:36:12.094759 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:36:12.094791 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:36:12.098443 systemd-resolved[1411]: Defaulting to hostname 'linux'. Jul 11 00:36:12.100985 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:36:12.102247 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:36:12.138962 systemd-networkd[1492]: lo: Link UP Jul 11 00:36:12.138973 systemd-networkd[1492]: lo: Gained carrier Jul 11 00:36:12.142031 systemd-networkd[1492]: Enumeration completed Jul 11 00:36:12.142415 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:36:12.142420 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:36:12.142504 systemd[1]: Started systemd-networkd.service - Network Configuration. 
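The negative trust anchors listed by systemd-resolved above include the reverse zones for the private address ranges, so DNSSEC validation is skipped there. A small standard-library check; the CIDR ranges are the well-known equivalents of those reverse zones, not values taken from this log:

    import ipaddress

    # 10.in-addr.arpa, 16.172...31.172.in-addr.arpa and 168.192.in-addr.arpa correspond
    # to the private ranges below; resolved disables DNSSEC validation inside them.
    private_nets = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
    addr = ipaddress.ip_address("10.0.0.141")   # the DHCPv4 address eth0 acquires later
    print(any(addr in net for net in private_nets))  # True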
Jul 11 00:36:12.144055 systemd[1]: Reached target network.target - Network. Jul 11 00:36:12.146282 systemd-networkd[1492]: eth0: Link UP Jul 11 00:36:12.146429 systemd-networkd[1492]: eth0: Gained carrier Jul 11 00:36:12.146443 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:36:12.148355 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 11 00:36:12.151153 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 11 00:36:12.168226 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 11 00:36:12.169508 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:36:12.170957 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 11 00:36:12.172931 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 11 00:36:12.174159 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 11 00:36:12.175285 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 11 00:36:12.176541 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 00:36:12.176566 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:36:12.177461 systemd[1]: Reached target time-set.target - System Time Set. Jul 11 00:36:12.178616 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 11 00:36:12.180078 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 11 00:36:12.181298 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:36:12.182920 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 11 00:36:12.186585 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 11 00:36:12.190657 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 11 00:36:12.192019 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 11 00:36:12.193785 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 11 00:36:12.194972 systemd-networkd[1492]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:36:12.196387 systemd-timesyncd[1498]: Network configuration changed, trying to establish connection. Jul 11 00:36:14.048652 systemd-timesyncd[1498]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 11 00:36:14.048706 systemd-timesyncd[1498]: Initial clock synchronization to Fri 2025-07-11 00:36:14.048584 UTC. Jul 11 00:36:14.053066 systemd-resolved[1411]: Clock change detected. Flushing caches. Jul 11 00:36:14.054351 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 11 00:36:14.056471 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 11 00:36:14.062362 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 11 00:36:14.063985 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 11 00:36:14.070767 systemd[1]: Reached target sockets.target - Socket Units. 
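Around the timesyncd entries above, the journal timestamps jump from 00:36:12.19 to 00:36:14.04 and resolved reports a clock change. A quick sketch of that apparent step, using the two timestamps copied from the adjacent lines; the real time elapsed between the two writes is unknown, so this is only an upper bound:

    from datetime import datetime

    before_sync = datetime.fromisoformat("2025-07-11 00:36:12.196387")  # last entry before sync
    after_sync  = datetime.fromisoformat("2025-07-11 00:36:14.048652")  # first entry after sync
    step = (after_sync - before_sync).total_seconds()
    print(f"apparent clock step at synchronization: ~{step:.3f} s")  # ~1.852 s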
Jul 11 00:36:14.072045 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:36:14.073081 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:36:14.073108 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:36:14.075425 systemd[1]: Starting containerd.service - containerd container runtime... Jul 11 00:36:14.077793 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 11 00:36:14.080471 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 11 00:36:14.083176 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 11 00:36:14.094269 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 11 00:36:14.097303 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 11 00:36:14.098410 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 11 00:36:14.100818 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 11 00:36:14.105257 jq[1557]: false Jul 11 00:36:14.104395 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 11 00:36:14.109617 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 11 00:36:14.112852 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 11 00:36:14.120260 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing passwd entry cache Jul 11 00:36:14.118806 oslogin_cache_refresh[1561]: Refreshing passwd entry cache Jul 11 00:36:14.122578 extend-filesystems[1560]: Found /dev/vda6 Jul 11 00:36:14.125446 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 11 00:36:14.129545 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting users, quitting Jul 11 00:36:14.129545 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 11 00:36:14.129545 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing group entry cache Jul 11 00:36:14.128230 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:36:14.127696 oslogin_cache_refresh[1561]: Failure getting users, quitting Jul 11 00:36:14.128863 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 00:36:14.127710 oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 11 00:36:14.129483 systemd[1]: Starting update-engine.service - Update Engine... Jul 11 00:36:14.127751 oslogin_cache_refresh[1561]: Refreshing group entry cache Jul 11 00:36:14.133748 extend-filesystems[1560]: Found /dev/vda9 Jul 11 00:36:14.133353 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 11 00:36:14.138393 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting groups, quitting Jul 11 00:36:14.138393 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jul 11 00:36:14.132702 oslogin_cache_refresh[1561]: Failure getting groups, quitting Jul 11 00:36:14.132712 oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 11 00:36:14.147457 extend-filesystems[1560]: Checking size of /dev/vda9 Jul 11 00:36:14.148021 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 11 00:36:14.158013 jq[1581]: true Jul 11 00:36:14.159376 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:36:14.164482 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 11 00:36:14.164880 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 11 00:36:14.165128 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 11 00:36:14.166492 extend-filesystems[1560]: Resized partition /dev/vda9 Jul 11 00:36:14.170657 kernel: kvm_amd: TSC scaling supported Jul 11 00:36:14.170683 kernel: kvm_amd: Nested Virtualization enabled Jul 11 00:36:14.170696 kernel: kvm_amd: Nested Paging enabled Jul 11 00:36:14.170710 kernel: kvm_amd: LBR virtualization supported Jul 11 00:36:14.170737 extend-filesystems[1590]: resize2fs 1.47.2 (1-Jan-2025) Jul 11 00:36:14.174917 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 11 00:36:14.174955 kernel: kvm_amd: Virtual GIF supported Jul 11 00:36:14.174979 update_engine[1579]: I20250711 00:36:14.170290 1579 main.cc:92] Flatcar Update Engine starting Jul 11 00:36:14.168802 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:36:14.169074 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 11 00:36:14.177992 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 00:36:14.177304 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 00:36:14.177900 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 11 00:36:14.198284 jq[1592]: true Jul 11 00:36:14.205722 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 00:36:14.210620 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 00:36:14.221567 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:36:14.228155 extend-filesystems[1590]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:36:14.228155 extend-filesystems[1590]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:36:14.228155 extend-filesystems[1590]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 00:36:14.248074 extend-filesystems[1560]: Resized filesystem in /dev/vda9 Jul 11 00:36:14.232570 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:36:14.234025 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 11 00:36:14.249968 tar[1591]: linux-amd64/LICENSE Jul 11 00:36:14.250193 tar[1591]: linux-amd64/helm Jul 11 00:36:14.267407 systemd-logind[1578]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 11 00:36:14.269907 systemd-logind[1578]: New seat seat0. Jul 11 00:36:14.270366 bash[1623]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:36:14.273513 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 00:36:14.277215 systemd[1]: Started systemd-logind.service - User Login Management. 
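The resize above grows /dev/vda9 from 553472 to 1864699 blocks of 4 KiB. Converting those block counts to sizes is simple arithmetic on the values already in the log:

    BLOCK_SIZE = 4096                       # "(4k) blocks" per the extend-filesystems output
    old_blocks, new_blocks = 553_472, 1_864_699

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(old_blocks):.2f} GiB, after: {gib(new_blocks):.2f} GiB")  # ~2.11 -> ~7.11 GiB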
Jul 11 00:36:14.279513 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 00:36:14.280186 dbus-daemon[1555]: [system] SELinux support is enabled Jul 11 00:36:14.282357 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 11 00:36:14.286292 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:36:14.286667 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 00:36:14.288601 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:36:14.288614 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 11 00:36:14.291284 update_engine[1579]: I20250711 00:36:14.291215 1579 update_check_scheduler.cc:74] Next update check in 11m58s Jul 11 00:36:14.293884 systemd-logind[1578]: Watching system buttons on /dev/input/event2 (Power Button) Jul 11 00:36:14.302040 dbus-daemon[1555]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 11 00:36:14.305957 sshd_keygen[1585]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:36:14.313569 systemd[1]: Started update-engine.service - Update Engine. Jul 11 00:36:14.317509 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 11 00:36:14.328299 kernel: EDAC MC: Ver: 3.0.0 Jul 11 00:36:14.356989 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 00:36:14.359548 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 00:36:14.371396 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:36:14.377444 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:36:14.377846 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 00:36:14.383078 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 11 00:36:14.386565 locksmithd[1639]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:36:14.402086 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 00:36:14.405532 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 00:36:14.409618 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 11 00:36:14.410920 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 11 00:36:14.447230 containerd[1593]: time="2025-07-11T00:36:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 11 00:36:14.448100 containerd[1593]: time="2025-07-11T00:36:14.448057063Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 11 00:36:14.456412 containerd[1593]: time="2025-07-11T00:36:14.456359369Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.808µs" Jul 11 00:36:14.456412 containerd[1593]: time="2025-07-11T00:36:14.456400396Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 11 00:36:14.456467 containerd[1593]: time="2025-07-11T00:36:14.456418821Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 11 00:36:14.456646 containerd[1593]: time="2025-07-11T00:36:14.456620559Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 11 00:36:14.456646 containerd[1593]: time="2025-07-11T00:36:14.456639895Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 11 00:36:14.456700 containerd[1593]: time="2025-07-11T00:36:14.456666355Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 11 00:36:14.456746 containerd[1593]: time="2025-07-11T00:36:14.456726728Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 11 00:36:14.456746 containerd[1593]: time="2025-07-11T00:36:14.456741125Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 11 00:36:14.457062 containerd[1593]: time="2025-07-11T00:36:14.457032041Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 11 00:36:14.457062 containerd[1593]: time="2025-07-11T00:36:14.457051167Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 11 00:36:14.457062 containerd[1593]: time="2025-07-11T00:36:14.457060975Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 11 00:36:14.457124 containerd[1593]: time="2025-07-11T00:36:14.457069401Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 11 00:36:14.457169 containerd[1593]: time="2025-07-11T00:36:14.457151966Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 11 00:36:14.457429 containerd[1593]: time="2025-07-11T00:36:14.457400893Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 11 00:36:14.457463 containerd[1593]: time="2025-07-11T00:36:14.457436179Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jul 11 00:36:14.457463 containerd[1593]: time="2025-07-11T00:36:14.457446799Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 11 00:36:14.457501 containerd[1593]: time="2025-07-11T00:36:14.457483889Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 11 00:36:14.458146 containerd[1593]: time="2025-07-11T00:36:14.458117017Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 11 00:36:14.458493 containerd[1593]: time="2025-07-11T00:36:14.458452466Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:36:14.495618 containerd[1593]: time="2025-07-11T00:36:14.495587824Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 11 00:36:14.495690 containerd[1593]: time="2025-07-11T00:36:14.495631046Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 11 00:36:14.495690 containerd[1593]: time="2025-07-11T00:36:14.495646014Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 11 00:36:14.495690 containerd[1593]: time="2025-07-11T00:36:14.495658307Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 11 00:36:14.495690 containerd[1593]: time="2025-07-11T00:36:14.495671471Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 11 00:36:14.495690 containerd[1593]: time="2025-07-11T00:36:14.495683284Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 11 00:36:14.495779 containerd[1593]: time="2025-07-11T00:36:14.495694114Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 11 00:36:14.495779 containerd[1593]: time="2025-07-11T00:36:14.495707779Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 11 00:36:14.495779 containerd[1593]: time="2025-07-11T00:36:14.495718069Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 11 00:36:14.495779 containerd[1593]: time="2025-07-11T00:36:14.495727236Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 11 00:36:14.495779 containerd[1593]: time="2025-07-11T00:36:14.495736433Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 11 00:36:14.495779 containerd[1593]: time="2025-07-11T00:36:14.495748245Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 11 00:36:14.495885 containerd[1593]: time="2025-07-11T00:36:14.495862400Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 11 00:36:14.495905 containerd[1593]: time="2025-07-11T00:36:14.495886535Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 11 00:36:14.495905 containerd[1593]: time="2025-07-11T00:36:14.495900701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 11 00:36:14.495945 containerd[1593]: time="2025-07-11T00:36:14.495911191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Jul 11 00:36:14.495945 containerd[1593]: time="2025-07-11T00:36:14.495922062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 11 00:36:14.495945 containerd[1593]: time="2025-07-11T00:36:14.495932571Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 11 00:36:14.495945 containerd[1593]: time="2025-07-11T00:36:14.495943792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 11 00:36:14.496027 containerd[1593]: time="2025-07-11T00:36:14.495953931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 11 00:36:14.496027 containerd[1593]: time="2025-07-11T00:36:14.495965042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 11 00:36:14.496027 containerd[1593]: time="2025-07-11T00:36:14.495979609Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 11 00:36:14.496027 containerd[1593]: time="2025-07-11T00:36:14.495997413Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 11 00:36:14.496096 containerd[1593]: time="2025-07-11T00:36:14.496057706Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 11 00:36:14.496096 containerd[1593]: time="2025-07-11T00:36:14.496070199Z" level=info msg="Start snapshots syncer" Jul 11 00:36:14.496096 containerd[1593]: time="2025-07-11T00:36:14.496093092Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 11 00:36:14.496350 containerd[1593]: time="2025-07-11T00:36:14.496308406Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 11 00:36:14.496458 containerd[1593]: time="2025-07-11T00:36:14.496367537Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 11 00:36:14.497134 containerd[1593]: time="2025-07-11T00:36:14.497103799Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 11 00:36:14.497254 containerd[1593]: time="2025-07-11T00:36:14.497203325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 11 00:36:14.497346 containerd[1593]: time="2025-07-11T00:36:14.497231077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 11 00:36:14.497422 containerd[1593]: time="2025-07-11T00:36:14.497392370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 11 00:36:14.497422 containerd[1593]: time="2025-07-11T00:36:14.497409953Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 11 00:36:14.497476 containerd[1593]: time="2025-07-11T00:36:14.497423098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 11 00:36:14.497476 containerd[1593]: time="2025-07-11T00:36:14.497440931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 11 00:36:14.497476 containerd[1593]: time="2025-07-11T00:36:14.497451541Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 11 00:36:14.497544 containerd[1593]: time="2025-07-11T00:36:14.497477309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 11 00:36:14.497544 containerd[1593]: 
time="2025-07-11T00:36:14.497489041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 11 00:36:14.497544 containerd[1593]: time="2025-07-11T00:36:14.497499631Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 11 00:36:14.497544 containerd[1593]: time="2025-07-11T00:36:14.497540428Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 11 00:36:14.497617 containerd[1593]: time="2025-07-11T00:36:14.497552881Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 11 00:36:14.497617 containerd[1593]: time="2025-07-11T00:36:14.497561517Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 11 00:36:14.497617 containerd[1593]: time="2025-07-11T00:36:14.497569883Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 11 00:36:14.497617 containerd[1593]: time="2025-07-11T00:36:14.497577718Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 11 00:36:14.497617 containerd[1593]: time="2025-07-11T00:36:14.497586414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 11 00:36:14.497617 containerd[1593]: time="2025-07-11T00:36:14.497596723Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 11 00:36:14.497617 containerd[1593]: time="2025-07-11T00:36:14.497614437Z" level=info msg="runtime interface created" Jul 11 00:36:14.497617 containerd[1593]: time="2025-07-11T00:36:14.497619987Z" level=info msg="created NRI interface" Jul 11 00:36:14.497758 containerd[1593]: time="2025-07-11T00:36:14.497627962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 11 00:36:14.497758 containerd[1593]: time="2025-07-11T00:36:14.497638502Z" level=info msg="Connect containerd service" Jul 11 00:36:14.497758 containerd[1593]: time="2025-07-11T00:36:14.497660022Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 00:36:14.498430 containerd[1593]: time="2025-07-11T00:36:14.498405130Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:36:14.585740 containerd[1593]: time="2025-07-11T00:36:14.585691205Z" level=info msg="Start subscribing containerd event" Jul 11 00:36:14.585941 containerd[1593]: time="2025-07-11T00:36:14.585913191Z" level=info msg="Start recovering state" Jul 11 00:36:14.586091 containerd[1593]: time="2025-07-11T00:36:14.585818985Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:36:14.586151 containerd[1593]: time="2025-07-11T00:36:14.586053454Z" level=info msg="Start event monitor" Jul 11 00:36:14.586215 containerd[1593]: time="2025-07-11T00:36:14.586153823Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 11 00:36:14.586292 containerd[1593]: time="2025-07-11T00:36:14.586190091Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:36:14.586292 containerd[1593]: time="2025-07-11T00:36:14.586226589Z" level=info msg="Start streaming server" Jul 11 00:36:14.586292 containerd[1593]: time="2025-07-11T00:36:14.586261645Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 11 00:36:14.586292 containerd[1593]: time="2025-07-11T00:36:14.586271183Z" level=info msg="runtime interface starting up..." Jul 11 00:36:14.586292 containerd[1593]: time="2025-07-11T00:36:14.586279188Z" level=info msg="starting plugins..." Jul 11 00:36:14.586398 containerd[1593]: time="2025-07-11T00:36:14.586302271Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 11 00:36:14.586545 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:36:14.586697 containerd[1593]: time="2025-07-11T00:36:14.586675241Z" level=info msg="containerd successfully booted in 0.139977s" Jul 11 00:36:14.698421 tar[1591]: linux-amd64/README.md Jul 11 00:36:14.717946 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 00:36:15.924466 systemd-networkd[1492]: eth0: Gained IPv6LL Jul 11 00:36:15.927438 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 00:36:15.929335 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:36:15.931897 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 00:36:15.934484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:36:15.946834 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:36:15.965081 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:36:15.965442 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:36:15.967490 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:36:15.971569 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:36:16.651537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:36:16.653622 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 00:36:16.655328 systemd[1]: Startup finished in 3.508s (kernel) + 56.535s (initrd) + 4.691s (userspace) = 1min 4.735s. Jul 11 00:36:16.663670 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:36:17.076073 kubelet[1703]: E0711 00:36:17.076005 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:36:17.079423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:36:17.079619 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:36:17.079993 systemd[1]: kubelet.service: Consumed 974ms CPU time, 265.3M memory peak. Jul 11 00:36:18.203337 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
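The "Startup finished" summary above splits the 1min 4.735s boot into kernel, initrd and userspace stages. A small parser for that line, with the stage times copied verbatim; systemd rounds each component to milliseconds, so the sum can differ from the printed total by a millisecond:

    import re

    line = ("Startup finished in 3.508s (kernel) + 56.535s (initrd) "
            "+ 4.691s (userspace) = 1min 4.735s.")
    stages = {name: float(val) for val, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
    print(stages)                                 # {'kernel': 3.508, 'initrd': 56.535, 'userspace': 4.691}
    print(f"sum = {sum(stages.values()):.3f} s")  # 64.734 s, i.e. the reported 1min 4.735s after rounding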
Jul 11 00:36:18.204506 systemd[1]: Started sshd@0-10.0.0.141:22-10.0.0.1:52290.service - OpenSSH per-connection server daemon (10.0.0.1:52290). Jul 11 00:36:18.262511 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 52290 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:36:18.264256 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:36:18.270176 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 00:36:18.271194 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 00:36:18.277963 systemd-logind[1578]: New session 1 of user core. Jul 11 00:36:18.290905 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 00:36:18.293982 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 00:36:18.309498 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:36:18.311565 systemd-logind[1578]: New session c1 of user core. Jul 11 00:36:18.463897 systemd[1720]: Queued start job for default target default.target. Jul 11 00:36:18.471497 systemd[1720]: Created slice app.slice - User Application Slice. Jul 11 00:36:18.471525 systemd[1720]: Reached target paths.target - Paths. Jul 11 00:36:18.471573 systemd[1720]: Reached target timers.target - Timers. Jul 11 00:36:18.472985 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 00:36:18.482920 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 00:36:18.482987 systemd[1720]: Reached target sockets.target - Sockets. Jul 11 00:36:18.483029 systemd[1720]: Reached target basic.target - Basic System. Jul 11 00:36:18.483079 systemd[1720]: Reached target default.target - Main User Target. Jul 11 00:36:18.483117 systemd[1720]: Startup finished in 165ms. Jul 11 00:36:18.483363 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 00:36:18.484833 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 00:36:18.554255 systemd[1]: Started sshd@1-10.0.0.141:22-10.0.0.1:52300.service - OpenSSH per-connection server daemon (10.0.0.1:52300). Jul 11 00:36:18.615043 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 52300 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:36:18.616472 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:36:18.620360 systemd-logind[1578]: New session 2 of user core. Jul 11 00:36:18.630376 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 00:36:18.682358 sshd[1733]: Connection closed by 10.0.0.1 port 52300 Jul 11 00:36:18.682594 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Jul 11 00:36:18.694498 systemd[1]: sshd@1-10.0.0.141:22-10.0.0.1:52300.service: Deactivated successfully. Jul 11 00:36:18.695977 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:36:18.696658 systemd-logind[1578]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:36:18.698997 systemd[1]: Started sshd@2-10.0.0.141:22-10.0.0.1:52310.service - OpenSSH per-connection server daemon (10.0.0.1:52310). Jul 11 00:36:18.699558 systemd-logind[1578]: Removed session 2. 
Jul 11 00:36:18.743931 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 52310 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:36:18.745276 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:36:18.749123 systemd-logind[1578]: New session 3 of user core. Jul 11 00:36:18.761353 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 00:36:18.810023 sshd[1742]: Connection closed by 10.0.0.1 port 52310 Jul 11 00:36:18.810275 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jul 11 00:36:18.822514 systemd[1]: sshd@2-10.0.0.141:22-10.0.0.1:52310.service: Deactivated successfully. Jul 11 00:36:18.823922 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:36:18.824647 systemd-logind[1578]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:36:18.827288 systemd[1]: Started sshd@3-10.0.0.141:22-10.0.0.1:52318.service - OpenSSH per-connection server daemon (10.0.0.1:52318). Jul 11 00:36:18.827882 systemd-logind[1578]: Removed session 3. Jul 11 00:36:18.876868 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 52318 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:36:18.878104 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:36:18.882468 systemd-logind[1578]: New session 4 of user core. Jul 11 00:36:18.894347 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:36:18.947322 sshd[1750]: Connection closed by 10.0.0.1 port 52318 Jul 11 00:36:18.947624 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Jul 11 00:36:18.959739 systemd[1]: sshd@3-10.0.0.141:22-10.0.0.1:52318.service: Deactivated successfully. Jul 11 00:36:18.961392 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:36:18.962177 systemd-logind[1578]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:36:18.964794 systemd[1]: Started sshd@4-10.0.0.141:22-10.0.0.1:52330.service - OpenSSH per-connection server daemon (10.0.0.1:52330). Jul 11 00:36:18.965499 systemd-logind[1578]: Removed session 4. Jul 11 00:36:19.015748 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 52330 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:36:19.017164 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:36:19.021353 systemd-logind[1578]: New session 5 of user core. Jul 11 00:36:19.032383 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 00:36:19.089757 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:36:19.090050 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:36:19.113956 sudo[1759]: pam_unix(sudo:session): session closed for user root Jul 11 00:36:19.115325 sshd[1758]: Connection closed by 10.0.0.1 port 52330 Jul 11 00:36:19.115709 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Jul 11 00:36:19.137452 systemd[1]: sshd@4-10.0.0.141:22-10.0.0.1:52330.service: Deactivated successfully. Jul 11 00:36:19.139314 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:36:19.140046 systemd-logind[1578]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:36:19.143054 systemd[1]: Started sshd@5-10.0.0.141:22-10.0.0.1:52342.service - OpenSSH per-connection server daemon (10.0.0.1:52342). 
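The sshd "Accepted publickey" entries above all follow one format; a regex sketch for pulling user, source and port out of such lines, with the sample string copied from one of the logins above:

    import re

    pattern = re.compile(r"Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) (\S+)")
    sample = ("Accepted publickey for core from 10.0.0.1 port 52318 ssh2: "
              "RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ")
    m = pattern.search(sample)
    if m:
        user, src, port, key_type, fingerprint = m.groups()
        print(f"{user} logged in from {src}:{port} with an {key_type} key ({fingerprint[:14]}...)")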
Jul 11 00:36:19.143659 systemd-logind[1578]: Removed session 5. Jul 11 00:36:19.193878 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 52342 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:36:19.195171 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:36:19.199168 systemd-logind[1578]: New session 6 of user core. Jul 11 00:36:19.219346 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 00:36:19.271989 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:36:19.272305 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:36:19.758382 sudo[1769]: pam_unix(sudo:session): session closed for user root Jul 11 00:36:19.765927 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 11 00:36:19.766320 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:36:19.776225 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 11 00:36:19.832549 augenrules[1791]: No rules Jul 11 00:36:19.834191 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:36:19.834500 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 11 00:36:19.835836 sudo[1768]: pam_unix(sudo:session): session closed for user root Jul 11 00:36:19.837430 sshd[1767]: Connection closed by 10.0.0.1 port 52342 Jul 11 00:36:19.837820 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jul 11 00:36:19.856488 systemd[1]: sshd@5-10.0.0.141:22-10.0.0.1:52342.service: Deactivated successfully. Jul 11 00:36:19.858442 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:36:19.859219 systemd-logind[1578]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:36:19.862910 systemd[1]: Started sshd@6-10.0.0.141:22-10.0.0.1:52358.service - OpenSSH per-connection server daemon (10.0.0.1:52358). Jul 11 00:36:19.863492 systemd-logind[1578]: Removed session 6. Jul 11 00:36:19.917984 sshd[1800]: Accepted publickey for core from 10.0.0.1 port 52358 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:36:19.919792 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:36:19.924118 systemd-logind[1578]: New session 7 of user core. Jul 11 00:36:19.933451 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 00:36:19.985813 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:36:19.986120 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:36:20.284286 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 00:36:20.297541 (dockerd)[1823]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:36:20.512973 dockerd[1823]: time="2025-07-11T00:36:20.512899911Z" level=info msg="Starting up" Jul 11 00:36:20.514428 dockerd[1823]: time="2025-07-11T00:36:20.514395908Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 11 00:36:22.713561 dockerd[1823]: time="2025-07-11T00:36:22.713501447Z" level=info msg="Loading containers: start." 
Jul 11 00:36:22.932296 kernel: Initializing XFRM netlink socket Jul 11 00:36:23.175435 systemd-networkd[1492]: docker0: Link UP Jul 11 00:36:23.181161 dockerd[1823]: time="2025-07-11T00:36:23.181106417Z" level=info msg="Loading containers: done." Jul 11 00:36:23.193796 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck408574413-merged.mount: Deactivated successfully. Jul 11 00:36:23.195706 dockerd[1823]: time="2025-07-11T00:36:23.195662606Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:36:23.195772 dockerd[1823]: time="2025-07-11T00:36:23.195743608Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 11 00:36:23.195869 dockerd[1823]: time="2025-07-11T00:36:23.195852843Z" level=info msg="Initializing buildkit" Jul 11 00:36:23.223900 dockerd[1823]: time="2025-07-11T00:36:23.223838025Z" level=info msg="Completed buildkit initialization" Jul 11 00:36:23.229434 dockerd[1823]: time="2025-07-11T00:36:23.229395601Z" level=info msg="Daemon has completed initialization" Jul 11 00:36:23.229575 dockerd[1823]: time="2025-07-11T00:36:23.229454392Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:36:23.229609 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 00:36:23.892552 containerd[1593]: time="2025-07-11T00:36:23.892502447Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 11 00:36:24.577079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2419619971.mount: Deactivated successfully. Jul 11 00:36:25.646628 containerd[1593]: time="2025-07-11T00:36:25.646563902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:25.647427 containerd[1593]: time="2025-07-11T00:36:25.647369182Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 11 00:36:25.648615 containerd[1593]: time="2025-07-11T00:36:25.648579753Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:25.651169 containerd[1593]: time="2025-07-11T00:36:25.651122083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:25.652061 containerd[1593]: time="2025-07-11T00:36:25.652023053Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.759473047s" Jul 11 00:36:25.652061 containerd[1593]: time="2025-07-11T00:36:25.652060183Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 11 00:36:25.652765 containerd[1593]: time="2025-07-11T00:36:25.652729719Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 11 
00:36:26.825912 containerd[1593]: time="2025-07-11T00:36:26.825861588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:26.826810 containerd[1593]: time="2025-07-11T00:36:26.826756357Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 11 00:36:26.828258 containerd[1593]: time="2025-07-11T00:36:26.828220273Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:26.830701 containerd[1593]: time="2025-07-11T00:36:26.830670129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:26.831542 containerd[1593]: time="2025-07-11T00:36:26.831502561Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.178743998s" Jul 11 00:36:26.831588 containerd[1593]: time="2025-07-11T00:36:26.831540653Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 11 00:36:26.832052 containerd[1593]: time="2025-07-11T00:36:26.832008841Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 11 00:36:27.272140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:36:27.273717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:36:27.485085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:36:27.489848 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:36:27.581070 kubelet[2102]: E0711 00:36:27.580783 2102 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:36:27.586970 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:36:27.587164 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:36:27.587543 systemd[1]: kubelet.service: Consumed 218ms CPU time, 111.4M memory peak. 
Jul 11 00:36:29.318704 containerd[1593]: time="2025-07-11T00:36:29.318642533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:29.319454 containerd[1593]: time="2025-07-11T00:36:29.319383694Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 11 00:36:29.320511 containerd[1593]: time="2025-07-11T00:36:29.320482195Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:29.322957 containerd[1593]: time="2025-07-11T00:36:29.322900171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:29.323703 containerd[1593]: time="2025-07-11T00:36:29.323668282Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 2.491626379s" Jul 11 00:36:29.323736 containerd[1593]: time="2025-07-11T00:36:29.323701654Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 11 00:36:29.324199 containerd[1593]: time="2025-07-11T00:36:29.324164142Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 11 00:36:30.616484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3161602217.mount: Deactivated successfully. 
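The mount unit names above (var-lib-containerd-tmpmounts-containerd\x2dmount…) use systemd's path escaping, where "/" becomes "-" and a literal "-" becomes "\x2d". A decoder sketch that turns such a unit name back into its mount point, roughly what systemd-escape --unescape --path does:

    def unescape_mount_unit(unit: str) -> str:
        """Turn e.g. 'var-lib-containerd-tmpmounts-containerd\\x2dmount3161602217.mount'
        back into '/var/lib/containerd/tmpmounts/containerd-mount3161602217'."""
        name = unit[:-len(".mount")] if unit.endswith(".mount") else unit
        # '-' separates path components; literal dashes were escaped as \x2d.
        parts = [p.encode().decode("unicode_escape") for p in name.split("-")]
        return "/" + "/".join(parts)

    print(unescape_mount_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount3161602217.mount"))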
Jul 11 00:36:31.115545 containerd[1593]: time="2025-07-11T00:36:31.115478906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:31.116692 containerd[1593]: time="2025-07-11T00:36:31.116668108Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 11 00:36:31.117733 containerd[1593]: time="2025-07-11T00:36:31.117697368Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:31.119639 containerd[1593]: time="2025-07-11T00:36:31.119577085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:31.120057 containerd[1593]: time="2025-07-11T00:36:31.120019926Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.795828773s" Jul 11 00:36:31.120057 containerd[1593]: time="2025-07-11T00:36:31.120049080Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 11 00:36:31.120518 containerd[1593]: time="2025-07-11T00:36:31.120481973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 00:36:32.085275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3507386347.mount: Deactivated successfully. 
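From the kube-proxy pull above, a back-of-the-envelope throughput figure: image size divided by the reported pull time. It ignores decompression and any layers already cached, so treat it as a ballpark only:

    size_bytes = 30_894_382        # size reported for registry.k8s.io/kube-proxy:v1.32.6
    duration_s = 1.795828773       # "in 1.795828773s"
    print(f"~{size_bytes / duration_s / 2**20:.1f} MiB/s")  # ~16.4 MiB/s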
Jul 11 00:36:33.038383 containerd[1593]: time="2025-07-11T00:36:33.038312650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:33.039070 containerd[1593]: time="2025-07-11T00:36:33.039047418Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 11 00:36:33.040555 containerd[1593]: time="2025-07-11T00:36:33.040510724Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:33.044592 containerd[1593]: time="2025-07-11T00:36:33.044553358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:33.045442 containerd[1593]: time="2025-07-11T00:36:33.045416657Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.924739468s" Jul 11 00:36:33.045484 containerd[1593]: time="2025-07-11T00:36:33.045442496Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 11 00:36:33.046056 containerd[1593]: time="2025-07-11T00:36:33.045849489Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:36:33.519759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571308482.mount: Deactivated successfully. 
Jul 11 00:36:33.525468 containerd[1593]: time="2025-07-11T00:36:33.525430387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:36:33.526156 containerd[1593]: time="2025-07-11T00:36:33.526136953Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 11 00:36:33.527314 containerd[1593]: time="2025-07-11T00:36:33.527277673Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:36:33.529284 containerd[1593]: time="2025-07-11T00:36:33.529227501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:36:33.529749 containerd[1593]: time="2025-07-11T00:36:33.529694056Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 483.819741ms" Jul 11 00:36:33.529749 containerd[1593]: time="2025-07-11T00:36:33.529728521Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 11 00:36:33.530201 containerd[1593]: time="2025-07-11T00:36:33.530173806Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 11 00:36:36.658815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2317067605.mount: Deactivated successfully. Jul 11 00:36:37.772153 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 11 00:36:37.775366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:36:37.952302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:36:37.956160 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:36:37.996267 kubelet[2239]: E0711 00:36:37.996094 2239 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:36:37.999185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:36:37.999388 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:36:37.999737 systemd[1]: kubelet.service: Consumed 204ms CPU time, 109.1M memory peak. 
Jul 11 00:36:39.097876 containerd[1593]: time="2025-07-11T00:36:39.097825709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:39.098590 containerd[1593]: time="2025-07-11T00:36:39.098543425Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 11 00:36:39.099815 containerd[1593]: time="2025-07-11T00:36:39.099750239Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:39.102319 containerd[1593]: time="2025-07-11T00:36:39.102283883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:39.103269 containerd[1593]: time="2025-07-11T00:36:39.103215170Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.573016167s" Jul 11 00:36:39.103269 containerd[1593]: time="2025-07-11T00:36:39.103266176Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 11 00:36:41.228641 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:36:41.228813 systemd[1]: kubelet.service: Consumed 204ms CPU time, 109.1M memory peak. Jul 11 00:36:41.230937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:36:41.253622 systemd[1]: Reload requested from client PID 2278 ('systemctl') (unit session-7.scope)... Jul 11 00:36:41.253637 systemd[1]: Reloading... Jul 11 00:36:41.336265 zram_generator::config[2327]: No configuration found. Jul 11 00:36:41.551618 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:36:41.667074 systemd[1]: Reloading finished in 413 ms. Jul 11 00:36:41.724887 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 00:36:41.724988 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 00:36:41.725288 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:36:41.725327 systemd[1]: kubelet.service: Consumed 142ms CPU time, 98.3M memory peak. Jul 11 00:36:41.726834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:36:41.915289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:36:41.926525 (kubelet)[2369]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:36:41.961722 kubelet[2369]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:36:41.961722 kubelet[2369]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 11 00:36:41.961722 kubelet[2369]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:36:41.962085 kubelet[2369]: I0711 00:36:41.961808 2369 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:36:42.165484 kubelet[2369]: I0711 00:36:42.165389 2369 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 00:36:42.165484 kubelet[2369]: I0711 00:36:42.165420 2369 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:36:42.165713 kubelet[2369]: I0711 00:36:42.165682 2369 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 00:36:42.187120 kubelet[2369]: E0711 00:36:42.187078 2369 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:36:42.187704 kubelet[2369]: I0711 00:36:42.187650 2369 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:36:42.194350 kubelet[2369]: I0711 00:36:42.194331 2369 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 00:36:42.199409 kubelet[2369]: I0711 00:36:42.199391 2369 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:36:42.201376 kubelet[2369]: I0711 00:36:42.201326 2369 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:36:42.201573 kubelet[2369]: I0711 00:36:42.201367 2369 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:36:42.201679 kubelet[2369]: I0711 00:36:42.201580 2369 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:36:42.201679 kubelet[2369]: I0711 00:36:42.201591 2369 container_manager_linux.go:304] "Creating device plugin manager" Jul 11 00:36:42.201740 kubelet[2369]: I0711 00:36:42.201723 2369 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:36:42.204309 kubelet[2369]: I0711 00:36:42.204277 2369 kubelet.go:446] "Attempting to sync node with API server" Jul 11 00:36:42.204309 kubelet[2369]: I0711 00:36:42.204304 2369 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:36:42.204393 kubelet[2369]: I0711 00:36:42.204327 2369 kubelet.go:352] "Adding apiserver pod source" Jul 11 00:36:42.204393 kubelet[2369]: I0711 00:36:42.204338 2369 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:36:42.207064 kubelet[2369]: I0711 00:36:42.206581 2369 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 11 00:36:42.207064 kubelet[2369]: I0711 00:36:42.206937 2369 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:36:42.207788 kubelet[2369]: W0711 00:36:42.207758 2369 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 11 00:36:42.208634 kubelet[2369]: W0711 00:36:42.208575 2369 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Jul 11 00:36:42.208687 kubelet[2369]: E0711 00:36:42.208632 2369 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:36:42.209204 kubelet[2369]: W0711 00:36:42.209165 2369 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Jul 11 00:36:42.209258 kubelet[2369]: E0711 00:36:42.209208 2369 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:36:42.209880 kubelet[2369]: I0711 00:36:42.209853 2369 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:36:42.209932 kubelet[2369]: I0711 00:36:42.209897 2369 server.go:1287] "Started kubelet" Jul 11 00:36:42.212412 kubelet[2369]: I0711 00:36:42.211176 2369 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:36:42.213060 kubelet[2369]: I0711 00:36:42.212526 2369 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:36:42.213060 kubelet[2369]: I0711 00:36:42.211645 2369 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:36:42.213060 kubelet[2369]: I0711 00:36:42.211585 2369 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:36:42.213060 kubelet[2369]: I0711 00:36:42.212827 2369 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:36:42.213060 kubelet[2369]: I0711 00:36:42.212145 2369 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:36:42.214003 kubelet[2369]: E0711 00:36:42.213592 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:36:42.214003 kubelet[2369]: I0711 00:36:42.213686 2369 server.go:479] "Adding debug handlers to kubelet server" Jul 11 00:36:42.214132 kubelet[2369]: I0711 00:36:42.214114 2369 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:36:42.214207 kubelet[2369]: I0711 00:36:42.214189 2369 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:36:42.214959 kubelet[2369]: E0711 00:36:42.214459 2369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.141:6443: connect: connection refused" interval="200ms" Jul 11 00:36:42.214959 kubelet[2369]: I0711 00:36:42.214528 2369 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:36:42.214959 kubelet[2369]: I0711 00:36:42.214592 2369 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:36:42.214959 kubelet[2369]: W0711 00:36:42.214872 2369 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Jul 11 00:36:42.214959 kubelet[2369]: E0711 00:36:42.214910 2369 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:36:42.215728 kubelet[2369]: E0711 00:36:42.215705 2369 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:36:42.215854 kubelet[2369]: I0711 00:36:42.215828 2369 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:36:42.216483 kubelet[2369]: E0711 00:36:42.215316 2369 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.141:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.141:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510b57e837d304 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:36:42.2098665 +0000 UTC m=+0.279910934,LastTimestamp:2025-07-11 00:36:42.2098665 +0000 UTC m=+0.279910934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:36:42.227223 kubelet[2369]: I0711 00:36:42.227023 2369 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:36:42.227223 kubelet[2369]: I0711 00:36:42.227041 2369 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:36:42.227223 kubelet[2369]: I0711 00:36:42.227055 2369 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:36:42.229578 kubelet[2369]: I0711 00:36:42.229445 2369 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:36:42.230647 kubelet[2369]: I0711 00:36:42.230627 2369 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 00:36:42.230687 kubelet[2369]: I0711 00:36:42.230660 2369 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 00:36:42.230687 kubelet[2369]: I0711 00:36:42.230680 2369 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 11 00:36:42.230687 kubelet[2369]: I0711 00:36:42.230688 2369 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 00:36:42.230916 kubelet[2369]: E0711 00:36:42.230890 2369 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:36:42.314317 kubelet[2369]: E0711 00:36:42.314293 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:36:42.331614 kubelet[2369]: E0711 00:36:42.331581 2369 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:36:42.414929 kubelet[2369]: E0711 00:36:42.414892 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:36:42.415436 kubelet[2369]: E0711 00:36:42.415388 2369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="400ms" Jul 11 00:36:42.515717 kubelet[2369]: E0711 00:36:42.515497 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:36:42.531859 kubelet[2369]: E0711 00:36:42.531819 2369 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:36:42.616160 kubelet[2369]: E0711 00:36:42.616131 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:36:42.627734 kubelet[2369]: I0711 00:36:42.627698 2369 policy_none.go:49] "None policy: Start" Jul 11 00:36:42.627734 kubelet[2369]: I0711 00:36:42.627718 2369 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:36:42.627734 kubelet[2369]: I0711 00:36:42.627730 2369 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:36:42.627936 kubelet[2369]: W0711 00:36:42.627852 2369 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Jul 11 00:36:42.627965 kubelet[2369]: E0711 00:36:42.627928 2369 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:36:42.633572 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 11 00:36:42.650832 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 00:36:42.654108 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 11 00:36:42.672095 kubelet[2369]: I0711 00:36:42.672052 2369 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:36:42.672419 kubelet[2369]: I0711 00:36:42.672256 2369 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:36:42.672419 kubelet[2369]: I0711 00:36:42.672268 2369 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:36:42.672419 kubelet[2369]: I0711 00:36:42.672419 2369 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:36:42.673162 kubelet[2369]: E0711 00:36:42.673127 2369 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 11 00:36:42.673221 kubelet[2369]: E0711 00:36:42.673180 2369 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:36:42.773972 kubelet[2369]: I0711 00:36:42.773930 2369 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:36:42.774480 kubelet[2369]: E0711 00:36:42.774452 2369 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" Jul 11 00:36:42.815811 kubelet[2369]: E0711 00:36:42.815766 2369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="800ms" Jul 11 00:36:42.940027 systemd[1]: Created slice kubepods-burstable-pod7c27aa26940363c9a2ed7be1d411b7a8.slice - libcontainer container kubepods-burstable-pod7c27aa26940363c9a2ed7be1d411b7a8.slice. Jul 11 00:36:42.956064 kubelet[2369]: E0711 00:36:42.956030 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:36:42.958388 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 11 00:36:42.974429 kubelet[2369]: E0711 00:36:42.974406 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:36:42.975196 kubelet[2369]: I0711 00:36:42.975171 2369 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:36:42.975716 kubelet[2369]: E0711 00:36:42.975580 2369 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" Jul 11 00:36:42.976199 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 11 00:36:42.977951 kubelet[2369]: E0711 00:36:42.977922 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:36:43.019285 kubelet[2369]: I0711 00:36:43.019253 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:36:43.019336 kubelet[2369]: I0711 00:36:43.019284 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:36:43.019336 kubelet[2369]: I0711 00:36:43.019304 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:36:43.019336 kubelet[2369]: I0711 00:36:43.019320 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:36:43.019399 kubelet[2369]: I0711 00:36:43.019338 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c27aa26940363c9a2ed7be1d411b7a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7c27aa26940363c9a2ed7be1d411b7a8\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:36:43.019399 kubelet[2369]: I0711 00:36:43.019353 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c27aa26940363c9a2ed7be1d411b7a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7c27aa26940363c9a2ed7be1d411b7a8\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:36:43.019399 kubelet[2369]: I0711 00:36:43.019372 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c27aa26940363c9a2ed7be1d411b7a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7c27aa26940363c9a2ed7be1d411b7a8\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:36:43.019471 kubelet[2369]: I0711 00:36:43.019411 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:36:43.019471 kubelet[2369]: I0711 00:36:43.019466 2369 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:36:43.089681 kubelet[2369]: W0711 00:36:43.089596 2369 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Jul 11 00:36:43.089681 kubelet[2369]: E0711 00:36:43.089644 2369 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:36:43.110319 kubelet[2369]: W0711 00:36:43.110290 2369 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Jul 11 00:36:43.110369 kubelet[2369]: E0711 00:36:43.110321 2369 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:36:43.257727 containerd[1593]: time="2025-07-11T00:36:43.257691775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7c27aa26940363c9a2ed7be1d411b7a8,Namespace:kube-system,Attempt:0,}" Jul 11 00:36:43.275410 containerd[1593]: time="2025-07-11T00:36:43.275370551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 11 00:36:43.278368 containerd[1593]: time="2025-07-11T00:36:43.278332799Z" level=info msg="connecting to shim 252df47e4b938e4f38bc51bd0350bef884f160f9c81c515587ace3982d3f23b1" address="unix:///run/containerd/s/f14014a429439923a9c5c4465da994a97776f1ae4cf9b6cc32006c7aaf1e603e" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:36:43.279155 containerd[1593]: time="2025-07-11T00:36:43.279128481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 11 00:36:43.301472 systemd[1]: Started cri-containerd-252df47e4b938e4f38bc51bd0350bef884f160f9c81c515587ace3982d3f23b1.scope - libcontainer container 252df47e4b938e4f38bc51bd0350bef884f160f9c81c515587ace3982d3f23b1. Jul 11 00:36:43.305962 containerd[1593]: time="2025-07-11T00:36:43.305917379Z" level=info msg="connecting to shim 38a7b08e266d547dd30ba4f6ddcad4f36c51c3e045ae22a3532fe252377e9458" address="unix:///run/containerd/s/9ff1908cfd4d7b6b3560887563342aa6f3583992e3538bbb86af3480899a8f97" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:36:43.333358 systemd[1]: Started cri-containerd-38a7b08e266d547dd30ba4f6ddcad4f36c51c3e045ae22a3532fe252377e9458.scope - libcontainer container 38a7b08e266d547dd30ba4f6ddcad4f36c51c3e045ae22a3532fe252377e9458. 
Jul 11 00:36:43.370678 containerd[1593]: time="2025-07-11T00:36:43.370567167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7c27aa26940363c9a2ed7be1d411b7a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"252df47e4b938e4f38bc51bd0350bef884f160f9c81c515587ace3982d3f23b1\"" Jul 11 00:36:43.374206 containerd[1593]: time="2025-07-11T00:36:43.374166089Z" level=info msg="CreateContainer within sandbox \"252df47e4b938e4f38bc51bd0350bef884f160f9c81c515587ace3982d3f23b1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 00:36:43.383466 containerd[1593]: time="2025-07-11T00:36:43.383427063Z" level=info msg="connecting to shim 3440093f0a541d38ee129fc7928aa7c4dac7796632cc413af9cf285dd2a7a498" address="unix:///run/containerd/s/c4f30220e78fadcbcb1156fa5a248d68097139933657591bb321957258e757d0" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:36:43.383576 kubelet[2369]: I0711 00:36:43.383542 2369 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:36:43.383883 kubelet[2369]: E0711 00:36:43.383861 2369 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" Jul 11 00:36:43.386744 containerd[1593]: time="2025-07-11T00:36:43.386654688Z" level=info msg="Container 69847aed63da9285e051d0050dc34355025226763600f35bbbd6d6decfa7a646: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:36:43.387330 containerd[1593]: time="2025-07-11T00:36:43.387306772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"38a7b08e266d547dd30ba4f6ddcad4f36c51c3e045ae22a3532fe252377e9458\"" Jul 11 00:36:43.389480 containerd[1593]: time="2025-07-11T00:36:43.389447157Z" level=info msg="CreateContainer within sandbox \"38a7b08e266d547dd30ba4f6ddcad4f36c51c3e045ae22a3532fe252377e9458\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 00:36:43.394158 containerd[1593]: time="2025-07-11T00:36:43.394118371Z" level=info msg="CreateContainer within sandbox \"252df47e4b938e4f38bc51bd0350bef884f160f9c81c515587ace3982d3f23b1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"69847aed63da9285e051d0050dc34355025226763600f35bbbd6d6decfa7a646\"" Jul 11 00:36:43.394806 containerd[1593]: time="2025-07-11T00:36:43.394772959Z" level=info msg="StartContainer for \"69847aed63da9285e051d0050dc34355025226763600f35bbbd6d6decfa7a646\"" Jul 11 00:36:43.395790 containerd[1593]: time="2025-07-11T00:36:43.395757997Z" level=info msg="connecting to shim 69847aed63da9285e051d0050dc34355025226763600f35bbbd6d6decfa7a646" address="unix:///run/containerd/s/f14014a429439923a9c5c4465da994a97776f1ae4cf9b6cc32006c7aaf1e603e" protocol=ttrpc version=3 Jul 11 00:36:43.401526 containerd[1593]: time="2025-07-11T00:36:43.401425529Z" level=info msg="Container 0f7e3da764ba1acaff0a9f4fb1e6109168ac3a1fe7aadca261b0086f2920f968: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:36:43.407374 systemd[1]: Started cri-containerd-3440093f0a541d38ee129fc7928aa7c4dac7796632cc413af9cf285dd2a7a498.scope - libcontainer container 3440093f0a541d38ee129fc7928aa7c4dac7796632cc413af9cf285dd2a7a498. 
Jul 11 00:36:43.410729 containerd[1593]: time="2025-07-11T00:36:43.410685381Z" level=info msg="CreateContainer within sandbox \"38a7b08e266d547dd30ba4f6ddcad4f36c51c3e045ae22a3532fe252377e9458\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0f7e3da764ba1acaff0a9f4fb1e6109168ac3a1fe7aadca261b0086f2920f968\"" Jul 11 00:36:43.411285 containerd[1593]: time="2025-07-11T00:36:43.411255541Z" level=info msg="StartContainer for \"0f7e3da764ba1acaff0a9f4fb1e6109168ac3a1fe7aadca261b0086f2920f968\"" Jul 11 00:36:43.412742 systemd[1]: Started cri-containerd-69847aed63da9285e051d0050dc34355025226763600f35bbbd6d6decfa7a646.scope - libcontainer container 69847aed63da9285e051d0050dc34355025226763600f35bbbd6d6decfa7a646. Jul 11 00:36:43.412810 containerd[1593]: time="2025-07-11T00:36:43.412746689Z" level=info msg="connecting to shim 0f7e3da764ba1acaff0a9f4fb1e6109168ac3a1fe7aadca261b0086f2920f968" address="unix:///run/containerd/s/9ff1908cfd4d7b6b3560887563342aa6f3583992e3538bbb86af3480899a8f97" protocol=ttrpc version=3 Jul 11 00:36:43.435407 systemd[1]: Started cri-containerd-0f7e3da764ba1acaff0a9f4fb1e6109168ac3a1fe7aadca261b0086f2920f968.scope - libcontainer container 0f7e3da764ba1acaff0a9f4fb1e6109168ac3a1fe7aadca261b0086f2920f968. Jul 11 00:36:43.457983 containerd[1593]: time="2025-07-11T00:36:43.457910369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3440093f0a541d38ee129fc7928aa7c4dac7796632cc413af9cf285dd2a7a498\"" Jul 11 00:36:43.460310 containerd[1593]: time="2025-07-11T00:36:43.460227646Z" level=info msg="CreateContainer within sandbox \"3440093f0a541d38ee129fc7928aa7c4dac7796632cc413af9cf285dd2a7a498\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 00:36:43.462990 kubelet[2369]: W0711 00:36:43.462892 2369 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Jul 11 00:36:43.462990 kubelet[2369]: E0711 00:36:43.462959 2369 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:36:43.470845 containerd[1593]: time="2025-07-11T00:36:43.470817233Z" level=info msg="StartContainer for \"69847aed63da9285e051d0050dc34355025226763600f35bbbd6d6decfa7a646\" returns successfully" Jul 11 00:36:43.473675 containerd[1593]: time="2025-07-11T00:36:43.473625311Z" level=info msg="Container c623c2df260be3c2b95345bef79e700eda98a0549d35c3bb12f75ac4ec7f4fd7: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:36:43.482051 containerd[1593]: time="2025-07-11T00:36:43.482009481Z" level=info msg="CreateContainer within sandbox \"3440093f0a541d38ee129fc7928aa7c4dac7796632cc413af9cf285dd2a7a498\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c623c2df260be3c2b95345bef79e700eda98a0549d35c3bb12f75ac4ec7f4fd7\"" Jul 11 00:36:43.483692 containerd[1593]: time="2025-07-11T00:36:43.483664315Z" level=info msg="StartContainer for \"c623c2df260be3c2b95345bef79e700eda98a0549d35c3bb12f75ac4ec7f4fd7\"" Jul 11 00:36:43.485054 containerd[1593]: 
time="2025-07-11T00:36:43.485013025Z" level=info msg="connecting to shim c623c2df260be3c2b95345bef79e700eda98a0549d35c3bb12f75ac4ec7f4fd7" address="unix:///run/containerd/s/c4f30220e78fadcbcb1156fa5a248d68097139933657591bb321957258e757d0" protocol=ttrpc version=3 Jul 11 00:36:43.488212 containerd[1593]: time="2025-07-11T00:36:43.488024024Z" level=info msg="StartContainer for \"0f7e3da764ba1acaff0a9f4fb1e6109168ac3a1fe7aadca261b0086f2920f968\" returns successfully" Jul 11 00:36:43.506382 systemd[1]: Started cri-containerd-c623c2df260be3c2b95345bef79e700eda98a0549d35c3bb12f75ac4ec7f4fd7.scope - libcontainer container c623c2df260be3c2b95345bef79e700eda98a0549d35c3bb12f75ac4ec7f4fd7. Jul 11 00:36:43.554600 containerd[1593]: time="2025-07-11T00:36:43.554534072Z" level=info msg="StartContainer for \"c623c2df260be3c2b95345bef79e700eda98a0549d35c3bb12f75ac4ec7f4fd7\" returns successfully" Jul 11 00:36:44.185944 kubelet[2369]: I0711 00:36:44.185880 2369 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:36:44.244279 kubelet[2369]: E0711 00:36:44.241545 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:36:44.247639 kubelet[2369]: E0711 00:36:44.247600 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:36:44.249920 kubelet[2369]: E0711 00:36:44.249886 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:36:44.463275 kubelet[2369]: E0711 00:36:44.462282 2369 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 00:36:44.549105 kubelet[2369]: I0711 00:36:44.549056 2369 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:36:44.549105 kubelet[2369]: E0711 00:36:44.549088 2369 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 11 00:36:44.558553 kubelet[2369]: E0711 00:36:44.558515 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:36:44.659009 kubelet[2369]: E0711 00:36:44.658956 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:36:44.759371 kubelet[2369]: E0711 00:36:44.759283 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:36:44.860224 kubelet[2369]: E0711 00:36:44.860186 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:36:44.961062 kubelet[2369]: E0711 00:36:44.961026 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:36:44.987574 kubelet[2369]: E0711 00:36:44.987431 2369 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18510b57e837d304 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:36:42.2098665 +0000 UTC m=+0.279910934,LastTimestamp:2025-07-11 00:36:42.2098665 +0000 UTC m=+0.279910934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:36:45.061518 kubelet[2369]: E0711 00:36:45.061468 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:36:45.161936 kubelet[2369]: E0711 00:36:45.161877 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:36:45.252203 kubelet[2369]: E0711 00:36:45.252124 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:36:45.252795 kubelet[2369]: E0711 00:36:45.252367 2369 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:36:45.262689 kubelet[2369]: E0711 00:36:45.262653 2369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:36:45.315420 kubelet[2369]: I0711 00:36:45.315304 2369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:36:45.320048 kubelet[2369]: E0711 00:36:45.320010 2369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 11 00:36:45.320048 kubelet[2369]: I0711 00:36:45.320037 2369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:36:45.321761 kubelet[2369]: E0711 00:36:45.321733 2369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 11 00:36:45.321761 kubelet[2369]: I0711 00:36:45.321752 2369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:36:45.323291 kubelet[2369]: E0711 00:36:45.323232 2369 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:36:46.206211 kubelet[2369]: I0711 00:36:46.206173 2369 apiserver.go:52] "Watching apiserver" Jul 11 00:36:46.215321 kubelet[2369]: I0711 00:36:46.215288 2369 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:36:46.252464 kubelet[2369]: I0711 00:36:46.252423 2369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:36:46.859111 systemd[1]: Reload requested from client PID 2639 ('systemctl') (unit session-7.scope)... Jul 11 00:36:46.859126 systemd[1]: Reloading... Jul 11 00:36:46.928276 zram_generator::config[2682]: No configuration found. Jul 11 00:36:47.019839 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 11 00:36:47.148853 systemd[1]: Reloading finished in 289 ms. Jul 11 00:36:47.179756 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:36:47.200625 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:36:47.200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:36:47.201008 systemd[1]: kubelet.service: Consumed 717ms CPU time, 131.6M memory peak. Jul 11 00:36:47.202959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:36:47.401612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:36:47.405528 (kubelet)[2727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:36:47.444712 kubelet[2727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:36:47.444712 kubelet[2727]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:36:47.444712 kubelet[2727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:36:47.445104 kubelet[2727]: I0711 00:36:47.444774 2727 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:36:47.453012 kubelet[2727]: I0711 00:36:47.452963 2727 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 00:36:47.453012 kubelet[2727]: I0711 00:36:47.452992 2727 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:36:47.453298 kubelet[2727]: I0711 00:36:47.453280 2727 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 00:36:47.454488 kubelet[2727]: I0711 00:36:47.454470 2727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 00:36:47.456688 kubelet[2727]: I0711 00:36:47.456641 2727 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:36:47.461259 kubelet[2727]: I0711 00:36:47.460542 2727 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 00:36:47.465817 kubelet[2727]: I0711 00:36:47.465786 2727 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:36:47.466031 kubelet[2727]: I0711 00:36:47.465989 2727 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:36:47.468009 kubelet[2727]: I0711 00:36:47.466020 2727 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:36:47.468009 kubelet[2727]: I0711 00:36:47.467858 2727 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:36:47.468009 kubelet[2727]: I0711 00:36:47.467882 2727 container_manager_linux.go:304] "Creating device plugin manager" Jul 11 00:36:47.468009 kubelet[2727]: I0711 00:36:47.467957 2727 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:36:47.468224 kubelet[2727]: I0711 00:36:47.468203 2727 kubelet.go:446] "Attempting to sync node with API server" Jul 11 00:36:47.468266 kubelet[2727]: I0711 00:36:47.468230 2727 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:36:47.468311 kubelet[2727]: I0711 00:36:47.468290 2727 kubelet.go:352] "Adding apiserver pod source" Jul 11 00:36:47.468311 kubelet[2727]: I0711 00:36:47.468310 2727 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:36:47.472258 kubelet[2727]: I0711 00:36:47.470096 2727 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 11 00:36:47.472258 kubelet[2727]: I0711 00:36:47.470728 2727 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:36:47.472258 kubelet[2727]: I0711 00:36:47.471388 2727 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:36:47.472258 kubelet[2727]: I0711 00:36:47.471464 2727 server.go:1287] "Started kubelet" Jul 11 00:36:47.476113 kubelet[2727]: I0711 00:36:47.476086 2727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:36:47.476760 kubelet[2727]: I0711 00:36:47.476742 2727 volume_manager.go:297] "Starting 
Kubelet Volume Manager" Jul 11 00:36:47.476999 kubelet[2727]: I0711 00:36:47.476988 2727 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:36:47.477171 kubelet[2727]: I0711 00:36:47.477161 2727 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:36:47.477628 kubelet[2727]: E0711 00:36:47.477612 2727 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:36:47.477720 kubelet[2727]: I0711 00:36:47.477702 2727 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:36:47.478724 kubelet[2727]: I0711 00:36:47.478711 2727 server.go:479] "Adding debug handlers to kubelet server" Jul 11 00:36:47.479956 kubelet[2727]: I0711 00:36:47.479923 2727 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:36:47.480100 kubelet[2727]: I0711 00:36:47.480063 2727 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:36:47.480203 kubelet[2727]: I0711 00:36:47.480175 2727 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:36:47.480403 kubelet[2727]: I0711 00:36:47.480383 2727 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:36:47.481007 kubelet[2727]: I0711 00:36:47.480016 2727 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:36:47.481936 kubelet[2727]: I0711 00:36:47.481910 2727 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:36:47.483951 kubelet[2727]: E0711 00:36:47.483934 2727 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:36:47.491379 kubelet[2727]: I0711 00:36:47.491323 2727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:36:47.493026 kubelet[2727]: I0711 00:36:47.492992 2727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 00:36:47.493078 kubelet[2727]: I0711 00:36:47.493019 2727 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 00:36:47.493078 kubelet[2727]: I0711 00:36:47.493070 2727 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 11 00:36:47.493078 kubelet[2727]: I0711 00:36:47.493077 2727 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 00:36:47.493159 kubelet[2727]: E0711 00:36:47.493137 2727 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:36:47.520805 kubelet[2727]: I0711 00:36:47.520771 2727 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:36:47.520805 kubelet[2727]: I0711 00:36:47.520789 2727 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:36:47.520805 kubelet[2727]: I0711 00:36:47.520810 2727 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:36:47.520989 kubelet[2727]: I0711 00:36:47.520951 2727 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:36:47.520989 kubelet[2727]: I0711 00:36:47.520960 2727 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:36:47.520989 kubelet[2727]: I0711 00:36:47.520977 2727 policy_none.go:49] "None policy: Start" Jul 11 00:36:47.520989 kubelet[2727]: I0711 00:36:47.520985 2727 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:36:47.521070 kubelet[2727]: I0711 00:36:47.520995 2727 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:36:47.521115 kubelet[2727]: I0711 00:36:47.521100 2727 state_mem.go:75] "Updated machine memory state" Jul 11 00:36:47.525103 kubelet[2727]: I0711 00:36:47.524972 2727 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:36:47.525260 kubelet[2727]: I0711 00:36:47.525150 2727 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:36:47.525260 kubelet[2727]: I0711 00:36:47.525175 2727 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:36:47.525577 kubelet[2727]: I0711 00:36:47.525548 2727 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:36:47.526479 kubelet[2727]: E0711 00:36:47.526460 2727 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 11 00:36:47.594629 kubelet[2727]: I0711 00:36:47.594590 2727 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:36:47.594760 kubelet[2727]: I0711 00:36:47.594693 2727 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:36:47.594869 kubelet[2727]: I0711 00:36:47.594603 2727 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:36:47.601436 kubelet[2727]: E0711 00:36:47.601397 2727 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:36:47.627141 kubelet[2727]: I0711 00:36:47.627112 2727 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:36:47.634357 kubelet[2727]: I0711 00:36:47.634306 2727 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 11 00:36:47.634434 kubelet[2727]: I0711 00:36:47.634386 2727 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:36:47.777854 kubelet[2727]: I0711 00:36:47.777814 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c27aa26940363c9a2ed7be1d411b7a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7c27aa26940363c9a2ed7be1d411b7a8\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:36:47.777854 kubelet[2727]: I0711 00:36:47.777850 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c27aa26940363c9a2ed7be1d411b7a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7c27aa26940363c9a2ed7be1d411b7a8\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:36:47.778004 kubelet[2727]: I0711 00:36:47.777875 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c27aa26940363c9a2ed7be1d411b7a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7c27aa26940363c9a2ed7be1d411b7a8\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:36:47.778004 kubelet[2727]: I0711 00:36:47.777909 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:36:47.778004 kubelet[2727]: I0711 00:36:47.777933 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:36:47.778004 kubelet[2727]: I0711 00:36:47.777949 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 
11 00:36:47.778004 kubelet[2727]: I0711 00:36:47.777966 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:36:47.778124 kubelet[2727]: I0711 00:36:47.777983 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:36:47.778124 kubelet[2727]: I0711 00:36:47.777998 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:36:48.469776 kubelet[2727]: I0711 00:36:48.469723 2727 apiserver.go:52] "Watching apiserver" Jul 11 00:36:48.477127 kubelet[2727]: I0711 00:36:48.477101 2727 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:36:48.505941 kubelet[2727]: I0711 00:36:48.505795 2727 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:36:48.505989 kubelet[2727]: I0711 00:36:48.505964 2727 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:36:48.510693 kubelet[2727]: E0711 00:36:48.510674 2727 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:36:48.510925 kubelet[2727]: E0711 00:36:48.510892 2727 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:36:48.521631 kubelet[2727]: I0711 00:36:48.521566 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.521551808 podStartE2EDuration="1.521551808s" podCreationTimestamp="2025-07-11 00:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:36:48.521519265 +0000 UTC m=+1.112053768" watchObservedRunningTime="2025-07-11 00:36:48.521551808 +0000 UTC m=+1.112086311" Jul 11 00:36:48.527196 kubelet[2727]: I0711 00:36:48.527129 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.527115749 podStartE2EDuration="1.527115749s" podCreationTimestamp="2025-07-11 00:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:36:48.527104557 +0000 UTC m=+1.117639060" watchObservedRunningTime="2025-07-11 00:36:48.527115749 +0000 UTC m=+1.117650242" Jul 11 00:36:48.538349 kubelet[2727]: I0711 00:36:48.538303 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=2.538287023 podStartE2EDuration="2.538287023s" podCreationTimestamp="2025-07-11 00:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:36:48.532533046 +0000 UTC m=+1.123067549" watchObservedRunningTime="2025-07-11 00:36:48.538287023 +0000 UTC m=+1.128821526" Jul 11 00:36:53.765616 kubelet[2727]: I0711 00:36:53.765571 2727 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:36:53.766063 containerd[1593]: time="2025-07-11T00:36:53.765948740Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:36:53.766318 kubelet[2727]: I0711 00:36:53.766109 2727 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:36:54.553474 systemd[1]: Created slice kubepods-besteffort-pod9a498d44_5904_4d88_abf9_739f77a0adf1.slice - libcontainer container kubepods-besteffort-pod9a498d44_5904_4d88_abf9_739f77a0adf1.slice. Jul 11 00:36:54.622052 kubelet[2727]: I0711 00:36:54.622004 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a498d44-5904-4d88-abf9-739f77a0adf1-xtables-lock\") pod \"kube-proxy-6jztp\" (UID: \"9a498d44-5904-4d88-abf9-739f77a0adf1\") " pod="kube-system/kube-proxy-6jztp" Jul 11 00:36:54.622179 kubelet[2727]: I0711 00:36:54.622053 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvd6c\" (UniqueName: \"kubernetes.io/projected/9a498d44-5904-4d88-abf9-739f77a0adf1-kube-api-access-wvd6c\") pod \"kube-proxy-6jztp\" (UID: \"9a498d44-5904-4d88-abf9-739f77a0adf1\") " pod="kube-system/kube-proxy-6jztp" Jul 11 00:36:54.622179 kubelet[2727]: I0711 00:36:54.622082 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9a498d44-5904-4d88-abf9-739f77a0adf1-kube-proxy\") pod \"kube-proxy-6jztp\" (UID: \"9a498d44-5904-4d88-abf9-739f77a0adf1\") " pod="kube-system/kube-proxy-6jztp" Jul 11 00:36:54.622179 kubelet[2727]: I0711 00:36:54.622101 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a498d44-5904-4d88-abf9-739f77a0adf1-lib-modules\") pod \"kube-proxy-6jztp\" (UID: \"9a498d44-5904-4d88-abf9-739f77a0adf1\") " pod="kube-system/kube-proxy-6jztp" Jul 11 00:36:54.666278 systemd[1]: Created slice kubepods-besteffort-pode4e7e47e_aee9_4fff_b3d5_4a38643aa104.slice - libcontainer container kubepods-besteffort-pode4e7e47e_aee9_4fff_b3d5_4a38643aa104.slice. 
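
At this point the node has been handed its pod CIDR: the runtime config update through CRI and the kubelet_network entry above both carry 192.168.0.0/24, while the CNI config itself is still absent and is expected to be dropped in later by Calico. A quick look at what a per-node /24 allocation provides, in plain Python with the value copied from the log:

    import ipaddress

    # Pod CIDR pushed to the container runtime above.
    cidr = ipaddress.ip_network("192.168.0.0/24")
    print(cidr.num_addresses)         # 256 addresses in the block
    print(f"{cidr[1]} - {cidr[-2]}")  # 192.168.0.1 - 192.168.0.254
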
Jul 11 00:36:54.722909 kubelet[2727]: I0711 00:36:54.722738 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47czk\" (UniqueName: \"kubernetes.io/projected/e4e7e47e-aee9-4fff-b3d5-4a38643aa104-kube-api-access-47czk\") pod \"tigera-operator-747864d56d-cb7jv\" (UID: \"e4e7e47e-aee9-4fff-b3d5-4a38643aa104\") " pod="tigera-operator/tigera-operator-747864d56d-cb7jv" Jul 11 00:36:54.722909 kubelet[2727]: I0711 00:36:54.722786 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e4e7e47e-aee9-4fff-b3d5-4a38643aa104-var-lib-calico\") pod \"tigera-operator-747864d56d-cb7jv\" (UID: \"e4e7e47e-aee9-4fff-b3d5-4a38643aa104\") " pod="tigera-operator/tigera-operator-747864d56d-cb7jv" Jul 11 00:36:54.864137 containerd[1593]: time="2025-07-11T00:36:54.864045768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6jztp,Uid:9a498d44-5904-4d88-abf9-739f77a0adf1,Namespace:kube-system,Attempt:0,}" Jul 11 00:36:54.883555 containerd[1593]: time="2025-07-11T00:36:54.883506845Z" level=info msg="connecting to shim 1bb40d40958e3a6e71feb71fb47d57d49b6fac9fde6d3d0291edc033f16fe79b" address="unix:///run/containerd/s/03d64f2ce2c8bdc500192b46675325edefadbf6a46047a6f3bce4cd2187fe1ff" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:36:54.918376 systemd[1]: Started cri-containerd-1bb40d40958e3a6e71feb71fb47d57d49b6fac9fde6d3d0291edc033f16fe79b.scope - libcontainer container 1bb40d40958e3a6e71feb71fb47d57d49b6fac9fde6d3d0291edc033f16fe79b. Jul 11 00:36:54.944992 containerd[1593]: time="2025-07-11T00:36:54.944945659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6jztp,Uid:9a498d44-5904-4d88-abf9-739f77a0adf1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bb40d40958e3a6e71feb71fb47d57d49b6fac9fde6d3d0291edc033f16fe79b\"" Jul 11 00:36:54.948008 containerd[1593]: time="2025-07-11T00:36:54.947974623Z" level=info msg="CreateContainer within sandbox \"1bb40d40958e3a6e71feb71fb47d57d49b6fac9fde6d3d0291edc033f16fe79b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:36:54.960117 containerd[1593]: time="2025-07-11T00:36:54.958824629Z" level=info msg="Container 44dc53b377b77a1588db84b5bdcb8720d71ed332588240cc85fc2f5ca2af1c62: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:36:54.968426 containerd[1593]: time="2025-07-11T00:36:54.968389781Z" level=info msg="CreateContainer within sandbox \"1bb40d40958e3a6e71feb71fb47d57d49b6fac9fde6d3d0291edc033f16fe79b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"44dc53b377b77a1588db84b5bdcb8720d71ed332588240cc85fc2f5ca2af1c62\"" Jul 11 00:36:54.969159 containerd[1593]: time="2025-07-11T00:36:54.969009585Z" level=info msg="StartContainer for \"44dc53b377b77a1588db84b5bdcb8720d71ed332588240cc85fc2f5ca2af1c62\"" Jul 11 00:36:54.970304 containerd[1593]: time="2025-07-11T00:36:54.970273939Z" level=info msg="connecting to shim 44dc53b377b77a1588db84b5bdcb8720d71ed332588240cc85fc2f5ca2af1c62" address="unix:///run/containerd/s/03d64f2ce2c8bdc500192b46675325edefadbf6a46047a6f3bce4cd2187fe1ff" protocol=ttrpc version=3 Jul 11 00:36:54.971160 containerd[1593]: time="2025-07-11T00:36:54.971117439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-cb7jv,Uid:e4e7e47e-aee9-4fff-b3d5-4a38643aa104,Namespace:tigera-operator,Attempt:0,}" Jul 11 00:36:54.991357 containerd[1593]: 
time="2025-07-11T00:36:54.991313730Z" level=info msg="connecting to shim 61dcff7faf21552ee4141c4351e442c8e9f3a9e30fcf8249597dbbc20351984c" address="unix:///run/containerd/s/7d64a136afc24198e77cfdb00bf4bb442904a469eb8d5cf435335fe72b9a38fe" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:36:54.992712 systemd[1]: Started cri-containerd-44dc53b377b77a1588db84b5bdcb8720d71ed332588240cc85fc2f5ca2af1c62.scope - libcontainer container 44dc53b377b77a1588db84b5bdcb8720d71ed332588240cc85fc2f5ca2af1c62. Jul 11 00:36:55.021398 systemd[1]: Started cri-containerd-61dcff7faf21552ee4141c4351e442c8e9f3a9e30fcf8249597dbbc20351984c.scope - libcontainer container 61dcff7faf21552ee4141c4351e442c8e9f3a9e30fcf8249597dbbc20351984c. Jul 11 00:36:55.046335 containerd[1593]: time="2025-07-11T00:36:55.046271878Z" level=info msg="StartContainer for \"44dc53b377b77a1588db84b5bdcb8720d71ed332588240cc85fc2f5ca2af1c62\" returns successfully" Jul 11 00:36:55.066569 containerd[1593]: time="2025-07-11T00:36:55.066456374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-cb7jv,Uid:e4e7e47e-aee9-4fff-b3d5-4a38643aa104,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"61dcff7faf21552ee4141c4351e442c8e9f3a9e30fcf8249597dbbc20351984c\"" Jul 11 00:36:55.068706 containerd[1593]: time="2025-07-11T00:36:55.068688321Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 11 00:36:55.734169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2010849390.mount: Deactivated successfully. Jul 11 00:36:56.455560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3672055036.mount: Deactivated successfully. Jul 11 00:36:57.419414 containerd[1593]: time="2025-07-11T00:36:57.419360591Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:57.420292 containerd[1593]: time="2025-07-11T00:36:57.420270062Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 11 00:36:57.422810 containerd[1593]: time="2025-07-11T00:36:57.422758619Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:57.425640 containerd[1593]: time="2025-07-11T00:36:57.425581081Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:36:57.426136 containerd[1593]: time="2025-07-11T00:36:57.426107664Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.357335042s" Jul 11 00:36:57.426174 containerd[1593]: time="2025-07-11T00:36:57.426136669Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 11 00:36:57.428704 containerd[1593]: time="2025-07-11T00:36:57.428544793Z" level=info msg="CreateContainer within sandbox \"61dcff7faf21552ee4141c4351e442c8e9f3a9e30fcf8249597dbbc20351984c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 11 00:36:57.437735 
containerd[1593]: time="2025-07-11T00:36:57.437679840Z" level=info msg="Container dd33790e3d678dbc3916d956a0d8a09363e189415362f6c37ea2dd4d8457771a: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:36:57.441610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2840277464.mount: Deactivated successfully. Jul 11 00:36:57.444002 containerd[1593]: time="2025-07-11T00:36:57.443968260Z" level=info msg="CreateContainer within sandbox \"61dcff7faf21552ee4141c4351e442c8e9f3a9e30fcf8249597dbbc20351984c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dd33790e3d678dbc3916d956a0d8a09363e189415362f6c37ea2dd4d8457771a\"" Jul 11 00:36:57.444438 containerd[1593]: time="2025-07-11T00:36:57.444331873Z" level=info msg="StartContainer for \"dd33790e3d678dbc3916d956a0d8a09363e189415362f6c37ea2dd4d8457771a\"" Jul 11 00:36:57.445152 containerd[1593]: time="2025-07-11T00:36:57.445115394Z" level=info msg="connecting to shim dd33790e3d678dbc3916d956a0d8a09363e189415362f6c37ea2dd4d8457771a" address="unix:///run/containerd/s/7d64a136afc24198e77cfdb00bf4bb442904a469eb8d5cf435335fe72b9a38fe" protocol=ttrpc version=3 Jul 11 00:36:57.492369 systemd[1]: Started cri-containerd-dd33790e3d678dbc3916d956a0d8a09363e189415362f6c37ea2dd4d8457771a.scope - libcontainer container dd33790e3d678dbc3916d956a0d8a09363e189415362f6c37ea2dd4d8457771a. Jul 11 00:36:57.523927 containerd[1593]: time="2025-07-11T00:36:57.523886731Z" level=info msg="StartContainer for \"dd33790e3d678dbc3916d956a0d8a09363e189415362f6c37ea2dd4d8457771a\" returns successfully" Jul 11 00:36:58.567788 kubelet[2727]: I0711 00:36:58.567512 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6jztp" podStartSLOduration=4.567491871 podStartE2EDuration="4.567491871s" podCreationTimestamp="2025-07-11 00:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:36:55.525554396 +0000 UTC m=+8.116088899" watchObservedRunningTime="2025-07-11 00:36:58.567491871 +0000 UTC m=+11.158026374" Jul 11 00:36:59.922210 update_engine[1579]: I20250711 00:36:59.922143 1579 update_attempter.cc:509] Updating boot flags... Jul 11 00:37:01.794959 kubelet[2727]: I0711 00:37:01.794691 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-cb7jv" podStartSLOduration=5.435446194 podStartE2EDuration="7.794674043s" podCreationTimestamp="2025-07-11 00:36:54 +0000 UTC" firstStartedPulling="2025-07-11 00:36:55.067828561 +0000 UTC m=+7.658363064" lastFinishedPulling="2025-07-11 00:36:57.42705641 +0000 UTC m=+10.017590913" observedRunningTime="2025-07-11 00:36:58.567797392 +0000 UTC m=+11.158331915" watchObservedRunningTime="2025-07-11 00:37:01.794674043 +0000 UTC m=+14.385208546" Jul 11 00:37:02.604080 sudo[1803]: pam_unix(sudo:session): session closed for user root Jul 11 00:37:02.606149 sshd[1802]: Connection closed by 10.0.0.1 port 52358 Jul 11 00:37:02.606666 sshd-session[1800]: pam_unix(sshd:session): session closed for user core Jul 11 00:37:02.613989 systemd[1]: sshd@6-10.0.0.141:22-10.0.0.1:52358.service: Deactivated successfully. Jul 11 00:37:02.616535 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:37:02.616744 systemd[1]: session-7.scope: Consumed 3.952s CPU time, 223.3M memory peak. Jul 11 00:37:02.617932 systemd-logind[1578]: Session 7 logged out. Waiting for processes to exit. 
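
The two durations in the tigera-operator startup entry above appear to differ by exactly the image pull window: podStartSLOduration looks like podStartE2EDuration minus the time between firstStartedPulling and lastFinishedPulling, which is also why the earlier entries with zeroed pull timestamps report identical values for both. A quick check with the numbers copied from that entry, in plain Python:

    # Values copied from the pod_startup_latency_tracker entry above.
    e2e         = 7.794674043   # podStartE2EDuration, seconds
    pull_start  = 55.067828561  # firstStartedPulling, seconds past 00:36
    pull_finish = 57.42705641   # lastFinishedPulling, seconds past 00:36

    pull = pull_finish - pull_start  # about 2.359 s spent pulling the image
    print(f"{e2e - pull:.9f}")       # 5.435446194, matching podStartSLOduration
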
Jul 11 00:37:02.619229 systemd-logind[1578]: Removed session 7. Jul 11 00:37:04.935533 systemd[1]: Created slice kubepods-besteffort-podc827450e_c7dc_4583_a0f9_49b131b37d2b.slice - libcontainer container kubepods-besteffort-podc827450e_c7dc_4583_a0f9_49b131b37d2b.slice. Jul 11 00:37:04.988615 kubelet[2727]: I0711 00:37:04.988568 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c827450e-c7dc-4583-a0f9-49b131b37d2b-tigera-ca-bundle\") pod \"calico-typha-75df764c-cwwvm\" (UID: \"c827450e-c7dc-4583-a0f9-49b131b37d2b\") " pod="calico-system/calico-typha-75df764c-cwwvm" Jul 11 00:37:04.988615 kubelet[2727]: I0711 00:37:04.988617 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk4kw\" (UniqueName: \"kubernetes.io/projected/c827450e-c7dc-4583-a0f9-49b131b37d2b-kube-api-access-hk4kw\") pod \"calico-typha-75df764c-cwwvm\" (UID: \"c827450e-c7dc-4583-a0f9-49b131b37d2b\") " pod="calico-system/calico-typha-75df764c-cwwvm" Jul 11 00:37:04.989039 kubelet[2727]: I0711 00:37:04.988637 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c827450e-c7dc-4583-a0f9-49b131b37d2b-typha-certs\") pod \"calico-typha-75df764c-cwwvm\" (UID: \"c827450e-c7dc-4583-a0f9-49b131b37d2b\") " pod="calico-system/calico-typha-75df764c-cwwvm" Jul 11 00:37:05.241073 containerd[1593]: time="2025-07-11T00:37:05.240841966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75df764c-cwwvm,Uid:c827450e-c7dc-4583-a0f9-49b131b37d2b,Namespace:calico-system,Attempt:0,}" Jul 11 00:37:05.364457 systemd[1]: Created slice kubepods-besteffort-podfc62d4c5_f2bb_4a40_967b_3246ac297843.slice - libcontainer container kubepods-besteffort-podfc62d4c5_f2bb_4a40_967b_3246ac297843.slice. 
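
Each reconciler_common entry above identifies a volume by a UniqueName of the form <plugin>/<pod-UID>-<volume-name>. A small parse of one value copied from the calico-typha entries, in plain Python; the fixed 36-character UID split simply reflects how these strings read in the log:

    # UniqueName copied from the tigera-ca-bundle entry above.
    unique = "kubernetes.io/configmap/c827450e-c7dc-4583-a0f9-49b131b37d2b-tigera-ca-bundle"

    plugin, rest = unique.rsplit("/", 1)
    pod_uid, volume = rest[:36], rest[37:]  # 36-char UID, then "-<volume name>"
    print(plugin)   # kubernetes.io/configmap
    print(pod_uid)  # c827450e-c7dc-4583-a0f9-49b131b37d2b
    print(volume)   # tigera-ca-bundle
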
Jul 11 00:37:05.375990 containerd[1593]: time="2025-07-11T00:37:05.375933045Z" level=info msg="connecting to shim aac3b74f835540892a98dfc29e9dfd8b55d738c4a9a09aef621d5d97e676b628" address="unix:///run/containerd/s/e90e1a56a14a7c41e8b3d5d052b9da355683fef4f55dac62aea02eafa39d820c" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:37:05.391683 kubelet[2727]: I0711 00:37:05.391470 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fc62d4c5-f2bb-4a40-967b-3246ac297843-cni-log-dir\") pod \"calico-node-j2gg7\" (UID: \"fc62d4c5-f2bb-4a40-967b-3246ac297843\") " pod="calico-system/calico-node-j2gg7" Jul 11 00:37:05.391683 kubelet[2727]: I0711 00:37:05.391515 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fc62d4c5-f2bb-4a40-967b-3246ac297843-var-run-calico\") pod \"calico-node-j2gg7\" (UID: \"fc62d4c5-f2bb-4a40-967b-3246ac297843\") " pod="calico-system/calico-node-j2gg7" Jul 11 00:37:05.391683 kubelet[2727]: I0711 00:37:05.391531 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc62d4c5-f2bb-4a40-967b-3246ac297843-lib-modules\") pod \"calico-node-j2gg7\" (UID: \"fc62d4c5-f2bb-4a40-967b-3246ac297843\") " pod="calico-system/calico-node-j2gg7" Jul 11 00:37:05.391683 kubelet[2727]: I0711 00:37:05.391545 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc62d4c5-f2bb-4a40-967b-3246ac297843-var-lib-calico\") pod \"calico-node-j2gg7\" (UID: \"fc62d4c5-f2bb-4a40-967b-3246ac297843\") " pod="calico-system/calico-node-j2gg7" Jul 11 00:37:05.391683 kubelet[2727]: I0711 00:37:05.391569 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fc62d4c5-f2bb-4a40-967b-3246ac297843-cni-bin-dir\") pod \"calico-node-j2gg7\" (UID: \"fc62d4c5-f2bb-4a40-967b-3246ac297843\") " pod="calico-system/calico-node-j2gg7" Jul 11 00:37:05.391891 kubelet[2727]: I0711 00:37:05.391583 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc62d4c5-f2bb-4a40-967b-3246ac297843-tigera-ca-bundle\") pod \"calico-node-j2gg7\" (UID: \"fc62d4c5-f2bb-4a40-967b-3246ac297843\") " pod="calico-system/calico-node-j2gg7" Jul 11 00:37:05.391891 kubelet[2727]: I0711 00:37:05.391596 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc62d4c5-f2bb-4a40-967b-3246ac297843-xtables-lock\") pod \"calico-node-j2gg7\" (UID: \"fc62d4c5-f2bb-4a40-967b-3246ac297843\") " pod="calico-system/calico-node-j2gg7" Jul 11 00:37:05.391891 kubelet[2727]: I0711 00:37:05.391612 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpm67\" (UniqueName: \"kubernetes.io/projected/fc62d4c5-f2bb-4a40-967b-3246ac297843-kube-api-access-dpm67\") pod \"calico-node-j2gg7\" (UID: \"fc62d4c5-f2bb-4a40-967b-3246ac297843\") " pod="calico-system/calico-node-j2gg7" Jul 11 00:37:05.391891 kubelet[2727]: I0711 00:37:05.391631 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fc62d4c5-f2bb-4a40-967b-3246ac297843-node-certs\") pod \"calico-node-j2gg7\" (UID: \"fc62d4c5-f2bb-4a40-967b-3246ac297843\") " pod="calico-system/calico-node-j2gg7" Jul 11 00:37:05.391891 kubelet[2727]: I0711 00:37:05.391645 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fc62d4c5-f2bb-4a40-967b-3246ac297843-policysync\") pod \"calico-node-j2gg7\" (UID: \"fc62d4c5-f2bb-4a40-967b-3246ac297843\") " pod="calico-system/calico-node-j2gg7" Jul 11 00:37:05.392000 kubelet[2727]: I0711 00:37:05.391661 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fc62d4c5-f2bb-4a40-967b-3246ac297843-cni-net-dir\") pod \"calico-node-j2gg7\" (UID: \"fc62d4c5-f2bb-4a40-967b-3246ac297843\") " pod="calico-system/calico-node-j2gg7" Jul 11 00:37:05.392000 kubelet[2727]: I0711 00:37:05.391674 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fc62d4c5-f2bb-4a40-967b-3246ac297843-flexvol-driver-host\") pod \"calico-node-j2gg7\" (UID: \"fc62d4c5-f2bb-4a40-967b-3246ac297843\") " pod="calico-system/calico-node-j2gg7" Jul 11 00:37:05.401380 systemd[1]: Started cri-containerd-aac3b74f835540892a98dfc29e9dfd8b55d738c4a9a09aef621d5d97e676b628.scope - libcontainer container aac3b74f835540892a98dfc29e9dfd8b55d738c4a9a09aef621d5d97e676b628. Jul 11 00:37:05.443980 containerd[1593]: time="2025-07-11T00:37:05.443931676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75df764c-cwwvm,Uid:c827450e-c7dc-4583-a0f9-49b131b37d2b,Namespace:calico-system,Attempt:0,} returns sandbox id \"aac3b74f835540892a98dfc29e9dfd8b55d738c4a9a09aef621d5d97e676b628\"" Jul 11 00:37:05.446252 containerd[1593]: time="2025-07-11T00:37:05.446159391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 11 00:37:05.494126 kubelet[2727]: E0711 00:37:05.493944 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.494126 kubelet[2727]: W0711 00:37:05.494026 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.494126 kubelet[2727]: E0711 00:37:05.494060 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.500276 kubelet[2727]: E0711 00:37:05.500249 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.500276 kubelet[2727]: W0711 00:37:05.500272 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.500356 kubelet[2727]: E0711 00:37:05.500288 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:05.500699 kubelet[2727]: E0711 00:37:05.500667 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.500822 kubelet[2727]: W0711 00:37:05.500734 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.500822 kubelet[2727]: E0711 00:37:05.500756 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.612746 kubelet[2727]: E0711 00:37:05.611969 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgrvh" podUID="3ff74661-174d-4a70-b02a-c30fd2606ef6" Jul 11 00:37:05.668816 containerd[1593]: time="2025-07-11T00:37:05.668754832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j2gg7,Uid:fc62d4c5-f2bb-4a40-967b-3246ac297843,Namespace:calico-system,Attempt:0,}" Jul 11 00:37:05.686226 kubelet[2727]: E0711 00:37:05.686193 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.686226 kubelet[2727]: W0711 00:37:05.686212 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.686471 kubelet[2727]: E0711 00:37:05.686230 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.686515 kubelet[2727]: E0711 00:37:05.686480 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.686515 kubelet[2727]: W0711 00:37:05.686488 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.686515 kubelet[2727]: E0711 00:37:05.686496 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.686708 kubelet[2727]: E0711 00:37:05.686683 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.686708 kubelet[2727]: W0711 00:37:05.686696 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.686708 kubelet[2727]: E0711 00:37:05.686706 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:05.686911 kubelet[2727]: E0711 00:37:05.686895 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.686911 kubelet[2727]: W0711 00:37:05.686906 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.686988 kubelet[2727]: E0711 00:37:05.686924 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.687120 kubelet[2727]: E0711 00:37:05.687109 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.687120 kubelet[2727]: W0711 00:37:05.687118 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.687186 kubelet[2727]: E0711 00:37:05.687125 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.687350 kubelet[2727]: E0711 00:37:05.687337 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.687350 kubelet[2727]: W0711 00:37:05.687346 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.687417 kubelet[2727]: E0711 00:37:05.687355 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.687596 kubelet[2727]: E0711 00:37:05.687560 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.687596 kubelet[2727]: W0711 00:37:05.687583 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.687650 kubelet[2727]: E0711 00:37:05.687608 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.687829 kubelet[2727]: E0711 00:37:05.687814 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.687829 kubelet[2727]: W0711 00:37:05.687824 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.687883 kubelet[2727]: E0711 00:37:05.687831 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:05.688077 kubelet[2727]: E0711 00:37:05.688055 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.688077 kubelet[2727]: W0711 00:37:05.688070 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.688136 kubelet[2727]: E0711 00:37:05.688082 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.689017 kubelet[2727]: E0711 00:37:05.688284 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.689017 kubelet[2727]: W0711 00:37:05.688295 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.689017 kubelet[2727]: E0711 00:37:05.688303 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.689017 kubelet[2727]: E0711 00:37:05.688469 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.689017 kubelet[2727]: W0711 00:37:05.688476 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.689017 kubelet[2727]: E0711 00:37:05.688483 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.689017 kubelet[2727]: E0711 00:37:05.688688 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.689017 kubelet[2727]: W0711 00:37:05.688695 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.689017 kubelet[2727]: E0711 00:37:05.688703 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.689017 kubelet[2727]: E0711 00:37:05.688883 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.689253 kubelet[2727]: W0711 00:37:05.688892 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.689253 kubelet[2727]: E0711 00:37:05.688900 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:05.689253 kubelet[2727]: E0711 00:37:05.689061 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.689253 kubelet[2727]: W0711 00:37:05.689068 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.689253 kubelet[2727]: E0711 00:37:05.689075 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.689374 kubelet[2727]: E0711 00:37:05.689357 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.689374 kubelet[2727]: W0711 00:37:05.689369 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.689420 kubelet[2727]: E0711 00:37:05.689379 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.689569 kubelet[2727]: E0711 00:37:05.689557 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.689569 kubelet[2727]: W0711 00:37:05.689566 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.689618 kubelet[2727]: E0711 00:37:05.689584 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.689783 kubelet[2727]: E0711 00:37:05.689768 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.689783 kubelet[2727]: W0711 00:37:05.689779 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.689886 kubelet[2727]: E0711 00:37:05.689790 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.690128 kubelet[2727]: E0711 00:37:05.690114 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.690258 kubelet[2727]: W0711 00:37:05.690214 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.690258 kubelet[2727]: E0711 00:37:05.690230 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:05.690458 kubelet[2727]: E0711 00:37:05.690441 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.690458 kubelet[2727]: W0711 00:37:05.690452 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.690535 kubelet[2727]: E0711 00:37:05.690470 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.690674 kubelet[2727]: E0711 00:37:05.690654 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.690674 kubelet[2727]: W0711 00:37:05.690672 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.690740 kubelet[2727]: E0711 00:37:05.690683 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.692992 containerd[1593]: time="2025-07-11T00:37:05.692935623Z" level=info msg="connecting to shim abba2b1a63646bca328516e9f5761cfadb4a27bf9d46d1239220627420ae1e4f" address="unix:///run/containerd/s/6687f65a7fb36bd0cfc26611dbe74f8b20bf1c2759d2663d128c8ee0959b6e77" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:37:05.694134 kubelet[2727]: E0711 00:37:05.694110 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.694134 kubelet[2727]: W0711 00:37:05.694128 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.694224 kubelet[2727]: E0711 00:37:05.694148 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.694224 kubelet[2727]: I0711 00:37:05.694182 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhttj\" (UniqueName: \"kubernetes.io/projected/3ff74661-174d-4a70-b02a-c30fd2606ef6-kube-api-access-zhttj\") pod \"csi-node-driver-lgrvh\" (UID: \"3ff74661-174d-4a70-b02a-c30fd2606ef6\") " pod="calico-system/csi-node-driver-lgrvh" Jul 11 00:37:05.694415 kubelet[2727]: E0711 00:37:05.694390 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.694415 kubelet[2727]: W0711 00:37:05.694408 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.694510 kubelet[2727]: E0711 00:37:05.694426 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:05.694510 kubelet[2727]: I0711 00:37:05.694453 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3ff74661-174d-4a70-b02a-c30fd2606ef6-registration-dir\") pod \"csi-node-driver-lgrvh\" (UID: \"3ff74661-174d-4a70-b02a-c30fd2606ef6\") " pod="calico-system/csi-node-driver-lgrvh" Jul 11 00:37:05.694669 kubelet[2727]: E0711 00:37:05.694651 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.694669 kubelet[2727]: W0711 00:37:05.694664 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.694729 kubelet[2727]: E0711 00:37:05.694677 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.694729 kubelet[2727]: I0711 00:37:05.694694 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3ff74661-174d-4a70-b02a-c30fd2606ef6-socket-dir\") pod \"csi-node-driver-lgrvh\" (UID: \"3ff74661-174d-4a70-b02a-c30fd2606ef6\") " pod="calico-system/csi-node-driver-lgrvh" Jul 11 00:37:05.695397 kubelet[2727]: E0711 00:37:05.695334 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.695397 kubelet[2727]: W0711 00:37:05.695358 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.695397 kubelet[2727]: E0711 00:37:05.695375 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.696797 kubelet[2727]: E0711 00:37:05.696297 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.696797 kubelet[2727]: W0711 00:37:05.696314 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.696797 kubelet[2727]: E0711 00:37:05.696359 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.696797 kubelet[2727]: E0711 00:37:05.696626 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.696797 kubelet[2727]: W0711 00:37:05.696635 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.696797 kubelet[2727]: E0711 00:37:05.696684 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:05.696960 kubelet[2727]: E0711 00:37:05.696913 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.696960 kubelet[2727]: W0711 00:37:05.696923 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.697008 kubelet[2727]: E0711 00:37:05.696984 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.697731 kubelet[2727]: E0711 00:37:05.697283 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.697731 kubelet[2727]: W0711 00:37:05.697544 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.697731 kubelet[2727]: E0711 00:37:05.697623 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.697731 kubelet[2727]: I0711 00:37:05.697652 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3ff74661-174d-4a70-b02a-c30fd2606ef6-varrun\") pod \"csi-node-driver-lgrvh\" (UID: \"3ff74661-174d-4a70-b02a-c30fd2606ef6\") " pod="calico-system/csi-node-driver-lgrvh" Jul 11 00:37:05.697904 kubelet[2727]: E0711 00:37:05.697891 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.697964 kubelet[2727]: W0711 00:37:05.697952 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.698034 kubelet[2727]: E0711 00:37:05.698023 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.698480 kubelet[2727]: E0711 00:37:05.698412 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.698480 kubelet[2727]: W0711 00:37:05.698425 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.698721 kubelet[2727]: E0711 00:37:05.698447 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:05.698913 kubelet[2727]: I0711 00:37:05.698889 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ff74661-174d-4a70-b02a-c30fd2606ef6-kubelet-dir\") pod \"csi-node-driver-lgrvh\" (UID: \"3ff74661-174d-4a70-b02a-c30fd2606ef6\") " pod="calico-system/csi-node-driver-lgrvh" Jul 11 00:37:05.699284 kubelet[2727]: E0711 00:37:05.698986 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.699284 kubelet[2727]: W0711 00:37:05.699168 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.699284 kubelet[2727]: E0711 00:37:05.699184 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.699428 kubelet[2727]: E0711 00:37:05.699410 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.699428 kubelet[2727]: W0711 00:37:05.699421 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.699428 kubelet[2727]: E0711 00:37:05.699430 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.699803 kubelet[2727]: E0711 00:37:05.699788 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.699803 kubelet[2727]: W0711 00:37:05.699799 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.699865 kubelet[2727]: E0711 00:37:05.699815 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.699978 kubelet[2727]: E0711 00:37:05.699963 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.699978 kubelet[2727]: W0711 00:37:05.699973 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.700036 kubelet[2727]: E0711 00:37:05.699980 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:05.700201 kubelet[2727]: E0711 00:37:05.700180 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.700201 kubelet[2727]: W0711 00:37:05.700194 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.700311 kubelet[2727]: E0711 00:37:05.700208 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.729364 systemd[1]: Started cri-containerd-abba2b1a63646bca328516e9f5761cfadb4a27bf9d46d1239220627420ae1e4f.scope - libcontainer container abba2b1a63646bca328516e9f5761cfadb4a27bf9d46d1239220627420ae1e4f. Jul 11 00:37:05.756443 containerd[1593]: time="2025-07-11T00:37:05.756314752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j2gg7,Uid:fc62d4c5-f2bb-4a40-967b-3246ac297843,Namespace:calico-system,Attempt:0,} returns sandbox id \"abba2b1a63646bca328516e9f5761cfadb4a27bf9d46d1239220627420ae1e4f\"" Jul 11 00:37:05.800941 kubelet[2727]: E0711 00:37:05.800910 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.800941 kubelet[2727]: W0711 00:37:05.800931 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.800941 kubelet[2727]: E0711 00:37:05.800950 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.801218 kubelet[2727]: E0711 00:37:05.801200 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.801218 kubelet[2727]: W0711 00:37:05.801212 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.801353 kubelet[2727]: E0711 00:37:05.801229 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.801453 kubelet[2727]: E0711 00:37:05.801438 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.801453 kubelet[2727]: W0711 00:37:05.801450 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.801516 kubelet[2727]: E0711 00:37:05.801474 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:05.801655 kubelet[2727]: E0711 00:37:05.801639 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.801655 kubelet[2727]: W0711 00:37:05.801649 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.801714 kubelet[2727]: E0711 00:37:05.801664 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.801841 kubelet[2727]: E0711 00:37:05.801826 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.801841 kubelet[2727]: W0711 00:37:05.801836 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.801891 kubelet[2727]: E0711 00:37:05.801849 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.802164 kubelet[2727]: E0711 00:37:05.802134 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.802164 kubelet[2727]: W0711 00:37:05.802156 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.802215 kubelet[2727]: E0711 00:37:05.802182 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.802419 kubelet[2727]: E0711 00:37:05.802395 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.802419 kubelet[2727]: W0711 00:37:05.802408 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.802481 kubelet[2727]: E0711 00:37:05.802423 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.802605 kubelet[2727]: E0711 00:37:05.802593 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.802605 kubelet[2727]: W0711 00:37:05.802602 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.802652 kubelet[2727]: E0711 00:37:05.802614 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:05.802814 kubelet[2727]: E0711 00:37:05.802800 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.802814 kubelet[2727]: W0711 00:37:05.802811 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.802868 kubelet[2727]: E0711 00:37:05.802827 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.803015 kubelet[2727]: E0711 00:37:05.802990 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.803015 kubelet[2727]: W0711 00:37:05.803002 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.803015 kubelet[2727]: E0711 00:37:05.803020 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.803269 kubelet[2727]: E0711 00:37:05.803162 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.803269 kubelet[2727]: W0711 00:37:05.803169 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.803269 kubelet[2727]: E0711 00:37:05.803180 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.803371 kubelet[2727]: E0711 00:37:05.803355 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.803371 kubelet[2727]: W0711 00:37:05.803367 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.803418 kubelet[2727]: E0711 00:37:05.803379 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.803535 kubelet[2727]: E0711 00:37:05.803521 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.803535 kubelet[2727]: W0711 00:37:05.803530 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.803582 kubelet[2727]: E0711 00:37:05.803541 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:05.803688 kubelet[2727]: E0711 00:37:05.803673 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.803688 kubelet[2727]: W0711 00:37:05.803684 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.803734 kubelet[2727]: E0711 00:37:05.803695 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.803851 kubelet[2727]: E0711 00:37:05.803837 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.803851 kubelet[2727]: W0711 00:37:05.803847 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.803897 kubelet[2727]: E0711 00:37:05.803860 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.804019 kubelet[2727]: E0711 00:37:05.804004 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.804019 kubelet[2727]: W0711 00:37:05.804018 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.804065 kubelet[2727]: E0711 00:37:05.804029 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.804171 kubelet[2727]: E0711 00:37:05.804157 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.804171 kubelet[2727]: W0711 00:37:05.804166 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.804213 kubelet[2727]: E0711 00:37:05.804176 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.804396 kubelet[2727]: E0711 00:37:05.804380 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.804396 kubelet[2727]: W0711 00:37:05.804392 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.804468 kubelet[2727]: E0711 00:37:05.804404 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:05.804575 kubelet[2727]: E0711 00:37:05.804560 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.804575 kubelet[2727]: W0711 00:37:05.804570 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.804616 kubelet[2727]: E0711 00:37:05.804582 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.804751 kubelet[2727]: E0711 00:37:05.804736 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.804751 kubelet[2727]: W0711 00:37:05.804745 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.804800 kubelet[2727]: E0711 00:37:05.804757 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.804909 kubelet[2727]: E0711 00:37:05.804895 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.804909 kubelet[2727]: W0711 00:37:05.804905 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.804951 kubelet[2727]: E0711 00:37:05.804915 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.805070 kubelet[2727]: E0711 00:37:05.805055 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.805070 kubelet[2727]: W0711 00:37:05.805064 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.805119 kubelet[2727]: E0711 00:37:05.805075 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.805270 kubelet[2727]: E0711 00:37:05.805224 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.805270 kubelet[2727]: W0711 00:37:05.805256 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.805270 kubelet[2727]: E0711 00:37:05.805268 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:05.805408 kubelet[2727]: E0711 00:37:05.805393 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.805408 kubelet[2727]: W0711 00:37:05.805403 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.805455 kubelet[2727]: E0711 00:37:05.805411 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.805604 kubelet[2727]: E0711 00:37:05.805589 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.805604 kubelet[2727]: W0711 00:37:05.805599 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.805655 kubelet[2727]: E0711 00:37:05.805607 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:05.812263 kubelet[2727]: E0711 00:37:05.812209 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:05.812263 kubelet[2727]: W0711 00:37:05.812222 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:05.812263 kubelet[2727]: E0711 00:37:05.812231 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:06.698580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4141103491.mount: Deactivated successfully. 
Jul 11 00:37:07.369575 containerd[1593]: time="2025-07-11T00:37:07.369517720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:07.370454 containerd[1593]: time="2025-07-11T00:37:07.370415147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 11 00:37:07.371697 containerd[1593]: time="2025-07-11T00:37:07.371667324Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:07.373731 containerd[1593]: time="2025-07-11T00:37:07.373693865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:07.374181 containerd[1593]: time="2025-07-11T00:37:07.374140880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.927951683s" Jul 11 00:37:07.374210 containerd[1593]: time="2025-07-11T00:37:07.374178150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 11 00:37:07.375026 containerd[1593]: time="2025-07-11T00:37:07.374921235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 11 00:37:07.381915 containerd[1593]: time="2025-07-11T00:37:07.381882755Z" level=info msg="CreateContainer within sandbox \"aac3b74f835540892a98dfc29e9dfd8b55d738c4a9a09aef621d5d97e676b628\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 11 00:37:07.392091 containerd[1593]: time="2025-07-11T00:37:07.391615765Z" level=info msg="Container a35192fd1f159f370f64df1146b90b6c2dea3d2fb8cd0cb4316f0137d458ce33: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:37:07.399946 containerd[1593]: time="2025-07-11T00:37:07.399901639Z" level=info msg="CreateContainer within sandbox \"aac3b74f835540892a98dfc29e9dfd8b55d738c4a9a09aef621d5d97e676b628\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a35192fd1f159f370f64df1146b90b6c2dea3d2fb8cd0cb4316f0137d458ce33\"" Jul 11 00:37:07.400306 containerd[1593]: time="2025-07-11T00:37:07.400281207Z" level=info msg="StartContainer for \"a35192fd1f159f370f64df1146b90b6c2dea3d2fb8cd0cb4316f0137d458ce33\"" Jul 11 00:37:07.401424 containerd[1593]: time="2025-07-11T00:37:07.401387077Z" level=info msg="connecting to shim a35192fd1f159f370f64df1146b90b6c2dea3d2fb8cd0cb4316f0137d458ce33" address="unix:///run/containerd/s/e90e1a56a14a7c41e8b3d5d052b9da355683fef4f55dac62aea02eafa39d820c" protocol=ttrpc version=3 Jul 11 00:37:07.423380 systemd[1]: Started cri-containerd-a35192fd1f159f370f64df1146b90b6c2dea3d2fb8cd0cb4316f0137d458ce33.scope - libcontainer container a35192fd1f159f370f64df1146b90b6c2dea3d2fb8cd0cb4316f0137d458ce33. 
Jul 11 00:37:07.474167 containerd[1593]: time="2025-07-11T00:37:07.474131736Z" level=info msg="StartContainer for \"a35192fd1f159f370f64df1146b90b6c2dea3d2fb8cd0cb4316f0137d458ce33\" returns successfully" Jul 11 00:37:07.495544 kubelet[2727]: E0711 00:37:07.495498 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgrvh" podUID="3ff74661-174d-4a70-b02a-c30fd2606ef6" Jul 11 00:37:07.560557 kubelet[2727]: I0711 00:37:07.560478 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75df764c-cwwvm" podStartSLOduration=1.631553364 podStartE2EDuration="3.56046358s" podCreationTimestamp="2025-07-11 00:37:04 +0000 UTC" firstStartedPulling="2025-07-11 00:37:05.445876886 +0000 UTC m=+18.036411389" lastFinishedPulling="2025-07-11 00:37:07.374787102 +0000 UTC m=+19.965321605" observedRunningTime="2025-07-11 00:37:07.559386554 +0000 UTC m=+20.149921057" watchObservedRunningTime="2025-07-11 00:37:07.56046358 +0000 UTC m=+20.150998083" Jul 11 00:37:07.603887 kubelet[2727]: E0711 00:37:07.603415 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.603887 kubelet[2727]: W0711 00:37:07.603445 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.603887 kubelet[2727]: E0711 00:37:07.603462 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.603887 kubelet[2727]: E0711 00:37:07.603687 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.603887 kubelet[2727]: W0711 00:37:07.603696 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.603887 kubelet[2727]: E0711 00:37:07.603707 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.604283 kubelet[2727]: E0711 00:37:07.603955 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.604283 kubelet[2727]: W0711 00:37:07.603963 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.604283 kubelet[2727]: E0711 00:37:07.603972 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:07.604283 kubelet[2727]: E0711 00:37:07.604214 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.604283 kubelet[2727]: W0711 00:37:07.604222 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.604283 kubelet[2727]: E0711 00:37:07.604258 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.604502 kubelet[2727]: E0711 00:37:07.604451 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.604502 kubelet[2727]: W0711 00:37:07.604459 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.604502 kubelet[2727]: E0711 00:37:07.604466 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.604656 kubelet[2727]: E0711 00:37:07.604639 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.604656 kubelet[2727]: W0711 00:37:07.604650 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.604656 kubelet[2727]: E0711 00:37:07.604657 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.605418 kubelet[2727]: E0711 00:37:07.605256 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.605418 kubelet[2727]: W0711 00:37:07.605270 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.605418 kubelet[2727]: E0711 00:37:07.605279 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.605694 kubelet[2727]: E0711 00:37:07.605654 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.605694 kubelet[2727]: W0711 00:37:07.605669 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.605694 kubelet[2727]: E0711 00:37:07.605678 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:07.606375 kubelet[2727]: E0711 00:37:07.606166 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.606375 kubelet[2727]: W0711 00:37:07.606187 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.606375 kubelet[2727]: E0711 00:37:07.606196 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.607025 kubelet[2727]: E0711 00:37:07.606989 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.607025 kubelet[2727]: W0711 00:37:07.607002 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.607025 kubelet[2727]: E0711 00:37:07.607012 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.607353 kubelet[2727]: E0711 00:37:07.607333 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.607353 kubelet[2727]: W0711 00:37:07.607347 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.608252 kubelet[2727]: E0711 00:37:07.608019 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.608358 kubelet[2727]: E0711 00:37:07.608327 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.608358 kubelet[2727]: W0711 00:37:07.608347 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.608358 kubelet[2727]: E0711 00:37:07.608356 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.608766 kubelet[2727]: E0711 00:37:07.608739 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.608766 kubelet[2727]: W0711 00:37:07.608752 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.608766 kubelet[2727]: E0711 00:37:07.608761 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:07.608956 kubelet[2727]: E0711 00:37:07.608939 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.608956 kubelet[2727]: W0711 00:37:07.608951 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.609005 kubelet[2727]: E0711 00:37:07.608959 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.609904 kubelet[2727]: E0711 00:37:07.609554 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.609904 kubelet[2727]: W0711 00:37:07.609572 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.609904 kubelet[2727]: E0711 00:37:07.609581 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.619289 kubelet[2727]: E0711 00:37:07.619261 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.619544 kubelet[2727]: W0711 00:37:07.619409 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.619544 kubelet[2727]: E0711 00:37:07.619432 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.620789 kubelet[2727]: E0711 00:37:07.619815 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.620789 kubelet[2727]: W0711 00:37:07.619882 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.620789 kubelet[2727]: E0711 00:37:07.620720 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.621284 kubelet[2727]: E0711 00:37:07.620978 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.621284 kubelet[2727]: W0711 00:37:07.620987 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.621284 kubelet[2727]: E0711 00:37:07.621059 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:07.621729 kubelet[2727]: E0711 00:37:07.621700 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.621729 kubelet[2727]: W0711 00:37:07.621721 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.622106 kubelet[2727]: E0711 00:37:07.622072 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.622576 kubelet[2727]: E0711 00:37:07.622550 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.622576 kubelet[2727]: W0711 00:37:07.622566 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.622736 kubelet[2727]: E0711 00:37:07.622662 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.622868 kubelet[2727]: E0711 00:37:07.622844 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.622868 kubelet[2727]: W0711 00:37:07.622868 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.622960 kubelet[2727]: E0711 00:37:07.622915 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.623120 kubelet[2727]: E0711 00:37:07.623096 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.623120 kubelet[2727]: W0711 00:37:07.623110 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.623120 kubelet[2727]: E0711 00:37:07.623122 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.623390 kubelet[2727]: E0711 00:37:07.623343 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.623390 kubelet[2727]: W0711 00:37:07.623356 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.623390 kubelet[2727]: E0711 00:37:07.623375 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:07.624159 kubelet[2727]: E0711 00:37:07.623858 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.624159 kubelet[2727]: W0711 00:37:07.623871 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.624159 kubelet[2727]: E0711 00:37:07.623900 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.624159 kubelet[2727]: E0711 00:37:07.624146 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.624159 kubelet[2727]: W0711 00:37:07.624159 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.624318 kubelet[2727]: E0711 00:37:07.624175 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.624346 kubelet[2727]: E0711 00:37:07.624337 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.624370 kubelet[2727]: W0711 00:37:07.624345 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.624370 kubelet[2727]: E0711 00:37:07.624359 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.624839 kubelet[2727]: E0711 00:37:07.624810 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.624839 kubelet[2727]: W0711 00:37:07.624824 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.624839 kubelet[2727]: E0711 00:37:07.624838 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.625675 kubelet[2727]: E0711 00:37:07.625277 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.625675 kubelet[2727]: W0711 00:37:07.625296 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.625675 kubelet[2727]: E0711 00:37:07.625380 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:07.625675 kubelet[2727]: E0711 00:37:07.625548 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.625675 kubelet[2727]: W0711 00:37:07.625555 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.625675 kubelet[2727]: E0711 00:37:07.625606 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.625850 kubelet[2727]: E0711 00:37:07.625821 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.625850 kubelet[2727]: W0711 00:37:07.625829 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.625850 kubelet[2727]: E0711 00:37:07.625837 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.626028 kubelet[2727]: E0711 00:37:07.625999 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.626813 kubelet[2727]: W0711 00:37:07.626764 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.626865 kubelet[2727]: E0711 00:37:07.626814 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.627154 kubelet[2727]: E0711 00:37:07.627108 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.627154 kubelet[2727]: W0711 00:37:07.627146 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.627154 kubelet[2727]: E0711 00:37:07.627155 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:07.627817 kubelet[2727]: E0711 00:37:07.627789 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:07.627871 kubelet[2727]: W0711 00:37:07.627832 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:07.627871 kubelet[2727]: E0711 00:37:07.627842 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:08.548435 kubelet[2727]: I0711 00:37:08.548395 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:37:08.614854 kubelet[2727]: E0711 00:37:08.614822 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.614854 kubelet[2727]: W0711 00:37:08.614853 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.615007 kubelet[2727]: E0711 00:37:08.614875 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.615254 kubelet[2727]: E0711 00:37:08.615199 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.615254 kubelet[2727]: W0711 00:37:08.615227 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.615411 kubelet[2727]: E0711 00:37:08.615278 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.615562 kubelet[2727]: E0711 00:37:08.615547 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.615671 kubelet[2727]: W0711 00:37:08.615608 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.615671 kubelet[2727]: E0711 00:37:08.615625 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.616017 kubelet[2727]: E0711 00:37:08.615954 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.616017 kubelet[2727]: W0711 00:37:08.615965 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.616017 kubelet[2727]: E0711 00:37:08.615974 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.616277 kubelet[2727]: E0711 00:37:08.616266 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.616354 kubelet[2727]: W0711 00:37:08.616343 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.616469 kubelet[2727]: E0711 00:37:08.616429 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:08.616661 kubelet[2727]: E0711 00:37:08.616651 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.616803 kubelet[2727]: W0711 00:37:08.616713 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.616803 kubelet[2727]: E0711 00:37:08.616725 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.616970 kubelet[2727]: E0711 00:37:08.616960 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.617101 kubelet[2727]: W0711 00:37:08.617020 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.617101 kubelet[2727]: E0711 00:37:08.617064 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.617347 kubelet[2727]: E0711 00:37:08.617324 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.617347 kubelet[2727]: W0711 00:37:08.617333 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.617528 kubelet[2727]: E0711 00:37:08.617434 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.617646 kubelet[2727]: E0711 00:37:08.617635 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.617764 kubelet[2727]: W0711 00:37:08.617692 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.617764 kubelet[2727]: E0711 00:37:08.617728 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.618045 kubelet[2727]: E0711 00:37:08.617995 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.618045 kubelet[2727]: W0711 00:37:08.618005 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.618045 kubelet[2727]: E0711 00:37:08.618013 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:08.618323 kubelet[2727]: E0711 00:37:08.618312 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.618439 kubelet[2727]: W0711 00:37:08.618381 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.618439 kubelet[2727]: E0711 00:37:08.618393 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.618733 kubelet[2727]: E0711 00:37:08.618722 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.618834 kubelet[2727]: W0711 00:37:08.618781 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.618834 kubelet[2727]: E0711 00:37:08.618792 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.619108 kubelet[2727]: E0711 00:37:08.619064 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.619108 kubelet[2727]: W0711 00:37:08.619074 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.619108 kubelet[2727]: E0711 00:37:08.619082 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.619455 kubelet[2727]: E0711 00:37:08.619431 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.619455 kubelet[2727]: W0711 00:37:08.619442 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.619455 kubelet[2727]: E0711 00:37:08.619450 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.619657 kubelet[2727]: E0711 00:37:08.619641 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.619657 kubelet[2727]: W0711 00:37:08.619652 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.619700 kubelet[2727]: E0711 00:37:08.619660 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:08.627749 kubelet[2727]: E0711 00:37:08.627732 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.627749 kubelet[2727]: W0711 00:37:08.627746 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.627839 kubelet[2727]: E0711 00:37:08.627755 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.628161 kubelet[2727]: E0711 00:37:08.628145 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.628161 kubelet[2727]: W0711 00:37:08.628157 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.628247 kubelet[2727]: E0711 00:37:08.628180 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.628451 kubelet[2727]: E0711 00:37:08.628414 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.628451 kubelet[2727]: W0711 00:37:08.628434 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.628610 kubelet[2727]: E0711 00:37:08.628507 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.628985 kubelet[2727]: E0711 00:37:08.628968 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.628985 kubelet[2727]: W0711 00:37:08.628980 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.629058 kubelet[2727]: E0711 00:37:08.628994 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.629168 kubelet[2727]: E0711 00:37:08.629152 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.629168 kubelet[2727]: W0711 00:37:08.629162 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.629276 kubelet[2727]: E0711 00:37:08.629213 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:08.629367 kubelet[2727]: E0711 00:37:08.629351 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.629367 kubelet[2727]: W0711 00:37:08.629363 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.629442 kubelet[2727]: E0711 00:37:08.629411 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.629739 kubelet[2727]: E0711 00:37:08.629603 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.629739 kubelet[2727]: W0711 00:37:08.629618 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.629807 kubelet[2727]: E0711 00:37:08.629745 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.629830 kubelet[2727]: E0711 00:37:08.629811 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.629830 kubelet[2727]: W0711 00:37:08.629818 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.629830 kubelet[2727]: E0711 00:37:08.629829 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.630340 kubelet[2727]: E0711 00:37:08.630323 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.630340 kubelet[2727]: W0711 00:37:08.630334 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.630340 kubelet[2727]: E0711 00:37:08.630348 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.630651 kubelet[2727]: E0711 00:37:08.630631 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.630651 kubelet[2727]: W0711 00:37:08.630646 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.630736 kubelet[2727]: E0711 00:37:08.630664 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:08.630875 kubelet[2727]: E0711 00:37:08.630860 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.630875 kubelet[2727]: W0711 00:37:08.630870 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.630936 kubelet[2727]: E0711 00:37:08.630884 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.631146 kubelet[2727]: E0711 00:37:08.631127 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.631186 kubelet[2727]: W0711 00:37:08.631144 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.631280 kubelet[2727]: E0711 00:37:08.631225 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.631379 kubelet[2727]: E0711 00:37:08.631363 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.631379 kubelet[2727]: W0711 00:37:08.631374 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.631479 kubelet[2727]: E0711 00:37:08.631462 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.631701 kubelet[2727]: E0711 00:37:08.631676 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.631701 kubelet[2727]: W0711 00:37:08.631693 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.631791 kubelet[2727]: E0711 00:37:08.631708 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.631940 kubelet[2727]: E0711 00:37:08.631913 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.631940 kubelet[2727]: W0711 00:37:08.631924 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.632003 kubelet[2727]: E0711 00:37:08.631942 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:08.632279 kubelet[2727]: E0711 00:37:08.632256 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.632279 kubelet[2727]: W0711 00:37:08.632267 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.632365 kubelet[2727]: E0711 00:37:08.632294 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.632921 kubelet[2727]: E0711 00:37:08.632895 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.632921 kubelet[2727]: W0711 00:37:08.632908 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.632921 kubelet[2727]: E0711 00:37:08.632918 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:37:08.633145 kubelet[2727]: E0711 00:37:08.633129 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:37:08.633145 kubelet[2727]: W0711 00:37:08.633140 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:37:08.633209 kubelet[2727]: E0711 00:37:08.633149 2727 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:37:08.759744 containerd[1593]: time="2025-07-11T00:37:08.759689912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:08.760512 containerd[1593]: time="2025-07-11T00:37:08.760477150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 11 00:37:08.761663 containerd[1593]: time="2025-07-11T00:37:08.761603418Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:08.763403 containerd[1593]: time="2025-07-11T00:37:08.763369756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:08.763911 containerd[1593]: time="2025-07-11T00:37:08.763870532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.388925983s" Jul 11 00:37:08.763911 containerd[1593]: time="2025-07-11T00:37:08.763906470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 11 00:37:08.765786 containerd[1593]: time="2025-07-11T00:37:08.765743721Z" level=info msg="CreateContainer within sandbox \"abba2b1a63646bca328516e9f5761cfadb4a27bf9d46d1239220627420ae1e4f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 00:37:08.775116 containerd[1593]: time="2025-07-11T00:37:08.775052210Z" level=info msg="Container 5653fccfa4fa1851310abba474b9cd43cd47d5c8652803f03b5fa2be60caf592: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:37:08.783729 containerd[1593]: time="2025-07-11T00:37:08.783702956Z" level=info msg="CreateContainer within sandbox \"abba2b1a63646bca328516e9f5761cfadb4a27bf9d46d1239220627420ae1e4f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5653fccfa4fa1851310abba474b9cd43cd47d5c8652803f03b5fa2be60caf592\"" Jul 11 00:37:08.784249 containerd[1593]: time="2025-07-11T00:37:08.784181791Z" level=info msg="StartContainer for \"5653fccfa4fa1851310abba474b9cd43cd47d5c8652803f03b5fa2be60caf592\"" Jul 11 00:37:08.785608 containerd[1593]: time="2025-07-11T00:37:08.785531551Z" level=info msg="connecting to shim 5653fccfa4fa1851310abba474b9cd43cd47d5c8652803f03b5fa2be60caf592" address="unix:///run/containerd/s/6687f65a7fb36bd0cfc26611dbe74f8b20bf1c2759d2663d128c8ee0959b6e77" protocol=ttrpc version=3 Jul 11 00:37:08.815361 systemd[1]: Started cri-containerd-5653fccfa4fa1851310abba474b9cd43cd47d5c8652803f03b5fa2be60caf592.scope - libcontainer container 5653fccfa4fa1851310abba474b9cd43cd47d5c8652803f03b5fa2be60caf592. 
Jul 11 00:37:08.857183 containerd[1593]: time="2025-07-11T00:37:08.857143885Z" level=info msg="StartContainer for \"5653fccfa4fa1851310abba474b9cd43cd47d5c8652803f03b5fa2be60caf592\" returns successfully" Jul 11 00:37:08.864677 systemd[1]: cri-containerd-5653fccfa4fa1851310abba474b9cd43cd47d5c8652803f03b5fa2be60caf592.scope: Deactivated successfully. Jul 11 00:37:08.866308 containerd[1593]: time="2025-07-11T00:37:08.866257897Z" level=info msg="received exit event container_id:\"5653fccfa4fa1851310abba474b9cd43cd47d5c8652803f03b5fa2be60caf592\" id:\"5653fccfa4fa1851310abba474b9cd43cd47d5c8652803f03b5fa2be60caf592\" pid:3462 exited_at:{seconds:1752194228 nanos:865934906}" Jul 11 00:37:08.866443 containerd[1593]: time="2025-07-11T00:37:08.866345141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5653fccfa4fa1851310abba474b9cd43cd47d5c8652803f03b5fa2be60caf592\" id:\"5653fccfa4fa1851310abba474b9cd43cd47d5c8652803f03b5fa2be60caf592\" pid:3462 exited_at:{seconds:1752194228 nanos:865934906}" Jul 11 00:37:08.887971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5653fccfa4fa1851310abba474b9cd43cd47d5c8652803f03b5fa2be60caf592-rootfs.mount: Deactivated successfully. Jul 11 00:37:09.494249 kubelet[2727]: E0711 00:37:09.494181 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgrvh" podUID="3ff74661-174d-4a70-b02a-c30fd2606ef6" Jul 11 00:37:09.551895 containerd[1593]: time="2025-07-11T00:37:09.551858617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 00:37:11.493920 kubelet[2727]: E0711 00:37:11.493877 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgrvh" podUID="3ff74661-174d-4a70-b02a-c30fd2606ef6" Jul 11 00:37:13.257469 containerd[1593]: time="2025-07-11T00:37:13.257418476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:13.258175 containerd[1593]: time="2025-07-11T00:37:13.258144715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 11 00:37:13.259230 containerd[1593]: time="2025-07-11T00:37:13.259199203Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:13.261137 containerd[1593]: time="2025-07-11T00:37:13.261095771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:13.261629 containerd[1593]: time="2025-07-11T00:37:13.261588589Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.709695958s" Jul 11 00:37:13.261629 containerd[1593]: 
time="2025-07-11T00:37:13.261622644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 11 00:37:13.263201 containerd[1593]: time="2025-07-11T00:37:13.263162638Z" level=info msg="CreateContainer within sandbox \"abba2b1a63646bca328516e9f5761cfadb4a27bf9d46d1239220627420ae1e4f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 00:37:13.272301 containerd[1593]: time="2025-07-11T00:37:13.272269783Z" level=info msg="Container 4650bd238bd1cadbdae0b5a2ee7aab79fc95c88ae8a9e0d11dc5adeeb6becf9f: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:37:13.281333 containerd[1593]: time="2025-07-11T00:37:13.281303599Z" level=info msg="CreateContainer within sandbox \"abba2b1a63646bca328516e9f5761cfadb4a27bf9d46d1239220627420ae1e4f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4650bd238bd1cadbdae0b5a2ee7aab79fc95c88ae8a9e0d11dc5adeeb6becf9f\"" Jul 11 00:37:13.281947 containerd[1593]: time="2025-07-11T00:37:13.281686761Z" level=info msg="StartContainer for \"4650bd238bd1cadbdae0b5a2ee7aab79fc95c88ae8a9e0d11dc5adeeb6becf9f\"" Jul 11 00:37:13.283048 containerd[1593]: time="2025-07-11T00:37:13.283009255Z" level=info msg="connecting to shim 4650bd238bd1cadbdae0b5a2ee7aab79fc95c88ae8a9e0d11dc5adeeb6becf9f" address="unix:///run/containerd/s/6687f65a7fb36bd0cfc26611dbe74f8b20bf1c2759d2663d128c8ee0959b6e77" protocol=ttrpc version=3 Jul 11 00:37:13.308407 systemd[1]: Started cri-containerd-4650bd238bd1cadbdae0b5a2ee7aab79fc95c88ae8a9e0d11dc5adeeb6becf9f.scope - libcontainer container 4650bd238bd1cadbdae0b5a2ee7aab79fc95c88ae8a9e0d11dc5adeeb6becf9f. Jul 11 00:37:13.352788 containerd[1593]: time="2025-07-11T00:37:13.352744293Z" level=info msg="StartContainer for \"4650bd238bd1cadbdae0b5a2ee7aab79fc95c88ae8a9e0d11dc5adeeb6becf9f\" returns successfully" Jul 11 00:37:13.494404 kubelet[2727]: E0711 00:37:13.494355 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgrvh" podUID="3ff74661-174d-4a70-b02a-c30fd2606ef6" Jul 11 00:37:14.931360 containerd[1593]: time="2025-07-11T00:37:14.931302313Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:37:14.933800 systemd[1]: cri-containerd-4650bd238bd1cadbdae0b5a2ee7aab79fc95c88ae8a9e0d11dc5adeeb6becf9f.scope: Deactivated successfully. Jul 11 00:37:14.934256 systemd[1]: cri-containerd-4650bd238bd1cadbdae0b5a2ee7aab79fc95c88ae8a9e0d11dc5adeeb6becf9f.scope: Consumed 590ms CPU time, 176.8M memory peak, 3.2M read from disk, 171.2M written to disk. 
Jul 11 00:37:14.934660 containerd[1593]: time="2025-07-11T00:37:14.934635527Z" level=info msg="received exit event container_id:\"4650bd238bd1cadbdae0b5a2ee7aab79fc95c88ae8a9e0d11dc5adeeb6becf9f\" id:\"4650bd238bd1cadbdae0b5a2ee7aab79fc95c88ae8a9e0d11dc5adeeb6becf9f\" pid:3522 exited_at:{seconds:1752194234 nanos:934436902}" Jul 11 00:37:14.934815 containerd[1593]: time="2025-07-11T00:37:14.934742278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4650bd238bd1cadbdae0b5a2ee7aab79fc95c88ae8a9e0d11dc5adeeb6becf9f\" id:\"4650bd238bd1cadbdae0b5a2ee7aab79fc95c88ae8a9e0d11dc5adeeb6becf9f\" pid:3522 exited_at:{seconds:1752194234 nanos:934436902}" Jul 11 00:37:14.955866 kubelet[2727]: I0711 00:37:14.953883 2727 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 11 00:37:14.955806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4650bd238bd1cadbdae0b5a2ee7aab79fc95c88ae8a9e0d11dc5adeeb6becf9f-rootfs.mount: Deactivated successfully. Jul 11 00:37:14.994221 systemd[1]: Created slice kubepods-besteffort-pod79fe3b9c_5a0f_474d_b227_f6f66be0d745.slice - libcontainer container kubepods-besteffort-pod79fe3b9c_5a0f_474d_b227_f6f66be0d745.slice. Jul 11 00:37:15.003059 systemd[1]: Created slice kubepods-besteffort-pod736398d4_2e16_40ab_bc23_72a7bfe75126.slice - libcontainer container kubepods-besteffort-pod736398d4_2e16_40ab_bc23_72a7bfe75126.slice. Jul 11 00:37:15.011037 systemd[1]: Created slice kubepods-burstable-pod941673f8_2f4b_49ba_ba5f_54183525dcb0.slice - libcontainer container kubepods-burstable-pod941673f8_2f4b_49ba_ba5f_54183525dcb0.slice. Jul 11 00:37:15.016582 systemd[1]: Created slice kubepods-burstable-pod7863450b_28e4_4507_a555_78c5a4d5ec7f.slice - libcontainer container kubepods-burstable-pod7863450b_28e4_4507_a555_78c5a4d5ec7f.slice. Jul 11 00:37:15.021180 systemd[1]: Created slice kubepods-besteffort-pod25f9108d_d5b2_470f_8782_564d4a6553a2.slice - libcontainer container kubepods-besteffort-pod25f9108d_d5b2_470f_8782_564d4a6553a2.slice. Jul 11 00:37:15.026904 systemd[1]: Created slice kubepods-besteffort-pod82c1d25a_cd9d_473e_b8bb_5e0d442fa97e.slice - libcontainer container kubepods-besteffort-pod82c1d25a_cd9d_473e_b8bb_5e0d442fa97e.slice. Jul 11 00:37:15.031711 systemd[1]: Created slice kubepods-besteffort-pod9560a52b_e06c_40e5_b0fb_f63c4d7c274e.slice - libcontainer container kubepods-besteffort-pod9560a52b_e06c_40e5_b0fb_f63c4d7c274e.slice. 
Jul 11 00:37:15.075551 kubelet[2727]: I0711 00:37:15.075518 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c1d25a-cd9d-473e-b8bb-5e0d442fa97e-config\") pod \"goldmane-768f4c5c69-tpjr4\" (UID: \"82c1d25a-cd9d-473e-b8bb-5e0d442fa97e\") " pod="calico-system/goldmane-768f4c5c69-tpjr4" Jul 11 00:37:15.075701 kubelet[2727]: I0711 00:37:15.075555 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jlgv\" (UniqueName: \"kubernetes.io/projected/82c1d25a-cd9d-473e-b8bb-5e0d442fa97e-kube-api-access-5jlgv\") pod \"goldmane-768f4c5c69-tpjr4\" (UID: \"82c1d25a-cd9d-473e-b8bb-5e0d442fa97e\") " pod="calico-system/goldmane-768f4c5c69-tpjr4" Jul 11 00:37:15.075701 kubelet[2727]: I0711 00:37:15.075576 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/25f9108d-d5b2-470f-8782-564d4a6553a2-calico-apiserver-certs\") pod \"calico-apiserver-5687cb8cd4-55jvn\" (UID: \"25f9108d-d5b2-470f-8782-564d4a6553a2\") " pod="calico-apiserver/calico-apiserver-5687cb8cd4-55jvn" Jul 11 00:37:15.075701 kubelet[2727]: I0711 00:37:15.075591 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82c1d25a-cd9d-473e-b8bb-5e0d442fa97e-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-tpjr4\" (UID: \"82c1d25a-cd9d-473e-b8bb-5e0d442fa97e\") " pod="calico-system/goldmane-768f4c5c69-tpjr4" Jul 11 00:37:15.075701 kubelet[2727]: I0711 00:37:15.075608 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/736398d4-2e16-40ab-bc23-72a7bfe75126-calico-apiserver-certs\") pod \"calico-apiserver-5687cb8cd4-lfqrj\" (UID: \"736398d4-2e16-40ab-bc23-72a7bfe75126\") " pod="calico-apiserver/calico-apiserver-5687cb8cd4-lfqrj" Jul 11 00:37:15.075701 kubelet[2727]: I0711 00:37:15.075653 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/82c1d25a-cd9d-473e-b8bb-5e0d442fa97e-goldmane-key-pair\") pod \"goldmane-768f4c5c69-tpjr4\" (UID: \"82c1d25a-cd9d-473e-b8bb-5e0d442fa97e\") " pod="calico-system/goldmane-768f4c5c69-tpjr4" Jul 11 00:37:15.075835 kubelet[2727]: I0711 00:37:15.075686 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbq4f\" (UniqueName: \"kubernetes.io/projected/941673f8-2f4b-49ba-ba5f-54183525dcb0-kube-api-access-sbq4f\") pod \"coredns-668d6bf9bc-zgbdl\" (UID: \"941673f8-2f4b-49ba-ba5f-54183525dcb0\") " pod="kube-system/coredns-668d6bf9bc-zgbdl" Jul 11 00:37:15.075835 kubelet[2727]: I0711 00:37:15.075711 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m46rq\" (UniqueName: \"kubernetes.io/projected/7863450b-28e4-4507-a555-78c5a4d5ec7f-kube-api-access-m46rq\") pod \"coredns-668d6bf9bc-vk287\" (UID: \"7863450b-28e4-4507-a555-78c5a4d5ec7f\") " pod="kube-system/coredns-668d6bf9bc-vk287" Jul 11 00:37:15.075835 kubelet[2727]: I0711 00:37:15.075755 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tswl\" (UniqueName: 
\"kubernetes.io/projected/9560a52b-e06c-40e5-b0fb-f63c4d7c274e-kube-api-access-8tswl\") pod \"whisker-5585ccdd99-djkf8\" (UID: \"9560a52b-e06c-40e5-b0fb-f63c4d7c274e\") " pod="calico-system/whisker-5585ccdd99-djkf8" Jul 11 00:37:15.075835 kubelet[2727]: I0711 00:37:15.075785 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9560a52b-e06c-40e5-b0fb-f63c4d7c274e-whisker-backend-key-pair\") pod \"whisker-5585ccdd99-djkf8\" (UID: \"9560a52b-e06c-40e5-b0fb-f63c4d7c274e\") " pod="calico-system/whisker-5585ccdd99-djkf8" Jul 11 00:37:15.075835 kubelet[2727]: I0711 00:37:15.075801 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9560a52b-e06c-40e5-b0fb-f63c4d7c274e-whisker-ca-bundle\") pod \"whisker-5585ccdd99-djkf8\" (UID: \"9560a52b-e06c-40e5-b0fb-f63c4d7c274e\") " pod="calico-system/whisker-5585ccdd99-djkf8" Jul 11 00:37:15.075953 kubelet[2727]: I0711 00:37:15.075819 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79fe3b9c-5a0f-474d-b227-f6f66be0d745-tigera-ca-bundle\") pod \"calico-kube-controllers-858bcfbfbd-kkd59\" (UID: \"79fe3b9c-5a0f-474d-b227-f6f66be0d745\") " pod="calico-system/calico-kube-controllers-858bcfbfbd-kkd59" Jul 11 00:37:15.075953 kubelet[2727]: I0711 00:37:15.075834 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7863450b-28e4-4507-a555-78c5a4d5ec7f-config-volume\") pod \"coredns-668d6bf9bc-vk287\" (UID: \"7863450b-28e4-4507-a555-78c5a4d5ec7f\") " pod="kube-system/coredns-668d6bf9bc-vk287" Jul 11 00:37:15.075953 kubelet[2727]: I0711 00:37:15.075852 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbdw\" (UniqueName: \"kubernetes.io/projected/25f9108d-d5b2-470f-8782-564d4a6553a2-kube-api-access-8pbdw\") pod \"calico-apiserver-5687cb8cd4-55jvn\" (UID: \"25f9108d-d5b2-470f-8782-564d4a6553a2\") " pod="calico-apiserver/calico-apiserver-5687cb8cd4-55jvn" Jul 11 00:37:15.075953 kubelet[2727]: I0711 00:37:15.075883 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5qds\" (UniqueName: \"kubernetes.io/projected/79fe3b9c-5a0f-474d-b227-f6f66be0d745-kube-api-access-s5qds\") pod \"calico-kube-controllers-858bcfbfbd-kkd59\" (UID: \"79fe3b9c-5a0f-474d-b227-f6f66be0d745\") " pod="calico-system/calico-kube-controllers-858bcfbfbd-kkd59" Jul 11 00:37:15.075953 kubelet[2727]: I0711 00:37:15.075913 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvjt\" (UniqueName: \"kubernetes.io/projected/736398d4-2e16-40ab-bc23-72a7bfe75126-kube-api-access-chvjt\") pod \"calico-apiserver-5687cb8cd4-lfqrj\" (UID: \"736398d4-2e16-40ab-bc23-72a7bfe75126\") " pod="calico-apiserver/calico-apiserver-5687cb8cd4-lfqrj" Jul 11 00:37:15.076067 kubelet[2727]: I0711 00:37:15.075927 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/941673f8-2f4b-49ba-ba5f-54183525dcb0-config-volume\") pod \"coredns-668d6bf9bc-zgbdl\" (UID: \"941673f8-2f4b-49ba-ba5f-54183525dcb0\") " 
pod="kube-system/coredns-668d6bf9bc-zgbdl" Jul 11 00:37:15.302358 containerd[1593]: time="2025-07-11T00:37:15.302319958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-858bcfbfbd-kkd59,Uid:79fe3b9c-5a0f-474d-b227-f6f66be0d745,Namespace:calico-system,Attempt:0,}" Jul 11 00:37:15.311399 containerd[1593]: time="2025-07-11T00:37:15.311333685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5687cb8cd4-lfqrj,Uid:736398d4-2e16-40ab-bc23-72a7bfe75126,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:37:15.314844 containerd[1593]: time="2025-07-11T00:37:15.314810689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zgbdl,Uid:941673f8-2f4b-49ba-ba5f-54183525dcb0,Namespace:kube-system,Attempt:0,}" Jul 11 00:37:15.320380 containerd[1593]: time="2025-07-11T00:37:15.320342804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vk287,Uid:7863450b-28e4-4507-a555-78c5a4d5ec7f,Namespace:kube-system,Attempt:0,}" Jul 11 00:37:15.324094 containerd[1593]: time="2025-07-11T00:37:15.324033460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5687cb8cd4-55jvn,Uid:25f9108d-d5b2-470f-8782-564d4a6553a2,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:37:15.329873 containerd[1593]: time="2025-07-11T00:37:15.329764270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tpjr4,Uid:82c1d25a-cd9d-473e-b8bb-5e0d442fa97e,Namespace:calico-system,Attempt:0,}" Jul 11 00:37:15.335186 containerd[1593]: time="2025-07-11T00:37:15.335143028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5585ccdd99-djkf8,Uid:9560a52b-e06c-40e5-b0fb-f63c4d7c274e,Namespace:calico-system,Attempt:0,}" Jul 11 00:37:15.445633 containerd[1593]: time="2025-07-11T00:37:15.445414570Z" level=error msg="Failed to destroy network for sandbox \"fe5bfa49edf104a0b4d6559627ea82ddf2fc51ec96c029cf4a310ace4d03142c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.445633 containerd[1593]: time="2025-07-11T00:37:15.445417054Z" level=error msg="Failed to destroy network for sandbox \"78dfbad8d9d60d0cc83e04d870139e04d38353fb20ed243be4b0214ae458a1a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.446443 containerd[1593]: time="2025-07-11T00:37:15.446385489Z" level=error msg="Failed to destroy network for sandbox \"40979b5cb5bf28e3262757a88ebf9abb57224472c84ae4255baefb961cfd4bcb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.448404 containerd[1593]: time="2025-07-11T00:37:15.448375099Z" level=error msg="Failed to destroy network for sandbox \"f01bd460ace00efb2f639aa996829ffa09ee368c84fc0fdc3891f62fd5a82b24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.451145 containerd[1593]: time="2025-07-11T00:37:15.451123339Z" level=error msg="Failed to destroy network for sandbox \"37667cd9a7befa5db7db4a0332f387745359285f569c964771594086abb7fbda\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.491496 containerd[1593]: time="2025-07-11T00:37:15.491420434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zgbdl,Uid:941673f8-2f4b-49ba-ba5f-54183525dcb0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40979b5cb5bf28e3262757a88ebf9abb57224472c84ae4255baefb961cfd4bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.492286 containerd[1593]: time="2025-07-11T00:37:15.491428289Z" level=error msg="Failed to destroy network for sandbox \"4d28672687bc9336843f18c22c5d3d1473dad4f206e8e6cea72fdea2bd496a9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.492286 containerd[1593]: time="2025-07-11T00:37:15.491440662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vk287,Uid:7863450b-28e4-4507-a555-78c5a4d5ec7f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f01bd460ace00efb2f639aa996829ffa09ee368c84fc0fdc3891f62fd5a82b24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.492286 containerd[1593]: time="2025-07-11T00:37:15.491450100Z" level=error msg="Failed to destroy network for sandbox \"4844428f421c988f4d5d6166d04b5e5cc225d0bf68aa94933c7838acab14b488\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.492286 containerd[1593]: time="2025-07-11T00:37:15.491451723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5687cb8cd4-lfqrj,Uid:736398d4-2e16-40ab-bc23-72a7bfe75126,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37667cd9a7befa5db7db4a0332f387745359285f569c964771594086abb7fbda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.492488 containerd[1593]: time="2025-07-11T00:37:15.491457203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5585ccdd99-djkf8,Uid:9560a52b-e06c-40e5-b0fb-f63c4d7c274e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78dfbad8d9d60d0cc83e04d870139e04d38353fb20ed243be4b0214ae458a1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.492835 containerd[1593]: time="2025-07-11T00:37:15.492796718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-858bcfbfbd-kkd59,Uid:79fe3b9c-5a0f-474d-b227-f6f66be0d745,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe5bfa49edf104a0b4d6559627ea82ddf2fc51ec96c029cf4a310ace4d03142c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.493328 containerd[1593]: time="2025-07-11T00:37:15.493262456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5687cb8cd4-55jvn,Uid:25f9108d-d5b2-470f-8782-564d4a6553a2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d28672687bc9336843f18c22c5d3d1473dad4f206e8e6cea72fdea2bd496a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.494799 containerd[1593]: time="2025-07-11T00:37:15.494699514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tpjr4,Uid:82c1d25a-cd9d-473e-b8bb-5e0d442fa97e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4844428f421c988f4d5d6166d04b5e5cc225d0bf68aa94933c7838acab14b488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.499607 systemd[1]: Created slice kubepods-besteffort-pod3ff74661_174d_4a70_b02a_c30fd2606ef6.slice - libcontainer container kubepods-besteffort-pod3ff74661_174d_4a70_b02a_c30fd2606ef6.slice. Jul 11 00:37:15.501945 containerd[1593]: time="2025-07-11T00:37:15.501921174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgrvh,Uid:3ff74661-174d-4a70-b02a-c30fd2606ef6,Namespace:calico-system,Attempt:0,}" Jul 11 00:37:15.502846 kubelet[2727]: E0711 00:37:15.502810 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f01bd460ace00efb2f639aa996829ffa09ee368c84fc0fdc3891f62fd5a82b24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.503111 kubelet[2727]: E0711 00:37:15.503037 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f01bd460ace00efb2f639aa996829ffa09ee368c84fc0fdc3891f62fd5a82b24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vk287" Jul 11 00:37:15.503111 kubelet[2727]: E0711 00:37:15.503062 2727 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f01bd460ace00efb2f639aa996829ffa09ee368c84fc0fdc3891f62fd5a82b24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vk287" Jul 11 00:37:15.503111 kubelet[2727]: E0711 00:37:15.502862 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"fe5bfa49edf104a0b4d6559627ea82ddf2fc51ec96c029cf4a310ace4d03142c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.503308 kubelet[2727]: E0711 00:37:15.503099 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vk287_kube-system(7863450b-28e4-4507-a555-78c5a4d5ec7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vk287_kube-system(7863450b-28e4-4507-a555-78c5a4d5ec7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f01bd460ace00efb2f639aa996829ffa09ee368c84fc0fdc3891f62fd5a82b24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vk287" podUID="7863450b-28e4-4507-a555-78c5a4d5ec7f" Jul 11 00:37:15.503308 kubelet[2727]: E0711 00:37:15.502900 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4844428f421c988f4d5d6166d04b5e5cc225d0bf68aa94933c7838acab14b488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.503308 kubelet[2727]: E0711 00:37:15.503134 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4844428f421c988f4d5d6166d04b5e5cc225d0bf68aa94933c7838acab14b488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-tpjr4" Jul 11 00:37:15.503506 kubelet[2727]: E0711 00:37:15.503135 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe5bfa49edf104a0b4d6559627ea82ddf2fc51ec96c029cf4a310ace4d03142c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-858bcfbfbd-kkd59" Jul 11 00:37:15.503506 kubelet[2727]: E0711 00:37:15.503162 2727 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe5bfa49edf104a0b4d6559627ea82ddf2fc51ec96c029cf4a310ace4d03142c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-858bcfbfbd-kkd59" Jul 11 00:37:15.503506 kubelet[2727]: E0711 00:37:15.503198 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-858bcfbfbd-kkd59_calico-system(79fe3b9c-5a0f-474d-b227-f6f66be0d745)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-858bcfbfbd-kkd59_calico-system(79fe3b9c-5a0f-474d-b227-f6f66be0d745)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"fe5bfa49edf104a0b4d6559627ea82ddf2fc51ec96c029cf4a310ace4d03142c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-858bcfbfbd-kkd59" podUID="79fe3b9c-5a0f-474d-b227-f6f66be0d745" Jul 11 00:37:15.503616 kubelet[2727]: E0711 00:37:15.502875 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37667cd9a7befa5db7db4a0332f387745359285f569c964771594086abb7fbda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.503616 kubelet[2727]: E0711 00:37:15.503250 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37667cd9a7befa5db7db4a0332f387745359285f569c964771594086abb7fbda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5687cb8cd4-lfqrj" Jul 11 00:37:15.503616 kubelet[2727]: E0711 00:37:15.503264 2727 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37667cd9a7befa5db7db4a0332f387745359285f569c964771594086abb7fbda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5687cb8cd4-lfqrj" Jul 11 00:37:15.503685 kubelet[2727]: E0711 00:37:15.503286 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5687cb8cd4-lfqrj_calico-apiserver(736398d4-2e16-40ab-bc23-72a7bfe75126)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5687cb8cd4-lfqrj_calico-apiserver(736398d4-2e16-40ab-bc23-72a7bfe75126)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37667cd9a7befa5db7db4a0332f387745359285f569c964771594086abb7fbda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5687cb8cd4-lfqrj" podUID="736398d4-2e16-40ab-bc23-72a7bfe75126" Jul 11 00:37:15.503685 kubelet[2727]: E0711 00:37:15.502808 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40979b5cb5bf28e3262757a88ebf9abb57224472c84ae4255baefb961cfd4bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.503685 kubelet[2727]: E0711 00:37:15.503320 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40979b5cb5bf28e3262757a88ebf9abb57224472c84ae4255baefb961cfd4bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-zgbdl" Jul 11 00:37:15.503775 kubelet[2727]: E0711 00:37:15.503334 2727 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40979b5cb5bf28e3262757a88ebf9abb57224472c84ae4255baefb961cfd4bcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zgbdl" Jul 11 00:37:15.503775 kubelet[2727]: E0711 00:37:15.503355 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zgbdl_kube-system(941673f8-2f4b-49ba-ba5f-54183525dcb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zgbdl_kube-system(941673f8-2f4b-49ba-ba5f-54183525dcb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40979b5cb5bf28e3262757a88ebf9abb57224472c84ae4255baefb961cfd4bcb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zgbdl" podUID="941673f8-2f4b-49ba-ba5f-54183525dcb0" Jul 11 00:37:15.503775 kubelet[2727]: E0711 00:37:15.502885 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78dfbad8d9d60d0cc83e04d870139e04d38353fb20ed243be4b0214ae458a1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.503882 kubelet[2727]: E0711 00:37:15.503376 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78dfbad8d9d60d0cc83e04d870139e04d38353fb20ed243be4b0214ae458a1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5585ccdd99-djkf8" Jul 11 00:37:15.503882 kubelet[2727]: E0711 00:37:15.503387 2727 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78dfbad8d9d60d0cc83e04d870139e04d38353fb20ed243be4b0214ae458a1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5585ccdd99-djkf8" Jul 11 00:37:15.503882 kubelet[2727]: E0711 00:37:15.503405 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5585ccdd99-djkf8_calico-system(9560a52b-e06c-40e5-b0fb-f63c4d7c274e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5585ccdd99-djkf8_calico-system(9560a52b-e06c-40e5-b0fb-f63c4d7c274e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78dfbad8d9d60d0cc83e04d870139e04d38353fb20ed243be4b0214ae458a1a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5585ccdd99-djkf8" 
podUID="9560a52b-e06c-40e5-b0fb-f63c4d7c274e" Jul 11 00:37:15.503983 kubelet[2727]: E0711 00:37:15.503145 2727 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4844428f421c988f4d5d6166d04b5e5cc225d0bf68aa94933c7838acab14b488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-tpjr4" Jul 11 00:37:15.503983 kubelet[2727]: E0711 00:37:15.503434 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-tpjr4_calico-system(82c1d25a-cd9d-473e-b8bb-5e0d442fa97e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-tpjr4_calico-system(82c1d25a-cd9d-473e-b8bb-5e0d442fa97e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4844428f421c988f4d5d6166d04b5e5cc225d0bf68aa94933c7838acab14b488\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-tpjr4" podUID="82c1d25a-cd9d-473e-b8bb-5e0d442fa97e" Jul 11 00:37:15.503983 kubelet[2727]: E0711 00:37:15.502926 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d28672687bc9336843f18c22c5d3d1473dad4f206e8e6cea72fdea2bd496a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.504082 kubelet[2727]: E0711 00:37:15.503465 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d28672687bc9336843f18c22c5d3d1473dad4f206e8e6cea72fdea2bd496a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5687cb8cd4-55jvn" Jul 11 00:37:15.504082 kubelet[2727]: E0711 00:37:15.503476 2727 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d28672687bc9336843f18c22c5d3d1473dad4f206e8e6cea72fdea2bd496a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5687cb8cd4-55jvn" Jul 11 00:37:15.504082 kubelet[2727]: E0711 00:37:15.503496 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5687cb8cd4-55jvn_calico-apiserver(25f9108d-d5b2-470f-8782-564d4a6553a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5687cb8cd4-55jvn_calico-apiserver(25f9108d-d5b2-470f-8782-564d4a6553a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d28672687bc9336843f18c22c5d3d1473dad4f206e8e6cea72fdea2bd496a9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5687cb8cd4-55jvn" podUID="25f9108d-d5b2-470f-8782-564d4a6553a2" Jul 11 00:37:15.552859 containerd[1593]: time="2025-07-11T00:37:15.552727796Z" level=error msg="Failed to destroy network for sandbox \"21dc1f144fae3b4165ed812293434b40b93cc70bbe12fd8565f57290cd11d0c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.554032 containerd[1593]: time="2025-07-11T00:37:15.553989144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgrvh,Uid:3ff74661-174d-4a70-b02a-c30fd2606ef6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21dc1f144fae3b4165ed812293434b40b93cc70bbe12fd8565f57290cd11d0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.554221 kubelet[2727]: E0711 00:37:15.554180 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21dc1f144fae3b4165ed812293434b40b93cc70bbe12fd8565f57290cd11d0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:37:15.554369 kubelet[2727]: E0711 00:37:15.554250 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21dc1f144fae3b4165ed812293434b40b93cc70bbe12fd8565f57290cd11d0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgrvh" Jul 11 00:37:15.554369 kubelet[2727]: E0711 00:37:15.554271 2727 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21dc1f144fae3b4165ed812293434b40b93cc70bbe12fd8565f57290cd11d0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgrvh" Jul 11 00:37:15.554369 kubelet[2727]: E0711 00:37:15.554324 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lgrvh_calico-system(3ff74661-174d-4a70-b02a-c30fd2606ef6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lgrvh_calico-system(3ff74661-174d-4a70-b02a-c30fd2606ef6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21dc1f144fae3b4165ed812293434b40b93cc70bbe12fd8565f57290cd11d0c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lgrvh" podUID="3ff74661-174d-4a70-b02a-c30fd2606ef6" Jul 11 00:37:15.569054 containerd[1593]: time="2025-07-11T00:37:15.569004532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 00:37:20.686616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191511961.mount: Deactivated 
successfully. Jul 11 00:37:22.737118 containerd[1593]: time="2025-07-11T00:37:22.737065814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:22.737960 containerd[1593]: time="2025-07-11T00:37:22.737898420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 11 00:37:22.739333 containerd[1593]: time="2025-07-11T00:37:22.739302652Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:22.741476 containerd[1593]: time="2025-07-11T00:37:22.741440324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:22.741965 containerd[1593]: time="2025-07-11T00:37:22.741927430Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.172882242s" Jul 11 00:37:22.741994 containerd[1593]: time="2025-07-11T00:37:22.741965352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 11 00:37:22.749701 containerd[1593]: time="2025-07-11T00:37:22.749643568Z" level=info msg="CreateContainer within sandbox \"abba2b1a63646bca328516e9f5761cfadb4a27bf9d46d1239220627420ae1e4f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 00:37:22.771117 containerd[1593]: time="2025-07-11T00:37:22.771068550Z" level=info msg="Container 3cd02b748ff8506573139707930b0734dacaf6f59ac204f90f8dd1c67bba91ac: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:37:22.797989 containerd[1593]: time="2025-07-11T00:37:22.797947984Z" level=info msg="CreateContainer within sandbox \"abba2b1a63646bca328516e9f5761cfadb4a27bf9d46d1239220627420ae1e4f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3cd02b748ff8506573139707930b0734dacaf6f59ac204f90f8dd1c67bba91ac\"" Jul 11 00:37:22.798451 containerd[1593]: time="2025-07-11T00:37:22.798396238Z" level=info msg="StartContainer for \"3cd02b748ff8506573139707930b0734dacaf6f59ac204f90f8dd1c67bba91ac\"" Jul 11 00:37:22.806476 containerd[1593]: time="2025-07-11T00:37:22.806435765Z" level=info msg="connecting to shim 3cd02b748ff8506573139707930b0734dacaf6f59ac204f90f8dd1c67bba91ac" address="unix:///run/containerd/s/6687f65a7fb36bd0cfc26611dbe74f8b20bf1c2759d2663d128c8ee0959b6e77" protocol=ttrpc version=3 Jul 11 00:37:22.827438 systemd[1]: Started cri-containerd-3cd02b748ff8506573139707930b0734dacaf6f59ac204f90f8dd1c67bba91ac.scope - libcontainer container 3cd02b748ff8506573139707930b0734dacaf6f59ac204f90f8dd1c67bba91ac. 
Jul 11 00:37:22.871095 containerd[1593]: time="2025-07-11T00:37:22.871057342Z" level=info msg="StartContainer for \"3cd02b748ff8506573139707930b0734dacaf6f59ac204f90f8dd1c67bba91ac\" returns successfully" Jul 11 00:37:22.901603 kubelet[2727]: I0711 00:37:22.901521 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j2gg7" podStartSLOduration=0.918918626 podStartE2EDuration="17.901496514s" podCreationTimestamp="2025-07-11 00:37:05 +0000 UTC" firstStartedPulling="2025-07-11 00:37:05.760084385 +0000 UTC m=+18.350618878" lastFinishedPulling="2025-07-11 00:37:22.742662263 +0000 UTC m=+35.333196766" observedRunningTime="2025-07-11 00:37:22.901110588 +0000 UTC m=+35.491645091" watchObservedRunningTime="2025-07-11 00:37:22.901496514 +0000 UTC m=+35.492031017" Jul 11 00:37:22.953815 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 11 00:37:22.953921 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 11 00:37:23.120633 kubelet[2727]: I0711 00:37:23.120593 2727 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tswl\" (UniqueName: \"kubernetes.io/projected/9560a52b-e06c-40e5-b0fb-f63c4d7c274e-kube-api-access-8tswl\") pod \"9560a52b-e06c-40e5-b0fb-f63c4d7c274e\" (UID: \"9560a52b-e06c-40e5-b0fb-f63c4d7c274e\") " Jul 11 00:37:23.120633 kubelet[2727]: I0711 00:37:23.120633 2727 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9560a52b-e06c-40e5-b0fb-f63c4d7c274e-whisker-backend-key-pair\") pod \"9560a52b-e06c-40e5-b0fb-f63c4d7c274e\" (UID: \"9560a52b-e06c-40e5-b0fb-f63c4d7c274e\") " Jul 11 00:37:23.120633 kubelet[2727]: I0711 00:37:23.120653 2727 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9560a52b-e06c-40e5-b0fb-f63c4d7c274e-whisker-ca-bundle\") pod \"9560a52b-e06c-40e5-b0fb-f63c4d7c274e\" (UID: \"9560a52b-e06c-40e5-b0fb-f63c4d7c274e\") " Jul 11 00:37:23.121142 kubelet[2727]: I0711 00:37:23.121099 2727 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9560a52b-e06c-40e5-b0fb-f63c4d7c274e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9560a52b-e06c-40e5-b0fb-f63c4d7c274e" (UID: "9560a52b-e06c-40e5-b0fb-f63c4d7c274e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 00:37:23.124426 kubelet[2727]: I0711 00:37:23.124346 2727 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9560a52b-e06c-40e5-b0fb-f63c4d7c274e-kube-api-access-8tswl" (OuterVolumeSpecName: "kube-api-access-8tswl") pod "9560a52b-e06c-40e5-b0fb-f63c4d7c274e" (UID: "9560a52b-e06c-40e5-b0fb-f63c4d7c274e"). InnerVolumeSpecName "kube-api-access-8tswl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:37:23.124807 kubelet[2727]: I0711 00:37:23.124784 2727 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9560a52b-e06c-40e5-b0fb-f63c4d7c274e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9560a52b-e06c-40e5-b0fb-f63c4d7c274e" (UID: "9560a52b-e06c-40e5-b0fb-f63c4d7c274e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:37:23.221934 kubelet[2727]: I0711 00:37:23.221888 2727 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8tswl\" (UniqueName: \"kubernetes.io/projected/9560a52b-e06c-40e5-b0fb-f63c4d7c274e-kube-api-access-8tswl\") on node \"localhost\" DevicePath \"\"" Jul 11 00:37:23.221934 kubelet[2727]: I0711 00:37:23.221915 2727 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9560a52b-e06c-40e5-b0fb-f63c4d7c274e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 11 00:37:23.221934 kubelet[2727]: I0711 00:37:23.221925 2727 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9560a52b-e06c-40e5-b0fb-f63c4d7c274e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 11 00:37:23.501530 systemd[1]: Removed slice kubepods-besteffort-pod9560a52b_e06c_40e5_b0fb_f63c4d7c274e.slice - libcontainer container kubepods-besteffort-pod9560a52b_e06c_40e5_b0fb_f63c4d7c274e.slice. Jul 11 00:37:23.747959 systemd[1]: var-lib-kubelet-pods-9560a52b\x2de06c\x2d40e5\x2db0fb\x2df63c4d7c274e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8tswl.mount: Deactivated successfully. Jul 11 00:37:23.748080 systemd[1]: var-lib-kubelet-pods-9560a52b\x2de06c\x2d40e5\x2db0fb\x2df63c4d7c274e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 11 00:37:24.007275 systemd[1]: Created slice kubepods-besteffort-pod8692a1a1_44a7_4ef5_a4c9_e753091645f4.slice - libcontainer container kubepods-besteffort-pod8692a1a1_44a7_4ef5_a4c9_e753091645f4.slice. Jul 11 00:37:24.026832 kubelet[2727]: I0711 00:37:24.026797 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8692a1a1-44a7-4ef5-a4c9-e753091645f4-whisker-backend-key-pair\") pod \"whisker-6d494686d9-qwxdt\" (UID: \"8692a1a1-44a7-4ef5-a4c9-e753091645f4\") " pod="calico-system/whisker-6d494686d9-qwxdt" Jul 11 00:37:24.026832 kubelet[2727]: I0711 00:37:24.026831 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7f8b\" (UniqueName: \"kubernetes.io/projected/8692a1a1-44a7-4ef5-a4c9-e753091645f4-kube-api-access-q7f8b\") pod \"whisker-6d494686d9-qwxdt\" (UID: \"8692a1a1-44a7-4ef5-a4c9-e753091645f4\") " pod="calico-system/whisker-6d494686d9-qwxdt" Jul 11 00:37:24.027227 kubelet[2727]: I0711 00:37:24.026855 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8692a1a1-44a7-4ef5-a4c9-e753091645f4-whisker-ca-bundle\") pod \"whisker-6d494686d9-qwxdt\" (UID: \"8692a1a1-44a7-4ef5-a4c9-e753091645f4\") " pod="calico-system/whisker-6d494686d9-qwxdt" Jul 11 00:37:24.312469 containerd[1593]: time="2025-07-11T00:37:24.312424034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d494686d9-qwxdt,Uid:8692a1a1-44a7-4ef5-a4c9-e753091645f4,Namespace:calico-system,Attempt:0,}" Jul 11 00:37:24.475524 systemd-networkd[1492]: calid73eccd7d65: Link UP Jul 11 00:37:24.476015 systemd-networkd[1492]: calid73eccd7d65: Gained carrier Jul 11 00:37:24.487972 containerd[1593]: 2025-07-11 00:37:24.359 [INFO][4003] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:37:24.487972 containerd[1593]: 2025-07-11 
00:37:24.377 [INFO][4003] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6d494686d9--qwxdt-eth0 whisker-6d494686d9- calico-system 8692a1a1-44a7-4ef5-a4c9-e753091645f4 886 0 2025-07-11 00:37:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d494686d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6d494686d9-qwxdt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid73eccd7d65 [] [] }} ContainerID="4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" Namespace="calico-system" Pod="whisker-6d494686d9-qwxdt" WorkloadEndpoint="localhost-k8s-whisker--6d494686d9--qwxdt-" Jul 11 00:37:24.487972 containerd[1593]: 2025-07-11 00:37:24.377 [INFO][4003] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" Namespace="calico-system" Pod="whisker-6d494686d9-qwxdt" WorkloadEndpoint="localhost-k8s-whisker--6d494686d9--qwxdt-eth0" Jul 11 00:37:24.487972 containerd[1593]: 2025-07-11 00:37:24.436 [INFO][4018] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" HandleID="k8s-pod-network.4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" Workload="localhost-k8s-whisker--6d494686d9--qwxdt-eth0" Jul 11 00:37:24.488211 containerd[1593]: 2025-07-11 00:37:24.437 [INFO][4018] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" HandleID="k8s-pod-network.4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" Workload="localhost-k8s-whisker--6d494686d9--qwxdt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bee30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6d494686d9-qwxdt", "timestamp":"2025-07-11 00:37:24.436424462 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:37:24.488211 containerd[1593]: 2025-07-11 00:37:24.437 [INFO][4018] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:37:24.488211 containerd[1593]: 2025-07-11 00:37:24.437 [INFO][4018] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:37:24.488211 containerd[1593]: 2025-07-11 00:37:24.437 [INFO][4018] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:37:24.488211 containerd[1593]: 2025-07-11 00:37:24.444 [INFO][4018] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" host="localhost" Jul 11 00:37:24.488211 containerd[1593]: 2025-07-11 00:37:24.450 [INFO][4018] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:37:24.488211 containerd[1593]: 2025-07-11 00:37:24.454 [INFO][4018] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:37:24.488211 containerd[1593]: 2025-07-11 00:37:24.456 [INFO][4018] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:24.488211 containerd[1593]: 2025-07-11 00:37:24.457 [INFO][4018] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:24.488211 containerd[1593]: 2025-07-11 00:37:24.457 [INFO][4018] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" host="localhost" Jul 11 00:37:24.488485 containerd[1593]: 2025-07-11 00:37:24.458 [INFO][4018] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33 Jul 11 00:37:24.488485 containerd[1593]: 2025-07-11 00:37:24.461 [INFO][4018] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" host="localhost" Jul 11 00:37:24.488485 containerd[1593]: 2025-07-11 00:37:24.465 [INFO][4018] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" host="localhost" Jul 11 00:37:24.488485 containerd[1593]: 2025-07-11 00:37:24.465 [INFO][4018] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" host="localhost" Jul 11 00:37:24.488485 containerd[1593]: 2025-07-11 00:37:24.465 [INFO][4018] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
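The ipam.go entries above show the CNI plugin taking the host-wide IPAM lock, confirming this host's affinity for block 192.168.88.128/26, and claiming 192.168.88.129/26 for the whisker pod; the same flow repeats below for the other sandboxes. A deliberately simplified toy model of that "next free address in the affine block" step (not Calico's actual ipam.go logic):

# Toy model of the allocation step logged above: given the host's affine block
# and the addresses already handed out, pick the next free address.
# Simplified illustration only; Calico's real IPAM also handles handles,
# borrowed blocks, reservations, etc.
import ipaddress

def next_free(block, allocated):
    net = ipaddress.ip_network(block)
    for ip in net.hosts():          # hosts() skips the network/broadcast addresses
        if str(ip) not in allocated:
            return ip
    raise RuntimeError(f"block {block} is exhausted")

allocated = set()
for _ in range(4):                  # the flows in this section assign four addresses
    ip = next_free("192.168.88.128/26", allocated)
    allocated.add(str(ip))
    print(ip)                       # 192.168.88.129 ... 192.168.88.132, matching the log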
Jul 11 00:37:24.488485 containerd[1593]: 2025-07-11 00:37:24.465 [INFO][4018] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" HandleID="k8s-pod-network.4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" Workload="localhost-k8s-whisker--6d494686d9--qwxdt-eth0" Jul 11 00:37:24.488615 containerd[1593]: 2025-07-11 00:37:24.468 [INFO][4003] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" Namespace="calico-system" Pod="whisker-6d494686d9-qwxdt" WorkloadEndpoint="localhost-k8s-whisker--6d494686d9--qwxdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d494686d9--qwxdt-eth0", GenerateName:"whisker-6d494686d9-", Namespace:"calico-system", SelfLink:"", UID:"8692a1a1-44a7-4ef5-a4c9-e753091645f4", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 37, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d494686d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6d494686d9-qwxdt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid73eccd7d65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:24.488615 containerd[1593]: 2025-07-11 00:37:24.468 [INFO][4003] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" Namespace="calico-system" Pod="whisker-6d494686d9-qwxdt" WorkloadEndpoint="localhost-k8s-whisker--6d494686d9--qwxdt-eth0" Jul 11 00:37:24.488688 containerd[1593]: 2025-07-11 00:37:24.468 [INFO][4003] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid73eccd7d65 ContainerID="4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" Namespace="calico-system" Pod="whisker-6d494686d9-qwxdt" WorkloadEndpoint="localhost-k8s-whisker--6d494686d9--qwxdt-eth0" Jul 11 00:37:24.488688 containerd[1593]: 2025-07-11 00:37:24.476 [INFO][4003] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" Namespace="calico-system" Pod="whisker-6d494686d9-qwxdt" WorkloadEndpoint="localhost-k8s-whisker--6d494686d9--qwxdt-eth0" Jul 11 00:37:24.488737 containerd[1593]: 2025-07-11 00:37:24.476 [INFO][4003] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" Namespace="calico-system" Pod="whisker-6d494686d9-qwxdt" WorkloadEndpoint="localhost-k8s-whisker--6d494686d9--qwxdt-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d494686d9--qwxdt-eth0", GenerateName:"whisker-6d494686d9-", Namespace:"calico-system", SelfLink:"", UID:"8692a1a1-44a7-4ef5-a4c9-e753091645f4", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 37, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d494686d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33", Pod:"whisker-6d494686d9-qwxdt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid73eccd7d65", MAC:"96:1e:c1:6d:33:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:24.488787 containerd[1593]: 2025-07-11 00:37:24.484 [INFO][4003] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" Namespace="calico-system" Pod="whisker-6d494686d9-qwxdt" WorkloadEndpoint="localhost-k8s-whisker--6d494686d9--qwxdt-eth0" Jul 11 00:37:24.663069 containerd[1593]: time="2025-07-11T00:37:24.662952661Z" level=info msg="connecting to shim 4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33" address="unix:///run/containerd/s/5d8b210b85679868be1953f657b3f8de3d982c6c623ef23be0c0d24943423b13" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:37:24.699365 systemd[1]: Started cri-containerd-4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33.scope - libcontainer container 4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33. Jul 11 00:37:24.711631 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:37:24.744940 containerd[1593]: time="2025-07-11T00:37:24.744907356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d494686d9-qwxdt,Uid:8692a1a1-44a7-4ef5-a4c9-e753091645f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33\"" Jul 11 00:37:24.746484 containerd[1593]: time="2025-07-11T00:37:24.746247878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 00:37:25.499657 kubelet[2727]: I0711 00:37:25.499508 2727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9560a52b-e06c-40e5-b0fb-f63c4d7c274e" path="/var/lib/kubelet/pods/9560a52b-e06c-40e5-b0fb-f63c4d7c274e/volumes" Jul 11 00:37:25.620409 systemd-networkd[1492]: calid73eccd7d65: Gained IPv6LL Jul 11 00:37:25.883258 systemd[1]: Started sshd@7-10.0.0.141:22-10.0.0.1:46352.service - OpenSSH per-connection server daemon (10.0.0.1:46352). 
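The k8s.go 446 entry above dumps the updated WorkloadEndpoint, including the pod name, its IPNetworks, the host-side interface calid73eccd7d65 and the freshly generated MAC. A rough sketch for extracting those fields from such dumps, assuming each dump arrives as a single journal line printed in the Go-struct form shown (again illustrative, not a tool shipped with Calico):

# Rough sketch: pull pod, IPs, host-side interface and MAC out of the
# 'Added Mac, interface name, and active container ID' WorkloadEndpoint dumps.
import re
import sys

FIELDS = {
    "pod":   re.compile(r'Pod:"([^"]+)"'),
    "ips":   re.compile(r'IPNetworks:\[\]string\{([^}]*)\}'),
    "iface": re.compile(r'InterfaceName:"([^"]+)"'),
    "mac":   re.compile(r'MAC:"([^"]*)"'),
}

for line in sys.stdin:
    if "Added Mac, interface name, and active container ID" not in line:
        continue
    values = []
    for name, rx in FIELDS.items():
        m = rx.search(line)
        values.append(m.group(1) if m else "")
    print("\t".join(values))   # e.g. whisker-6d494686d9-qwxdt  "192.168.88.129/32"  calid73eccd7d65  96:1e:c1:6d:33:bd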
Jul 11 00:37:25.938141 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 46352 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:37:25.939500 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:37:25.943836 systemd-logind[1578]: New session 8 of user core. Jul 11 00:37:25.953378 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 00:37:26.079037 sshd[4110]: Connection closed by 10.0.0.1 port 46352 Jul 11 00:37:26.079337 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Jul 11 00:37:26.083771 systemd[1]: sshd@7-10.0.0.141:22-10.0.0.1:46352.service: Deactivated successfully. Jul 11 00:37:26.085819 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:37:26.086621 systemd-logind[1578]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:37:26.087755 systemd-logind[1578]: Removed session 8. Jul 11 00:37:26.238926 containerd[1593]: time="2025-07-11T00:37:26.238818052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:26.239872 containerd[1593]: time="2025-07-11T00:37:26.239841106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 11 00:37:26.241007 containerd[1593]: time="2025-07-11T00:37:26.240976791Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:26.242928 containerd[1593]: time="2025-07-11T00:37:26.242903273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:26.243523 containerd[1593]: time="2025-07-11T00:37:26.243472745Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.497188719s" Jul 11 00:37:26.243523 containerd[1593]: time="2025-07-11T00:37:26.243507560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 11 00:37:26.248413 containerd[1593]: time="2025-07-11T00:37:26.248375002Z" level=info msg="CreateContainer within sandbox \"4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 00:37:26.255859 containerd[1593]: time="2025-07-11T00:37:26.255826273Z" level=info msg="Container 2000697cc8ede97ad48c458056ca37a096aa58e37dc571f2da04ab4e8c16d0ed: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:37:26.263373 containerd[1593]: time="2025-07-11T00:37:26.263340442Z" level=info msg="CreateContainer within sandbox \"4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2000697cc8ede97ad48c458056ca37a096aa58e37dc571f2da04ab4e8c16d0ed\"" Jul 11 00:37:26.263802 containerd[1593]: time="2025-07-11T00:37:26.263750552Z" level=info msg="StartContainer for 
\"2000697cc8ede97ad48c458056ca37a096aa58e37dc571f2da04ab4e8c16d0ed\"" Jul 11 00:37:26.264716 containerd[1593]: time="2025-07-11T00:37:26.264669590Z" level=info msg="connecting to shim 2000697cc8ede97ad48c458056ca37a096aa58e37dc571f2da04ab4e8c16d0ed" address="unix:///run/containerd/s/5d8b210b85679868be1953f657b3f8de3d982c6c623ef23be0c0d24943423b13" protocol=ttrpc version=3 Jul 11 00:37:26.285370 systemd[1]: Started cri-containerd-2000697cc8ede97ad48c458056ca37a096aa58e37dc571f2da04ab4e8c16d0ed.scope - libcontainer container 2000697cc8ede97ad48c458056ca37a096aa58e37dc571f2da04ab4e8c16d0ed. Jul 11 00:37:26.328498 containerd[1593]: time="2025-07-11T00:37:26.328459473Z" level=info msg="StartContainer for \"2000697cc8ede97ad48c458056ca37a096aa58e37dc571f2da04ab4e8c16d0ed\" returns successfully" Jul 11 00:37:26.329812 containerd[1593]: time="2025-07-11T00:37:26.329785687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 00:37:26.495767 containerd[1593]: time="2025-07-11T00:37:26.495430046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5687cb8cd4-55jvn,Uid:25f9108d-d5b2-470f-8782-564d4a6553a2,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:37:26.495767 containerd[1593]: time="2025-07-11T00:37:26.495569578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vk287,Uid:7863450b-28e4-4507-a555-78c5a4d5ec7f,Namespace:kube-system,Attempt:0,}" Jul 11 00:37:26.622312 systemd-networkd[1492]: calie92c31f9f67: Link UP Jul 11 00:37:26.622968 systemd-networkd[1492]: calie92c31f9f67: Gained carrier Jul 11 00:37:26.636429 containerd[1593]: 2025-07-11 00:37:26.554 [INFO][4188] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:37:26.636429 containerd[1593]: 2025-07-11 00:37:26.565 [INFO][4188] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5687cb8cd4--55jvn-eth0 calico-apiserver-5687cb8cd4- calico-apiserver 25f9108d-d5b2-470f-8782-564d4a6553a2 818 0 2025-07-11 00:37:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5687cb8cd4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5687cb8cd4-55jvn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie92c31f9f67 [] [] }} ContainerID="6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-55jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--55jvn-" Jul 11 00:37:26.636429 containerd[1593]: 2025-07-11 00:37:26.565 [INFO][4188] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-55jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--55jvn-eth0" Jul 11 00:37:26.636429 containerd[1593]: 2025-07-11 00:37:26.590 [INFO][4215] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" HandleID="k8s-pod-network.6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" Workload="localhost-k8s-calico--apiserver--5687cb8cd4--55jvn-eth0" Jul 11 00:37:26.636643 containerd[1593]: 2025-07-11 00:37:26.590 
[INFO][4215] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" HandleID="k8s-pod-network.6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" Workload="localhost-k8s-calico--apiserver--5687cb8cd4--55jvn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c70b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5687cb8cd4-55jvn", "timestamp":"2025-07-11 00:37:26.590261796 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:37:26.636643 containerd[1593]: 2025-07-11 00:37:26.590 [INFO][4215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:37:26.636643 containerd[1593]: 2025-07-11 00:37:26.590 [INFO][4215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:37:26.636643 containerd[1593]: 2025-07-11 00:37:26.590 [INFO][4215] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:37:26.636643 containerd[1593]: 2025-07-11 00:37:26.596 [INFO][4215] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" host="localhost" Jul 11 00:37:26.636643 containerd[1593]: 2025-07-11 00:37:26.599 [INFO][4215] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:37:26.636643 containerd[1593]: 2025-07-11 00:37:26.603 [INFO][4215] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:37:26.636643 containerd[1593]: 2025-07-11 00:37:26.605 [INFO][4215] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:26.636643 containerd[1593]: 2025-07-11 00:37:26.607 [INFO][4215] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:26.636643 containerd[1593]: 2025-07-11 00:37:26.607 [INFO][4215] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" host="localhost" Jul 11 00:37:26.636849 containerd[1593]: 2025-07-11 00:37:26.608 [INFO][4215] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af Jul 11 00:37:26.636849 containerd[1593]: 2025-07-11 00:37:26.613 [INFO][4215] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" host="localhost" Jul 11 00:37:26.636849 containerd[1593]: 2025-07-11 00:37:26.617 [INFO][4215] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" host="localhost" Jul 11 00:37:26.636849 containerd[1593]: 2025-07-11 00:37:26.617 [INFO][4215] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" host="localhost" Jul 11 00:37:26.636849 containerd[1593]: 2025-07-11 00:37:26.617 [INFO][4215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:37:26.636849 containerd[1593]: 2025-07-11 00:37:26.617 [INFO][4215] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" HandleID="k8s-pod-network.6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" Workload="localhost-k8s-calico--apiserver--5687cb8cd4--55jvn-eth0" Jul 11 00:37:26.636973 containerd[1593]: 2025-07-11 00:37:26.619 [INFO][4188] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-55jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--55jvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5687cb8cd4--55jvn-eth0", GenerateName:"calico-apiserver-5687cb8cd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"25f9108d-d5b2-470f-8782-564d4a6553a2", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 37, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5687cb8cd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5687cb8cd4-55jvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie92c31f9f67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:26.637024 containerd[1593]: 2025-07-11 00:37:26.619 [INFO][4188] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-55jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--55jvn-eth0" Jul 11 00:37:26.637024 containerd[1593]: 2025-07-11 00:37:26.619 [INFO][4188] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie92c31f9f67 ContainerID="6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-55jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--55jvn-eth0" Jul 11 00:37:26.637024 containerd[1593]: 2025-07-11 00:37:26.623 [INFO][4188] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-55jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--55jvn-eth0" Jul 11 00:37:26.637098 containerd[1593]: 2025-07-11 00:37:26.623 [INFO][4188] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-55jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--55jvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5687cb8cd4--55jvn-eth0", GenerateName:"calico-apiserver-5687cb8cd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"25f9108d-d5b2-470f-8782-564d4a6553a2", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 37, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5687cb8cd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af", Pod:"calico-apiserver-5687cb8cd4-55jvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie92c31f9f67", MAC:"86:16:71:5b:2c:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:26.637206 containerd[1593]: 2025-07-11 00:37:26.630 [INFO][4188] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-55jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--55jvn-eth0" Jul 11 00:37:26.668380 containerd[1593]: time="2025-07-11T00:37:26.668334105Z" level=info msg="connecting to shim 6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af" address="unix:///run/containerd/s/01c4f0a905b70bfde182d6fa2abf42ddf1d857a3cb926a4fec001d557af470af" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:37:26.697387 systemd[1]: Started cri-containerd-6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af.scope - libcontainer container 6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af. 
Jul 11 00:37:26.712820 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:37:26.724844 systemd-networkd[1492]: cali2e914a08c2f: Link UP Jul 11 00:37:26.725539 systemd-networkd[1492]: cali2e914a08c2f: Gained carrier Jul 11 00:37:26.741995 containerd[1593]: 2025-07-11 00:37:26.558 [INFO][4198] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:37:26.741995 containerd[1593]: 2025-07-11 00:37:26.568 [INFO][4198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--vk287-eth0 coredns-668d6bf9bc- kube-system 7863450b-28e4-4507-a555-78c5a4d5ec7f 815 0 2025-07-11 00:36:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-vk287 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2e914a08c2f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk287" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk287-" Jul 11 00:37:26.741995 containerd[1593]: 2025-07-11 00:37:26.568 [INFO][4198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk287" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk287-eth0" Jul 11 00:37:26.741995 containerd[1593]: 2025-07-11 00:37:26.592 [INFO][4221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" HandleID="k8s-pod-network.a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" Workload="localhost-k8s-coredns--668d6bf9bc--vk287-eth0" Jul 11 00:37:26.742213 containerd[1593]: 2025-07-11 00:37:26.593 [INFO][4221] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" HandleID="k8s-pod-network.a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" Workload="localhost-k8s-coredns--668d6bf9bc--vk287-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001357a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-vk287", "timestamp":"2025-07-11 00:37:26.592944972 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:37:26.742213 containerd[1593]: 2025-07-11 00:37:26.593 [INFO][4221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:37:26.742213 containerd[1593]: 2025-07-11 00:37:26.617 [INFO][4221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:37:26.742213 containerd[1593]: 2025-07-11 00:37:26.617 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:37:26.742213 containerd[1593]: 2025-07-11 00:37:26.697 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" host="localhost" Jul 11 00:37:26.742213 containerd[1593]: 2025-07-11 00:37:26.702 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:37:26.742213 containerd[1593]: 2025-07-11 00:37:26.706 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:37:26.742213 containerd[1593]: 2025-07-11 00:37:26.707 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:26.742213 containerd[1593]: 2025-07-11 00:37:26.709 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:26.742213 containerd[1593]: 2025-07-11 00:37:26.709 [INFO][4221] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" host="localhost" Jul 11 00:37:26.742536 containerd[1593]: 2025-07-11 00:37:26.710 [INFO][4221] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632 Jul 11 00:37:26.742536 containerd[1593]: 2025-07-11 00:37:26.714 [INFO][4221] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" host="localhost" Jul 11 00:37:26.742536 containerd[1593]: 2025-07-11 00:37:26.719 [INFO][4221] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" host="localhost" Jul 11 00:37:26.742536 containerd[1593]: 2025-07-11 00:37:26.719 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" host="localhost" Jul 11 00:37:26.742536 containerd[1593]: 2025-07-11 00:37:26.719 [INFO][4221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:37:26.742536 containerd[1593]: 2025-07-11 00:37:26.719 [INFO][4221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" HandleID="k8s-pod-network.a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" Workload="localhost-k8s-coredns--668d6bf9bc--vk287-eth0" Jul 11 00:37:26.742657 containerd[1593]: 2025-07-11 00:37:26.722 [INFO][4198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk287" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk287-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vk287-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7863450b-28e4-4507-a555-78c5a4d5ec7f", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-vk287", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e914a08c2f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:26.742727 containerd[1593]: 2025-07-11 00:37:26.722 [INFO][4198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk287" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk287-eth0" Jul 11 00:37:26.742727 containerd[1593]: 2025-07-11 00:37:26.722 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e914a08c2f ContainerID="a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk287" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk287-eth0" Jul 11 00:37:26.742727 containerd[1593]: 2025-07-11 00:37:26.724 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk287" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk287-eth0" Jul 11 00:37:26.742793 
containerd[1593]: 2025-07-11 00:37:26.724 [INFO][4198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk287" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk287-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vk287-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7863450b-28e4-4507-a555-78c5a4d5ec7f", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632", Pod:"coredns-668d6bf9bc-vk287", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e914a08c2f", MAC:"ca:a8:44:2c:76:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:26.742793 containerd[1593]: 2025-07-11 00:37:26.736 [INFO][4198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" Namespace="kube-system" Pod="coredns-668d6bf9bc-vk287" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vk287-eth0" Jul 11 00:37:26.750533 containerd[1593]: time="2025-07-11T00:37:26.750456278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5687cb8cd4-55jvn,Uid:25f9108d-d5b2-470f-8782-564d4a6553a2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af\"" Jul 11 00:37:26.765470 containerd[1593]: time="2025-07-11T00:37:26.765411498Z" level=info msg="connecting to shim a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632" address="unix:///run/containerd/s/1b736ed2fc83abc89b1ba3eeef4dd07fec1893ca066918848eaf839c075b2672" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:37:26.792442 systemd[1]: Started cri-containerd-a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632.scope - libcontainer container a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632. 
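The coredns WorkloadEndpoint above carries three WorkloadEndpointPort entries whose port numbers are printed in hex (Port:0x35 and Port:0x23c1); decoded, they are the usual kube-dns ports. A quick check of the values as logged:

# Decode the hex port numbers printed in the coredns WorkloadEndpoint above.
ports = {"dns": ("UDP", 0x35), "dns-tcp": ("TCP", 0x35), "metrics": ("TCP", 0x23c1)}
for name, (proto, port) in ports.items():
    print(f"{name:8s} {proto}/{port}")   # dns UDP/53, dns-tcp TCP/53, metrics TCP/9153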
Jul 11 00:37:26.804727 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:37:26.838408 containerd[1593]: time="2025-07-11T00:37:26.838358329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vk287,Uid:7863450b-28e4-4507-a555-78c5a4d5ec7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632\"" Jul 11 00:37:26.842020 containerd[1593]: time="2025-07-11T00:37:26.841982814Z" level=info msg="CreateContainer within sandbox \"a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:37:26.852002 containerd[1593]: time="2025-07-11T00:37:26.851958300Z" level=info msg="Container 0a0f0187ee812e73e37080141e169047007c0b32d57b63589c08803247714ff8: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:37:26.858724 containerd[1593]: time="2025-07-11T00:37:26.858681882Z" level=info msg="CreateContainer within sandbox \"a82e7b6039f67e9ea0a9720ca531d1ede9c820ec480397344832b2396dfb0632\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a0f0187ee812e73e37080141e169047007c0b32d57b63589c08803247714ff8\"" Jul 11 00:37:26.859454 containerd[1593]: time="2025-07-11T00:37:26.859264859Z" level=info msg="StartContainer for \"0a0f0187ee812e73e37080141e169047007c0b32d57b63589c08803247714ff8\"" Jul 11 00:37:26.860483 containerd[1593]: time="2025-07-11T00:37:26.860447322Z" level=info msg="connecting to shim 0a0f0187ee812e73e37080141e169047007c0b32d57b63589c08803247714ff8" address="unix:///run/containerd/s/1b736ed2fc83abc89b1ba3eeef4dd07fec1893ca066918848eaf839c075b2672" protocol=ttrpc version=3 Jul 11 00:37:26.886422 systemd[1]: Started cri-containerd-0a0f0187ee812e73e37080141e169047007c0b32d57b63589c08803247714ff8.scope - libcontainer container 0a0f0187ee812e73e37080141e169047007c0b32d57b63589c08803247714ff8. Jul 11 00:37:27.116665 containerd[1593]: time="2025-07-11T00:37:27.116611083Z" level=info msg="StartContainer for \"0a0f0187ee812e73e37080141e169047007c0b32d57b63589c08803247714ff8\" returns successfully" Jul 11 00:37:27.924414 systemd-networkd[1492]: cali2e914a08c2f: Gained IPv6LL Jul 11 00:37:27.958392 kubelet[2727]: I0711 00:37:27.957995 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vk287" podStartSLOduration=33.957980054 podStartE2EDuration="33.957980054s" podCreationTimestamp="2025-07-11 00:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:37:27.957345662 +0000 UTC m=+40.547880165" watchObservedRunningTime="2025-07-11 00:37:27.957980054 +0000 UTC m=+40.548514557" Jul 11 00:37:28.245498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779310897.mount: Deactivated successfully. 
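The two pod_startup_latency_tracker entries in this section (calico-node-j2gg7 earlier, coredns-668d6bf9bc-vk287 above) are consistent with podStartSLOduration being the end-to-end startup time minus the image-pull window; coredns, which pulled nothing, reports identical SLO and E2E durations. A back-of-envelope check using the calico-node timestamps as printed, with the wall-clock portions truncated to microseconds (illustrative arithmetic, not kubelet code):

# Reproduce the calico-node-j2gg7 figures from earlier in this section:
# podStartE2EDuration=17.901496514s, podStartSLOduration=0.918918626s,
# firstStartedPulling=00:37:05.760084385, lastFinishedPulling=00:37:22.742662263.
from datetime import datetime

def t(s):
    return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f")   # keep microseconds

first_pull = t("2025-07-11 00:37:05.760084385")
last_pull  = t("2025-07-11 00:37:22.742662263")
e2e        = 17.901496514
pull       = (last_pull - first_pull).total_seconds()
print(f"pull window ~ {pull:.6f}s, E2E - pull ~ {e2e - pull:.6f}s (log says 0.918918626s)")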
Jul 11 00:37:28.344348 containerd[1593]: time="2025-07-11T00:37:28.344296623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:28.345093 containerd[1593]: time="2025-07-11T00:37:28.345059968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 11 00:37:28.346148 containerd[1593]: time="2025-07-11T00:37:28.346105373Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:28.348024 containerd[1593]: time="2025-07-11T00:37:28.347983964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:28.348574 containerd[1593]: time="2025-07-11T00:37:28.348536092Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.018725558s" Jul 11 00:37:28.348574 containerd[1593]: time="2025-07-11T00:37:28.348570767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 11 00:37:28.349686 containerd[1593]: time="2025-07-11T00:37:28.349510984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:37:28.350743 containerd[1593]: time="2025-07-11T00:37:28.350710429Z" level=info msg="CreateContainer within sandbox \"4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 00:37:28.359971 containerd[1593]: time="2025-07-11T00:37:28.359938806Z" level=info msg="Container 2e5a178749ab17177d06d5a976f4e3e4d45eecd1c424589ed5b653fe5c939024: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:37:28.369314 containerd[1593]: time="2025-07-11T00:37:28.369272682Z" level=info msg="CreateContainer within sandbox \"4bf473a76fd1c1230681987b1c2192a281b90abc6d34646d159cbc5b0c589e33\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2e5a178749ab17177d06d5a976f4e3e4d45eecd1c424589ed5b653fe5c939024\"" Jul 11 00:37:28.369786 containerd[1593]: time="2025-07-11T00:37:28.369701157Z" level=info msg="StartContainer for \"2e5a178749ab17177d06d5a976f4e3e4d45eecd1c424589ed5b653fe5c939024\"" Jul 11 00:37:28.370683 containerd[1593]: time="2025-07-11T00:37:28.370648989Z" level=info msg="connecting to shim 2e5a178749ab17177d06d5a976f4e3e4d45eecd1c424589ed5b653fe5c939024" address="unix:///run/containerd/s/5d8b210b85679868be1953f657b3f8de3d982c6c623ef23be0c0d24943423b13" protocol=ttrpc version=3 Jul 11 00:37:28.392383 systemd[1]: Started cri-containerd-2e5a178749ab17177d06d5a976f4e3e4d45eecd1c424589ed5b653fe5c939024.scope - libcontainer container 2e5a178749ab17177d06d5a976f4e3e4d45eecd1c424589ed5b653fe5c939024. 
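The whisker-backend pull above reports bytes read=33083477 over 2.018725558s, i.e. roughly 16 MB/s from the registry. The arithmetic, using only the figures taken from the log lines:

# Quick throughput estimate for the whisker-backend pull logged above.
bytes_read = 33_083_477          # "bytes read=33083477"
seconds    = 2.018725558         # "in 2.018725558s"
print(f"{bytes_read / seconds / 1e6:.1f} MB/s")   # ~16.4 MB/s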
Jul 11 00:37:28.439620 containerd[1593]: time="2025-07-11T00:37:28.439587537Z" level=info msg="StartContainer for \"2e5a178749ab17177d06d5a976f4e3e4d45eecd1c424589ed5b653fe5c939024\" returns successfully" Jul 11 00:37:28.494824 containerd[1593]: time="2025-07-11T00:37:28.494790296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zgbdl,Uid:941673f8-2f4b-49ba-ba5f-54183525dcb0,Namespace:kube-system,Attempt:0,}" Jul 11 00:37:28.500653 systemd-networkd[1492]: calie92c31f9f67: Gained IPv6LL Jul 11 00:37:28.559955 kubelet[2727]: I0711 00:37:28.559922 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:37:28.598422 systemd-networkd[1492]: cali0e644d2fb64: Link UP Jul 11 00:37:28.598599 systemd-networkd[1492]: cali0e644d2fb64: Gained carrier Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.520 [INFO][4444] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.529 [INFO][4444] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--zgbdl-eth0 coredns-668d6bf9bc- kube-system 941673f8-2f4b-49ba-ba5f-54183525dcb0 819 0 2025-07-11 00:36:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-zgbdl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0e644d2fb64 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgbdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zgbdl-" Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.529 [INFO][4444] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgbdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zgbdl-eth0" Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.556 [INFO][4459] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" HandleID="k8s-pod-network.50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" Workload="localhost-k8s-coredns--668d6bf9bc--zgbdl-eth0" Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.556 [INFO][4459] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" HandleID="k8s-pod-network.50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" Workload="localhost-k8s-coredns--668d6bf9bc--zgbdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139840), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-zgbdl", "timestamp":"2025-07-11 00:37:28.556611026 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.557 [INFO][4459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.557 [INFO][4459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.557 [INFO][4459] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.564 [INFO][4459] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" host="localhost" Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.568 [INFO][4459] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.572 [INFO][4459] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.573 [INFO][4459] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.575 [INFO][4459] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.575 [INFO][4459] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" host="localhost" Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.578 [INFO][4459] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1 Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.582 [INFO][4459] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" host="localhost" Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.589 [INFO][4459] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" host="localhost" Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.589 [INFO][4459] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" host="localhost" Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.589 [INFO][4459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
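With the assignment above, this host has now handed out 192.168.88.129 through 192.168.88.132, all drawn from its affine block 192.168.88.128/26. A quick containment check over the four claimed /32s, as they appear in the IPAM lines of this section:

# Verify that every address claimed in this section falls inside the host's
# affine block 192.168.88.128/26.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
assigned = ["192.168.88.129/32", "192.168.88.130/32", "192.168.88.131/32", "192.168.88.132/32"]
for cidr in assigned:
    print(cidr, "in", block, "->", ipaddress.ip_network(cidr).subnet_of(block))   # True for all four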
Jul 11 00:37:28.618117 containerd[1593]: 2025-07-11 00:37:28.589 [INFO][4459] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" HandleID="k8s-pod-network.50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" Workload="localhost-k8s-coredns--668d6bf9bc--zgbdl-eth0" Jul 11 00:37:28.619055 containerd[1593]: 2025-07-11 00:37:28.593 [INFO][4444] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgbdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zgbdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--zgbdl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"941673f8-2f4b-49ba-ba5f-54183525dcb0", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-zgbdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e644d2fb64", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:28.619055 containerd[1593]: 2025-07-11 00:37:28.594 [INFO][4444] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgbdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zgbdl-eth0" Jul 11 00:37:28.619055 containerd[1593]: 2025-07-11 00:37:28.594 [INFO][4444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e644d2fb64 ContainerID="50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgbdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zgbdl-eth0" Jul 11 00:37:28.619055 containerd[1593]: 2025-07-11 00:37:28.598 [INFO][4444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgbdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zgbdl-eth0" Jul 11 00:37:28.619055 
containerd[1593]: 2025-07-11 00:37:28.599 [INFO][4444] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgbdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zgbdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--zgbdl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"941673f8-2f4b-49ba-ba5f-54183525dcb0", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 36, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1", Pod:"coredns-668d6bf9bc-zgbdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e644d2fb64", MAC:"72:54:60:6f:4f:39", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:28.619055 containerd[1593]: 2025-07-11 00:37:28.607 [INFO][4444] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgbdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zgbdl-eth0" Jul 11 00:37:28.674207 containerd[1593]: time="2025-07-11T00:37:28.674143621Z" level=info msg="connecting to shim 50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1" address="unix:///run/containerd/s/38fc8ebfb20a2a15cfd434640179e4b715fd17ab73b229bab854e0da90378d41" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:37:28.725376 systemd[1]: Started cri-containerd-50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1.scope - libcontainer container 50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1. 
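Earlier in this section systemd-networkd reported calie92c31f9f67 gaining an IPv6 link-local address ("Gained IPv6LL"), and cali0e644d2fb64 does the same shortly after. For reference, the classic EUI-64 derivation of a link-local address from a 48-bit MAC is sketched below, using the container-side MAC just recorded for the coredns endpoint (72:54:60:6f:4f:39) purely as sample input; systemd-networkd may generate the host-side address differently (for example via a stable-privacy token), so the computed value is illustrative, not a claim about the actual interface.

package main

import (
	"fmt"
	"net"
)

// eui64LinkLocal builds the classic EUI-64 IPv6 link-local address from a
// 48-bit MAC: flip the universal/local bit of the first octet and splice
// ff:fe into the middle of the MAC.
func eui64LinkLocal(mac net.HardwareAddr) net.IP {
	return net.IP{
		0xfe, 0x80, 0, 0, 0, 0, 0, 0,
		mac[0] ^ 0x02, mac[1], mac[2], 0xff, 0xfe, mac[3], mac[4], mac[5],
	}
}

func main() {
	// Sample input only: the container-side MAC recorded for the coredns
	// endpoint above. The host-side cali interface may derive its
	// link-local address by a different scheme.
	mac, err := net.ParseMAC("72:54:60:6f:4f:39")
	if err != nil {
		panic(err)
	}
	fmt.Println(eui64LinkLocal(mac)) // fe80::7054:60ff:fe6f:4f39
}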
Jul 11 00:37:28.737084 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:37:28.875363 containerd[1593]: time="2025-07-11T00:37:28.875325427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zgbdl,Uid:941673f8-2f4b-49ba-ba5f-54183525dcb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1\"" Jul 11 00:37:28.878222 containerd[1593]: time="2025-07-11T00:37:28.878169774Z" level=info msg="CreateContainer within sandbox \"50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:37:28.896798 containerd[1593]: time="2025-07-11T00:37:28.896764207Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cd02b748ff8506573139707930b0734dacaf6f59ac204f90f8dd1c67bba91ac\" id:\"abb693aa2da6dfdeb5cf381e5329e0321d84fcdb9981826a5157706f8229b40e\" pid:4509 exit_status:1 exited_at:{seconds:1752194248 nanos:896445377}" Jul 11 00:37:28.921890 containerd[1593]: time="2025-07-11T00:37:28.921846667Z" level=info msg="Container a674ec0ac084265767f6b6f87e924767559d9926694caabca7d1edabec6edf67: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:37:28.931258 kubelet[2727]: I0711 00:37:28.929833 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6d494686d9-qwxdt" podStartSLOduration=2.326503406 podStartE2EDuration="5.929816407s" podCreationTimestamp="2025-07-11 00:37:23 +0000 UTC" firstStartedPulling="2025-07-11 00:37:24.746049144 +0000 UTC m=+37.336583647" lastFinishedPulling="2025-07-11 00:37:28.349362145 +0000 UTC m=+40.939896648" observedRunningTime="2025-07-11 00:37:28.929103478 +0000 UTC m=+41.519637981" watchObservedRunningTime="2025-07-11 00:37:28.929816407 +0000 UTC m=+41.520350910" Jul 11 00:37:28.931426 containerd[1593]: time="2025-07-11T00:37:28.929971610Z" level=info msg="CreateContainer within sandbox \"50db061acbc21ab4c622b6adf7c9bd8555ec534780d84fcc9686b45f55312fd1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a674ec0ac084265767f6b6f87e924767559d9926694caabca7d1edabec6edf67\"" Jul 11 00:37:28.933167 containerd[1593]: time="2025-07-11T00:37:28.931984603Z" level=info msg="StartContainer for \"a674ec0ac084265767f6b6f87e924767559d9926694caabca7d1edabec6edf67\"" Jul 11 00:37:28.933218 containerd[1593]: time="2025-07-11T00:37:28.933203234Z" level=info msg="connecting to shim a674ec0ac084265767f6b6f87e924767559d9926694caabca7d1edabec6edf67" address="unix:///run/containerd/s/38fc8ebfb20a2a15cfd434640179e4b715fd17ab73b229bab854e0da90378d41" protocol=ttrpc version=3 Jul 11 00:37:28.967376 systemd[1]: Started cri-containerd-a674ec0ac084265767f6b6f87e924767559d9926694caabca7d1edabec6edf67.scope - libcontainer container a674ec0ac084265767f6b6f87e924767559d9926694caabca7d1edabec6edf67. 
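The TaskExit event above carries its timestamp as raw Unix seconds and nanoseconds (exited_at:{seconds:1752194248 nanos:896445377}). Converting it back to wall-clock time lines up with the journal timestamp on the same line:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the TaskExit event above.
	exitedAt := time.Unix(1752194248, 896445377).UTC()
	fmt.Println(exitedAt) // 2025-07-11 00:37:28.896445377 +0000 UTC
}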
Jul 11 00:37:28.996884 containerd[1593]: time="2025-07-11T00:37:28.996834416Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cd02b748ff8506573139707930b0734dacaf6f59ac204f90f8dd1c67bba91ac\" id:\"3382430e97e4a307abbca9cf7a75498544f955026a83a508200eb072f6e9ca63\" pid:4584 exit_status:1 exited_at:{seconds:1752194248 nanos:996546716}" Jul 11 00:37:29.001523 containerd[1593]: time="2025-07-11T00:37:29.001440304Z" level=info msg="StartContainer for \"a674ec0ac084265767f6b6f87e924767559d9926694caabca7d1edabec6edf67\" returns successfully" Jul 11 00:37:29.494879 containerd[1593]: time="2025-07-11T00:37:29.494823409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgrvh,Uid:3ff74661-174d-4a70-b02a-c30fd2606ef6,Namespace:calico-system,Attempt:0,}" Jul 11 00:37:29.495278 containerd[1593]: time="2025-07-11T00:37:29.494921293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5687cb8cd4-lfqrj,Uid:736398d4-2e16-40ab-bc23-72a7bfe75126,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:37:29.582091 systemd-networkd[1492]: cali435b8418f14: Link UP Jul 11 00:37:29.582762 systemd-networkd[1492]: cali435b8418f14: Gained carrier Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.518 [INFO][4635] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.527 [INFO][4635] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lgrvh-eth0 csi-node-driver- calico-system 3ff74661-174d-4a70-b02a-c30fd2606ef6 716 0 2025-07-11 00:37:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lgrvh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali435b8418f14 [] [] }} ContainerID="78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" Namespace="calico-system" Pod="csi-node-driver-lgrvh" WorkloadEndpoint="localhost-k8s-csi--node--driver--lgrvh-" Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.527 [INFO][4635] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" Namespace="calico-system" Pod="csi-node-driver-lgrvh" WorkloadEndpoint="localhost-k8s-csi--node--driver--lgrvh-eth0" Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.551 [INFO][4663] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" HandleID="k8s-pod-network.78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" Workload="localhost-k8s-csi--node--driver--lgrvh-eth0" Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.552 [INFO][4663] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" HandleID="k8s-pod-network.78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" Workload="localhost-k8s-csi--node--driver--lgrvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lgrvh", 
"timestamp":"2025-07-11 00:37:29.551923601 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.552 [INFO][4663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.552 [INFO][4663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.552 [INFO][4663] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.557 [INFO][4663] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" host="localhost" Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.560 [INFO][4663] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.564 [INFO][4663] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.566 [INFO][4663] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.568 [INFO][4663] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.568 [INFO][4663] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" host="localhost" Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.569 [INFO][4663] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55 Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.572 [INFO][4663] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" host="localhost" Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.577 [INFO][4663] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" host="localhost" Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.577 [INFO][4663] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" host="localhost" Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.577 [INFO][4663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:37:29.593090 containerd[1593]: 2025-07-11 00:37:29.577 [INFO][4663] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" HandleID="k8s-pod-network.78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" Workload="localhost-k8s-csi--node--driver--lgrvh-eth0" Jul 11 00:37:29.593724 containerd[1593]: 2025-07-11 00:37:29.579 [INFO][4635] cni-plugin/k8s.go 418: Populated endpoint ContainerID="78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" Namespace="calico-system" Pod="csi-node-driver-lgrvh" WorkloadEndpoint="localhost-k8s-csi--node--driver--lgrvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lgrvh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3ff74661-174d-4a70-b02a-c30fd2606ef6", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 37, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lgrvh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali435b8418f14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:29.593724 containerd[1593]: 2025-07-11 00:37:29.580 [INFO][4635] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" Namespace="calico-system" Pod="csi-node-driver-lgrvh" WorkloadEndpoint="localhost-k8s-csi--node--driver--lgrvh-eth0" Jul 11 00:37:29.593724 containerd[1593]: 2025-07-11 00:37:29.580 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali435b8418f14 ContainerID="78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" Namespace="calico-system" Pod="csi-node-driver-lgrvh" WorkloadEndpoint="localhost-k8s-csi--node--driver--lgrvh-eth0" Jul 11 00:37:29.593724 containerd[1593]: 2025-07-11 00:37:29.582 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" Namespace="calico-system" Pod="csi-node-driver-lgrvh" WorkloadEndpoint="localhost-k8s-csi--node--driver--lgrvh-eth0" Jul 11 00:37:29.593724 containerd[1593]: 2025-07-11 00:37:29.582 [INFO][4635] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" Namespace="calico-system" Pod="csi-node-driver-lgrvh" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--lgrvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lgrvh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3ff74661-174d-4a70-b02a-c30fd2606ef6", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 37, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55", Pod:"csi-node-driver-lgrvh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali435b8418f14", MAC:"b6:ee:68:92:2d:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:29.593724 containerd[1593]: 2025-07-11 00:37:29.590 [INFO][4635] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" Namespace="calico-system" Pod="csi-node-driver-lgrvh" WorkloadEndpoint="localhost-k8s-csi--node--driver--lgrvh-eth0" Jul 11 00:37:29.621255 containerd[1593]: time="2025-07-11T00:37:29.621144645Z" level=info msg="connecting to shim 78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55" address="unix:///run/containerd/s/ae0b248159d0643eb270c711846a5849dcd99d91ed27e811760cbc345283966f" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:37:29.649373 systemd[1]: Started cri-containerd-78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55.scope - libcontainer container 78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55. 
Jul 11 00:37:29.663026 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:37:29.680804 containerd[1593]: time="2025-07-11T00:37:29.680769532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgrvh,Uid:3ff74661-174d-4a70-b02a-c30fd2606ef6,Namespace:calico-system,Attempt:0,} returns sandbox id \"78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55\"" Jul 11 00:37:29.694037 systemd-networkd[1492]: calif5eafbccd09: Link UP Jul 11 00:37:29.695163 systemd-networkd[1492]: calif5eafbccd09: Gained carrier Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.527 [INFO][4646] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.537 [INFO][4646] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5687cb8cd4--lfqrj-eth0 calico-apiserver-5687cb8cd4- calico-apiserver 736398d4-2e16-40ab-bc23-72a7bfe75126 812 0 2025-07-11 00:37:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5687cb8cd4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5687cb8cd4-lfqrj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif5eafbccd09 [] [] }} ContainerID="1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-lfqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--lfqrj-" Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.537 [INFO][4646] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-lfqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--lfqrj-eth0" Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.561 [INFO][4668] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" HandleID="k8s-pod-network.1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" Workload="localhost-k8s-calico--apiserver--5687cb8cd4--lfqrj-eth0" Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.561 [INFO][4668] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" HandleID="k8s-pod-network.1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" Workload="localhost-k8s-calico--apiserver--5687cb8cd4--lfqrj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f4e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5687cb8cd4-lfqrj", "timestamp":"2025-07-11 00:37:29.561084519 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.561 [INFO][4668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.577 [INFO][4668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.577 [INFO][4668] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.657 [INFO][4668] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" host="localhost" Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.663 [INFO][4668] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.667 [INFO][4668] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.669 [INFO][4668] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.672 [INFO][4668] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.672 [INFO][4668] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" host="localhost" Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.673 [INFO][4668] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.682 [INFO][4668] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" host="localhost" Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.688 [INFO][4668] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" host="localhost" Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.688 [INFO][4668] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" host="localhost" Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.689 [INFO][4668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
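Each RunPodSandbox in this section walks the same IPAM path: take the host-wide lock, confirm affinity for 192.168.88.128/26, claim an address, release the lock. That is why the claims come out in order as .132, .133, .134 and so on. The toy allocator below is only a reading aid for that sequence, not Calico's real allocator, which persists per-block allocation state in its datastore.

package main

import "fmt"

// A 64-slot bitmap standing in for one affine /26 block.
type block struct {
	base  [4]byte
	inUse [64]bool
}

// claim hands out the lowest free ordinal in the block.
func (b *block) claim() (string, bool) {
	for ord := range b.inUse {
		if !b.inUse[ord] {
			b.inUse[ord] = true
			return fmt.Sprintf("%d.%d.%d.%d", b.base[0], b.base[1], b.base[2], int(b.base[3])+ord), true
		}
	}
	return "", false
}

func main() {
	b := &block{base: [4]byte{192, 168, 88, 128}}
	// Assume .128-.131 were already in use before this part of the log.
	for i := 0; i < 4; i++ {
		b.claim()
	}
	for i := 0; i < 5; i++ {
		ip, _ := b.claim()
		fmt.Println(ip) // 192.168.88.132 ... 192.168.88.136
	}
}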
Jul 11 00:37:29.708116 containerd[1593]: 2025-07-11 00:37:29.689 [INFO][4668] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" HandleID="k8s-pod-network.1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" Workload="localhost-k8s-calico--apiserver--5687cb8cd4--lfqrj-eth0" Jul 11 00:37:29.708656 containerd[1593]: 2025-07-11 00:37:29.692 [INFO][4646] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-lfqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--lfqrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5687cb8cd4--lfqrj-eth0", GenerateName:"calico-apiserver-5687cb8cd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"736398d4-2e16-40ab-bc23-72a7bfe75126", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 37, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5687cb8cd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5687cb8cd4-lfqrj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5eafbccd09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:29.708656 containerd[1593]: 2025-07-11 00:37:29.692 [INFO][4646] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-lfqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--lfqrj-eth0" Jul 11 00:37:29.708656 containerd[1593]: 2025-07-11 00:37:29.692 [INFO][4646] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5eafbccd09 ContainerID="1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-lfqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--lfqrj-eth0" Jul 11 00:37:29.708656 containerd[1593]: 2025-07-11 00:37:29.694 [INFO][4646] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-lfqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--lfqrj-eth0" Jul 11 00:37:29.708656 containerd[1593]: 2025-07-11 00:37:29.694 [INFO][4646] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-lfqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--lfqrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5687cb8cd4--lfqrj-eth0", GenerateName:"calico-apiserver-5687cb8cd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"736398d4-2e16-40ab-bc23-72a7bfe75126", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 37, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5687cb8cd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a", Pod:"calico-apiserver-5687cb8cd4-lfqrj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5eafbccd09", MAC:"a2:3c:17:08:06:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:29.708656 containerd[1593]: 2025-07-11 00:37:29.703 [INFO][4646] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" Namespace="calico-apiserver" Pod="calico-apiserver-5687cb8cd4-lfqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5687cb8cd4--lfqrj-eth0" Jul 11 00:37:29.732350 containerd[1593]: time="2025-07-11T00:37:29.732309354Z" level=info msg="connecting to shim 1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a" address="unix:///run/containerd/s/b40c69fee753acde115bc1648a35dc250a694a7cf05924f6b33c505e0c5234d4" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:37:29.762389 systemd[1]: Started cri-containerd-1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a.scope - libcontainer container 1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a. 
Jul 11 00:37:29.781695 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:37:29.819906 containerd[1593]: time="2025-07-11T00:37:29.819807621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5687cb8cd4-lfqrj,Uid:736398d4-2e16-40ab-bc23-72a7bfe75126,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a\"" Jul 11 00:37:30.029300 kubelet[2727]: I0711 00:37:30.029213 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zgbdl" podStartSLOduration=36.029197612 podStartE2EDuration="36.029197612s" podCreationTimestamp="2025-07-11 00:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:37:30.028005152 +0000 UTC m=+42.618539655" watchObservedRunningTime="2025-07-11 00:37:30.029197612 +0000 UTC m=+42.619732115" Jul 11 00:37:30.229400 systemd-networkd[1492]: cali0e644d2fb64: Gained IPv6LL Jul 11 00:37:30.495139 containerd[1593]: time="2025-07-11T00:37:30.494993895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-858bcfbfbd-kkd59,Uid:79fe3b9c-5a0f-474d-b227-f6f66be0d745,Namespace:calico-system,Attempt:0,}" Jul 11 00:37:30.495587 containerd[1593]: time="2025-07-11T00:37:30.495437017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tpjr4,Uid:82c1d25a-cd9d-473e-b8bb-5e0d442fa97e,Namespace:calico-system,Attempt:0,}" Jul 11 00:37:30.590823 containerd[1593]: time="2025-07-11T00:37:30.590774888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:30.591521 containerd[1593]: time="2025-07-11T00:37:30.591493959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 11 00:37:30.597770 containerd[1593]: time="2025-07-11T00:37:30.597726353Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:30.602318 containerd[1593]: time="2025-07-11T00:37:30.602287114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:30.604049 containerd[1593]: time="2025-07-11T00:37:30.604003409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.254458552s" Jul 11 00:37:30.604049 containerd[1593]: time="2025-07-11T00:37:30.604049225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 00:37:30.606940 containerd[1593]: time="2025-07-11T00:37:30.606913529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 00:37:30.610545 containerd[1593]: time="2025-07-11T00:37:30.608258596Z" level=info msg="CreateContainer within 
sandbox \"6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:37:30.624805 containerd[1593]: time="2025-07-11T00:37:30.624763775Z" level=info msg="Container 99c5be10813db111ae53e144fa1d834e81d802026dd7ea834b92b4bed9cf6b23: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:37:30.630266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2663315513.mount: Deactivated successfully. Jul 11 00:37:30.649426 containerd[1593]: time="2025-07-11T00:37:30.649353968Z" level=info msg="CreateContainer within sandbox \"6f3226a05814959714c2aa13883f51424c7c8ad1d8e3024aa0d6c3224a6949af\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"99c5be10813db111ae53e144fa1d834e81d802026dd7ea834b92b4bed9cf6b23\"" Jul 11 00:37:30.650090 containerd[1593]: time="2025-07-11T00:37:30.650048893Z" level=info msg="StartContainer for \"99c5be10813db111ae53e144fa1d834e81d802026dd7ea834b92b4bed9cf6b23\"" Jul 11 00:37:30.651064 containerd[1593]: time="2025-07-11T00:37:30.651010531Z" level=info msg="connecting to shim 99c5be10813db111ae53e144fa1d834e81d802026dd7ea834b92b4bed9cf6b23" address="unix:///run/containerd/s/01c4f0a905b70bfde182d6fa2abf42ddf1d857a3cb926a4fec001d557af470af" protocol=ttrpc version=3 Jul 11 00:37:30.674575 systemd[1]: Started cri-containerd-99c5be10813db111ae53e144fa1d834e81d802026dd7ea834b92b4bed9cf6b23.scope - libcontainer container 99c5be10813db111ae53e144fa1d834e81d802026dd7ea834b92b4bed9cf6b23. Jul 11 00:37:30.683042 systemd-networkd[1492]: cali354d68acfd1: Link UP Jul 11 00:37:30.684300 systemd-networkd[1492]: cali354d68acfd1: Gained carrier Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.582 [INFO][4815] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.596 [INFO][4815] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--858bcfbfbd--kkd59-eth0 calico-kube-controllers-858bcfbfbd- calico-system 79fe3b9c-5a0f-474d-b227-f6f66be0d745 808 0 2025-07-11 00:37:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:858bcfbfbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-858bcfbfbd-kkd59 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali354d68acfd1 [] [] }} ContainerID="ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" Namespace="calico-system" Pod="calico-kube-controllers-858bcfbfbd-kkd59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--858bcfbfbd--kkd59-" Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.597 [INFO][4815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" Namespace="calico-system" Pod="calico-kube-controllers-858bcfbfbd-kkd59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--858bcfbfbd--kkd59-eth0" Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.644 [INFO][4851] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" 
HandleID="k8s-pod-network.ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" Workload="localhost-k8s-calico--kube--controllers--858bcfbfbd--kkd59-eth0" Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.644 [INFO][4851] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" HandleID="k8s-pod-network.ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" Workload="localhost-k8s-calico--kube--controllers--858bcfbfbd--kkd59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6ef0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-858bcfbfbd-kkd59", "timestamp":"2025-07-11 00:37:30.644482853 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.647 [INFO][4851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.647 [INFO][4851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.647 [INFO][4851] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.653 [INFO][4851] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" host="localhost" Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.657 [INFO][4851] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.660 [INFO][4851] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.662 [INFO][4851] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.664 [INFO][4851] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.664 [INFO][4851] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" host="localhost" Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.666 [INFO][4851] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.669 [INFO][4851] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" host="localhost" Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.676 [INFO][4851] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" host="localhost" Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.676 [INFO][4851] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] 
handle="k8s-pod-network.ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" host="localhost" Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.676 [INFO][4851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:37:30.700838 containerd[1593]: 2025-07-11 00:37:30.676 [INFO][4851] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" HandleID="k8s-pod-network.ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" Workload="localhost-k8s-calico--kube--controllers--858bcfbfbd--kkd59-eth0" Jul 11 00:37:30.701427 containerd[1593]: 2025-07-11 00:37:30.679 [INFO][4815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" Namespace="calico-system" Pod="calico-kube-controllers-858bcfbfbd-kkd59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--858bcfbfbd--kkd59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--858bcfbfbd--kkd59-eth0", GenerateName:"calico-kube-controllers-858bcfbfbd-", Namespace:"calico-system", SelfLink:"", UID:"79fe3b9c-5a0f-474d-b227-f6f66be0d745", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 37, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"858bcfbfbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-858bcfbfbd-kkd59", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali354d68acfd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:30.701427 containerd[1593]: 2025-07-11 00:37:30.679 [INFO][4815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" Namespace="calico-system" Pod="calico-kube-controllers-858bcfbfbd-kkd59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--858bcfbfbd--kkd59-eth0" Jul 11 00:37:30.701427 containerd[1593]: 2025-07-11 00:37:30.679 [INFO][4815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali354d68acfd1 ContainerID="ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" Namespace="calico-system" Pod="calico-kube-controllers-858bcfbfbd-kkd59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--858bcfbfbd--kkd59-eth0" Jul 11 00:37:30.701427 containerd[1593]: 2025-07-11 00:37:30.684 [INFO][4815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" Namespace="calico-system" 
Pod="calico-kube-controllers-858bcfbfbd-kkd59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--858bcfbfbd--kkd59-eth0" Jul 11 00:37:30.701427 containerd[1593]: 2025-07-11 00:37:30.685 [INFO][4815] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" Namespace="calico-system" Pod="calico-kube-controllers-858bcfbfbd-kkd59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--858bcfbfbd--kkd59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--858bcfbfbd--kkd59-eth0", GenerateName:"calico-kube-controllers-858bcfbfbd-", Namespace:"calico-system", SelfLink:"", UID:"79fe3b9c-5a0f-474d-b227-f6f66be0d745", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 37, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"858bcfbfbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e", Pod:"calico-kube-controllers-858bcfbfbd-kkd59", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali354d68acfd1", MAC:"4a:42:1c:38:b6:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:30.701427 containerd[1593]: 2025-07-11 00:37:30.698 [INFO][4815] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" Namespace="calico-system" Pod="calico-kube-controllers-858bcfbfbd-kkd59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--858bcfbfbd--kkd59-eth0" Jul 11 00:37:30.899261 containerd[1593]: time="2025-07-11T00:37:30.897623747Z" level=info msg="StartContainer for \"99c5be10813db111ae53e144fa1d834e81d802026dd7ea834b92b4bed9cf6b23\" returns successfully" Jul 11 00:37:30.905355 systemd-networkd[1492]: cali904a9294dd2: Link UP Jul 11 00:37:30.905553 systemd-networkd[1492]: cali904a9294dd2: Gained carrier Jul 11 00:37:30.911139 containerd[1593]: time="2025-07-11T00:37:30.911093792Z" level=info msg="connecting to shim ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e" address="unix:///run/containerd/s/d2f31c1b3066088b88cd4ffc4b9adf5014fc35d0c30e3c109f40b5282dc6205f" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.601 [INFO][4821] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.614 [INFO][4821] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--tpjr4-eth0 
goldmane-768f4c5c69- calico-system 82c1d25a-cd9d-473e-b8bb-5e0d442fa97e 820 0 2025-07-11 00:37:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-tpjr4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali904a9294dd2 [] [] }} ContainerID="7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" Namespace="calico-system" Pod="goldmane-768f4c5c69-tpjr4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tpjr4-" Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.614 [INFO][4821] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" Namespace="calico-system" Pod="goldmane-768f4c5c69-tpjr4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tpjr4-eth0" Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.656 [INFO][4858] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" HandleID="k8s-pod-network.7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" Workload="localhost-k8s-goldmane--768f4c5c69--tpjr4-eth0" Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.656 [INFO][4858] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" HandleID="k8s-pod-network.7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" Workload="localhost-k8s-goldmane--768f4c5c69--tpjr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-tpjr4", "timestamp":"2025-07-11 00:37:30.656350497 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.657 [INFO][4858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.676 [INFO][4858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.676 [INFO][4858] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.754 [INFO][4858] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" host="localhost" Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.758 [INFO][4858] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.762 [INFO][4858] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.764 [INFO][4858] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.766 [INFO][4858] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.766 [INFO][4858] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" host="localhost" Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.767 [INFO][4858] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.815 [INFO][4858] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" host="localhost" Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.894 [INFO][4858] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" host="localhost" Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.894 [INFO][4858] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" host="localhost" Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.894 [INFO][4858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
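By this point five workload addresses (.132 through .136) have been handed out from the block. One way to cross-check them against what the API server reports is sketched below with client-go; the kubeconfig path is illustrative, and the field selector relies on the node being registered as "localhost", as it is throughout this log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; adjust for the actual cluster.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Pods scheduled to this node ("localhost" in the log above).
	pods, err := clientset.CoreV1().Pods("").List(context.Background(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=localhost"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.PodIP)
	}
}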
Jul 11 00:37:30.920563 containerd[1593]: 2025-07-11 00:37:30.894 [INFO][4858] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" HandleID="k8s-pod-network.7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" Workload="localhost-k8s-goldmane--768f4c5c69--tpjr4-eth0" Jul 11 00:37:30.921057 containerd[1593]: 2025-07-11 00:37:30.900 [INFO][4821] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" Namespace="calico-system" Pod="goldmane-768f4c5c69-tpjr4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tpjr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--tpjr4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"82c1d25a-cd9d-473e-b8bb-5e0d442fa97e", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 37, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-tpjr4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali904a9294dd2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:30.921057 containerd[1593]: 2025-07-11 00:37:30.900 [INFO][4821] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" Namespace="calico-system" Pod="goldmane-768f4c5c69-tpjr4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tpjr4-eth0" Jul 11 00:37:30.921057 containerd[1593]: 2025-07-11 00:37:30.900 [INFO][4821] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali904a9294dd2 ContainerID="7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" Namespace="calico-system" Pod="goldmane-768f4c5c69-tpjr4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tpjr4-eth0" Jul 11 00:37:30.921057 containerd[1593]: 2025-07-11 00:37:30.905 [INFO][4821] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" Namespace="calico-system" Pod="goldmane-768f4c5c69-tpjr4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tpjr4-eth0" Jul 11 00:37:30.921057 containerd[1593]: 2025-07-11 00:37:30.906 [INFO][4821] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" Namespace="calico-system" Pod="goldmane-768f4c5c69-tpjr4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tpjr4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--tpjr4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"82c1d25a-cd9d-473e-b8bb-5e0d442fa97e", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 37, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea", Pod:"goldmane-768f4c5c69-tpjr4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali904a9294dd2", MAC:"0e:13:34:c5:6f:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:37:30.921057 containerd[1593]: 2025-07-11 00:37:30.917 [INFO][4821] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" Namespace="calico-system" Pod="goldmane-768f4c5c69-tpjr4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tpjr4-eth0" Jul 11 00:37:30.940323 kubelet[2727]: I0711 00:37:30.940229 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5687cb8cd4-55jvn" podStartSLOduration=25.085477732 podStartE2EDuration="28.940213016s" podCreationTimestamp="2025-07-11 00:37:02 +0000 UTC" firstStartedPulling="2025-07-11 00:37:26.751836543 +0000 UTC m=+39.342371046" lastFinishedPulling="2025-07-11 00:37:30.606571827 +0000 UTC m=+43.197106330" observedRunningTime="2025-07-11 00:37:30.940029963 +0000 UTC m=+43.530564466" watchObservedRunningTime="2025-07-11 00:37:30.940213016 +0000 UTC m=+43.530747519" Jul 11 00:37:30.941534 systemd[1]: Started cri-containerd-ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e.scope - libcontainer container ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e. 
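The pod_startup_latency_tracker entry just above for calico-apiserver-5687cb8cd4-55jvn reports podStartE2EDuration=28.940213016s and podStartSLOduration=25.085477732. Those figures are consistent with E2E = watchObservedRunningTime - podCreationTimestamp and SLO = E2E minus the image-pull window (lastFinishedPulling - firstStartedPulling). The sketch below, assuming that relationship, reproduces both numbers from the logged timestamps (truncated to microseconds, so the results are approximate):

from datetime import datetime, timezone

def ts(s: str) -> datetime:
    # The log carries nanoseconds; datetime parses at most microseconds,
    # so the values below are truncated to 6 fractional digits.
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created    = datetime(2025, 7, 11, 0, 37, 2, tzinfo=timezone.utc)   # podCreationTimestamp
observed   = ts("2025-07-11 00:37:30.940213")                       # watchObservedRunningTime
pull_start = ts("2025-07-11 00:37:26.751836")                       # firstStartedPulling
pull_end   = ts("2025-07-11 00:37:30.606571")                       # lastFinishedPulling

e2e = (observed - created).total_seconds()
slo = e2e - (pull_end - pull_start).total_seconds()
print(f"E2E ~ {e2e:.6f}s, SLO ~ {slo:.6f}s")   # ~28.940213s and ~25.085478s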
Jul 11 00:37:30.959596 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:37:30.960413 containerd[1593]: time="2025-07-11T00:37:30.959517205Z" level=info msg="connecting to shim 7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea" address="unix:///run/containerd/s/4f8fefa79ed992d5d5e3e0aeb77a81c3e3eee1e06d18ae57f30f95f968906a5b" namespace=k8s.io protocol=ttrpc version=3 Jul 11 00:37:30.995284 containerd[1593]: time="2025-07-11T00:37:30.995224902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-858bcfbfbd-kkd59,Uid:79fe3b9c-5a0f-474d-b227-f6f66be0d745,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e\"" Jul 11 00:37:31.014382 systemd[1]: Started cri-containerd-7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea.scope - libcontainer container 7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea. Jul 11 00:37:31.026593 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:37:31.055358 containerd[1593]: time="2025-07-11T00:37:31.055314439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tpjr4,Uid:82c1d25a-cd9d-473e-b8bb-5e0d442fa97e,Namespace:calico-system,Attempt:0,} returns sandbox id \"7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea\"" Jul 11 00:37:31.094313 systemd[1]: Started sshd@8-10.0.0.141:22-10.0.0.1:38988.service - OpenSSH per-connection server daemon (10.0.0.1:38988). Jul 11 00:37:31.125446 systemd-networkd[1492]: cali435b8418f14: Gained IPv6LL Jul 11 00:37:31.145423 sshd[5032]: Accepted publickey for core from 10.0.0.1 port 38988 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:37:31.146882 sshd-session[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:37:31.151260 systemd-logind[1578]: New session 9 of user core. Jul 11 00:37:31.156373 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:37:31.293501 sshd[5036]: Connection closed by 10.0.0.1 port 38988 Jul 11 00:37:31.294269 sshd-session[5032]: pam_unix(sshd:session): session closed for user core Jul 11 00:37:31.298688 systemd[1]: sshd@8-10.0.0.141:22-10.0.0.1:38988.service: Deactivated successfully. Jul 11 00:37:31.300747 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:37:31.302413 systemd-logind[1578]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:37:31.303976 systemd-logind[1578]: Removed session 9. 
Jul 11 00:37:31.508375 systemd-networkd[1492]: calif5eafbccd09: Gained IPv6LL Jul 11 00:37:31.892410 systemd-networkd[1492]: cali354d68acfd1: Gained IPv6LL Jul 11 00:37:31.934428 kubelet[2727]: I0711 00:37:31.934391 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:37:31.978765 kubelet[2727]: I0711 00:37:31.977759 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:37:32.021385 systemd-networkd[1492]: cali904a9294dd2: Gained IPv6LL Jul 11 00:37:32.326131 containerd[1593]: time="2025-07-11T00:37:32.326082365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:32.326816 containerd[1593]: time="2025-07-11T00:37:32.326782541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 11 00:37:32.327939 containerd[1593]: time="2025-07-11T00:37:32.327902034Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:32.329779 containerd[1593]: time="2025-07-11T00:37:32.329739557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:32.330258 containerd[1593]: time="2025-07-11T00:37:32.330206444Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.721418884s" Jul 11 00:37:32.330287 containerd[1593]: time="2025-07-11T00:37:32.330263201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 11 00:37:32.331032 containerd[1593]: time="2025-07-11T00:37:32.330987181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:37:32.332388 containerd[1593]: time="2025-07-11T00:37:32.332353719Z" level=info msg="CreateContainer within sandbox \"78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 00:37:32.346396 containerd[1593]: time="2025-07-11T00:37:32.346361949Z" level=info msg="Container 53187f808d3dde4d77b848f6bb4cc1846a5b44b45e0263e1f3bfb4ec9f1ce4ec: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:37:32.361560 containerd[1593]: time="2025-07-11T00:37:32.361516294Z" level=info msg="CreateContainer within sandbox \"78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"53187f808d3dde4d77b848f6bb4cc1846a5b44b45e0263e1f3bfb4ec9f1ce4ec\"" Jul 11 00:37:32.362061 containerd[1593]: time="2025-07-11T00:37:32.362032263Z" level=info msg="StartContainer for \"53187f808d3dde4d77b848f6bb4cc1846a5b44b45e0263e1f3bfb4ec9f1ce4ec\"" Jul 11 00:37:32.363419 containerd[1593]: time="2025-07-11T00:37:32.363368503Z" level=info msg="connecting to shim 53187f808d3dde4d77b848f6bb4cc1846a5b44b45e0263e1f3bfb4ec9f1ce4ec" 
address="unix:///run/containerd/s/ae0b248159d0643eb270c711846a5849dcd99d91ed27e811760cbc345283966f" protocol=ttrpc version=3 Jul 11 00:37:32.414382 systemd[1]: Started cri-containerd-53187f808d3dde4d77b848f6bb4cc1846a5b44b45e0263e1f3bfb4ec9f1ce4ec.scope - libcontainer container 53187f808d3dde4d77b848f6bb4cc1846a5b44b45e0263e1f3bfb4ec9f1ce4ec. Jul 11 00:37:32.659373 containerd[1593]: time="2025-07-11T00:37:32.659046894Z" level=info msg="StartContainer for \"53187f808d3dde4d77b848f6bb4cc1846a5b44b45e0263e1f3bfb4ec9f1ce4ec\" returns successfully" Jul 11 00:37:32.684112 containerd[1593]: time="2025-07-11T00:37:32.683205686Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:32.684269 containerd[1593]: time="2025-07-11T00:37:32.684217958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 11 00:37:32.686128 containerd[1593]: time="2025-07-11T00:37:32.686089745Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 355.075763ms" Jul 11 00:37:32.687766 containerd[1593]: time="2025-07-11T00:37:32.686120362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 00:37:32.689767 containerd[1593]: time="2025-07-11T00:37:32.689741335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 11 00:37:32.691227 containerd[1593]: time="2025-07-11T00:37:32.691203163Z" level=info msg="CreateContainer within sandbox \"1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:37:32.703602 containerd[1593]: time="2025-07-11T00:37:32.702407094Z" level=info msg="Container c076a08d591a386419f30d652b323583c3fc75bc17109e76457ac846fcaacd5b: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:37:32.712728 containerd[1593]: time="2025-07-11T00:37:32.712692640Z" level=info msg="CreateContainer within sandbox \"1047ce5b8c9caaace021963a327099cc4aacd805f95466123d5297183f70212a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c076a08d591a386419f30d652b323583c3fc75bc17109e76457ac846fcaacd5b\"" Jul 11 00:37:32.713541 containerd[1593]: time="2025-07-11T00:37:32.713484627Z" level=info msg="StartContainer for \"c076a08d591a386419f30d652b323583c3fc75bc17109e76457ac846fcaacd5b\"" Jul 11 00:37:32.714396 containerd[1593]: time="2025-07-11T00:37:32.714369320Z" level=info msg="connecting to shim c076a08d591a386419f30d652b323583c3fc75bc17109e76457ac846fcaacd5b" address="unix:///run/containerd/s/b40c69fee753acde115bc1648a35dc250a694a7cf05924f6b33c505e0c5234d4" protocol=ttrpc version=3 Jul 11 00:37:32.733373 systemd[1]: Started cri-containerd-c076a08d591a386419f30d652b323583c3fc75bc17109e76457ac846fcaacd5b.scope - libcontainer container c076a08d591a386419f30d652b323583c3fc75bc17109e76457ac846fcaacd5b. 
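Several entries in this block are containerd "connecting to shim" messages carrying a 64-hex-character container ID and the shim's ttrpc unix socket. A small sketch that extracts those two fields from such a line; the regex simply mirrors the fields as they appear above and is not a general containerd log parser:

import re

SHIM_RE = re.compile(
    r'msg="connecting to shim (?P<cid>[0-9a-f]{64})"\s+address="(?P<addr>unix://[^"]+)"'
)

# One of the entries from this log, joined onto a single line.
line = ('time="2025-07-11T00:37:32.363368503Z" level=info '
        'msg="connecting to shim 53187f808d3dde4d77b848f6bb4cc1846a5b44b45e0263e1f3bfb4ec9f1ce4ec" '
        'address="unix:///run/containerd/s/ae0b248159d0643eb270c711846a5849dcd99d91ed27e811760cbc345283966f" '
        'protocol=ttrpc version=3')

m = SHIM_RE.search(line)
if m:
    print(m.group("cid")[:12], m.group("addr"))   # short container ID + shim socket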
Jul 11 00:37:32.783344 containerd[1593]: time="2025-07-11T00:37:32.783284997Z" level=info msg="StartContainer for \"c076a08d591a386419f30d652b323583c3fc75bc17109e76457ac846fcaacd5b\" returns successfully" Jul 11 00:37:33.089596 systemd-networkd[1492]: vxlan.calico: Link UP Jul 11 00:37:33.089608 systemd-networkd[1492]: vxlan.calico: Gained carrier Jul 11 00:37:33.942946 kubelet[2727]: I0711 00:37:33.942902 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:37:34.837431 systemd-networkd[1492]: vxlan.calico: Gained IPv6LL Jul 11 00:37:36.273969 containerd[1593]: time="2025-07-11T00:37:36.273905118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:36.274836 containerd[1593]: time="2025-07-11T00:37:36.274800170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 11 00:37:36.276162 containerd[1593]: time="2025-07-11T00:37:36.276114308Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:36.278204 containerd[1593]: time="2025-07-11T00:37:36.278171743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:36.278745 containerd[1593]: time="2025-07-11T00:37:36.278712629Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.588944664s" Jul 11 00:37:36.278784 containerd[1593]: time="2025-07-11T00:37:36.278749117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 11 00:37:36.279743 containerd[1593]: time="2025-07-11T00:37:36.279707116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 00:37:36.297745 containerd[1593]: time="2025-07-11T00:37:36.297465622Z" level=info msg="CreateContainer within sandbox \"ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 00:37:36.307484 containerd[1593]: time="2025-07-11T00:37:36.307455175Z" level=info msg="Container 73ee7db4a552e8eaca46bc909e3cf50cfc490b04c485d431a877eb9c8c3c8dc2: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:37:36.316859 systemd[1]: Started sshd@9-10.0.0.141:22-10.0.0.1:36198.service - OpenSSH per-connection server daemon (10.0.0.1:36198). 
Jul 11 00:37:36.321740 containerd[1593]: time="2025-07-11T00:37:36.321713767Z" level=info msg="CreateContainer within sandbox \"ea3c8d593b68f59ce968ddeb301698cded82fe25f9d599b4e20426e271d4dc3e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"73ee7db4a552e8eaca46bc909e3cf50cfc490b04c485d431a877eb9c8c3c8dc2\"" Jul 11 00:37:36.322437 containerd[1593]: time="2025-07-11T00:37:36.322402190Z" level=info msg="StartContainer for \"73ee7db4a552e8eaca46bc909e3cf50cfc490b04c485d431a877eb9c8c3c8dc2\"" Jul 11 00:37:36.323515 containerd[1593]: time="2025-07-11T00:37:36.323483992Z" level=info msg="connecting to shim 73ee7db4a552e8eaca46bc909e3cf50cfc490b04c485d431a877eb9c8c3c8dc2" address="unix:///run/containerd/s/d2f31c1b3066088b88cd4ffc4b9adf5014fc35d0c30e3c109f40b5282dc6205f" protocol=ttrpc version=3 Jul 11 00:37:36.350013 systemd[1]: Started cri-containerd-73ee7db4a552e8eaca46bc909e3cf50cfc490b04c485d431a877eb9c8c3c8dc2.scope - libcontainer container 73ee7db4a552e8eaca46bc909e3cf50cfc490b04c485d431a877eb9c8c3c8dc2. Jul 11 00:37:36.382128 sshd[5296]: Accepted publickey for core from 10.0.0.1 port 36198 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:37:36.384482 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:37:36.391283 systemd-logind[1578]: New session 10 of user core. Jul 11 00:37:36.396466 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 00:37:36.414151 containerd[1593]: time="2025-07-11T00:37:36.414101918Z" level=info msg="StartContainer for \"73ee7db4a552e8eaca46bc909e3cf50cfc490b04c485d431a877eb9c8c3c8dc2\" returns successfully" Jul 11 00:37:36.673256 sshd[5323]: Connection closed by 10.0.0.1 port 36198 Jul 11 00:37:36.673522 sshd-session[5296]: pam_unix(sshd:session): session closed for user core Jul 11 00:37:36.688228 systemd[1]: sshd@9-10.0.0.141:22-10.0.0.1:36198.service: Deactivated successfully. Jul 11 00:37:36.690124 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:37:36.690918 systemd-logind[1578]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:37:36.693442 systemd[1]: Started sshd@10-10.0.0.141:22-10.0.0.1:36206.service - OpenSSH per-connection server daemon (10.0.0.1:36206). Jul 11 00:37:36.694246 systemd-logind[1578]: Removed session 10. Jul 11 00:37:36.736781 sshd[5351]: Accepted publickey for core from 10.0.0.1 port 36206 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:37:36.738123 sshd-session[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:37:36.742508 systemd-logind[1578]: New session 11 of user core. Jul 11 00:37:36.752362 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 11 00:37:36.998224 kubelet[2727]: I0711 00:37:36.997749 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-858bcfbfbd-kkd59" podStartSLOduration=26.714404514 podStartE2EDuration="31.99773292s" podCreationTimestamp="2025-07-11 00:37:05 +0000 UTC" firstStartedPulling="2025-07-11 00:37:30.996219341 +0000 UTC m=+43.586753844" lastFinishedPulling="2025-07-11 00:37:36.279547747 +0000 UTC m=+48.870082250" observedRunningTime="2025-07-11 00:37:36.997064715 +0000 UTC m=+49.587599209" watchObservedRunningTime="2025-07-11 00:37:36.99773292 +0000 UTC m=+49.588267423" Jul 11 00:37:36.998983 kubelet[2727]: I0711 00:37:36.998914 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5687cb8cd4-lfqrj" podStartSLOduration=32.13529958 podStartE2EDuration="34.998906014s" podCreationTimestamp="2025-07-11 00:37:02 +0000 UTC" firstStartedPulling="2025-07-11 00:37:29.824791337 +0000 UTC m=+42.415325840" lastFinishedPulling="2025-07-11 00:37:32.688397771 +0000 UTC m=+45.278932274" observedRunningTime="2025-07-11 00:37:32.950113547 +0000 UTC m=+45.540648040" watchObservedRunningTime="2025-07-11 00:37:36.998906014 +0000 UTC m=+49.589440517" Jul 11 00:37:37.004112 sshd[5353]: Connection closed by 10.0.0.1 port 36206 Jul 11 00:37:37.001726 sshd-session[5351]: pam_unix(sshd:session): session closed for user core Jul 11 00:37:37.013605 systemd[1]: sshd@10-10.0.0.141:22-10.0.0.1:36206.service: Deactivated successfully. Jul 11 00:37:37.017305 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:37:37.020269 systemd-logind[1578]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:37:37.024335 systemd[1]: Started sshd@11-10.0.0.141:22-10.0.0.1:36222.service - OpenSSH per-connection server daemon (10.0.0.1:36222). Jul 11 00:37:37.025652 systemd-logind[1578]: Removed session 11. Jul 11 00:37:37.069577 sshd[5368]: Accepted publickey for core from 10.0.0.1 port 36222 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:37:37.070945 sshd-session[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:37:37.076289 systemd-logind[1578]: New session 12 of user core. Jul 11 00:37:37.083376 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 00:37:37.199383 sshd[5370]: Connection closed by 10.0.0.1 port 36222 Jul 11 00:37:37.199737 sshd-session[5368]: pam_unix(sshd:session): session closed for user core Jul 11 00:37:37.204284 systemd[1]: sshd@11-10.0.0.141:22-10.0.0.1:36222.service: Deactivated successfully. Jul 11 00:37:37.206359 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:37:37.207081 systemd-logind[1578]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:37:37.208302 systemd-logind[1578]: Removed session 12. Jul 11 00:37:37.960138 kubelet[2727]: I0711 00:37:37.960101 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:37:38.358586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1186331323.mount: Deactivated successfully. 
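The tmpmount unit deactivated just above uses systemd's unit-name escaping, in which "/" in a path becomes "-" and a literal "-" becomes "\x2d". A small sketch that undoes that escaping to recover the underlying mount path (roughly what `systemd-escape --unescape --path` would do):

import re

unit = r"var-lib-containerd-tmpmounts-containerd\x2dmount1186331323.mount"

name = unit.removesuffix(".mount")
# "-" separates path components; "\xNN" escapes literal bytes within a component.
parts = [
    re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), part)
    for part in name.split("-")
]
print("/" + "/".join(parts))
# /var/lib/containerd/tmpmounts/containerd-mount1186331323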
Jul 11 00:37:39.227630 containerd[1593]: time="2025-07-11T00:37:39.227584404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:39.228754 containerd[1593]: time="2025-07-11T00:37:39.228678930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 11 00:37:39.230041 containerd[1593]: time="2025-07-11T00:37:39.229997907Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:39.231905 containerd[1593]: time="2025-07-11T00:37:39.231877887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:39.232594 containerd[1593]: time="2025-07-11T00:37:39.232569746Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 2.952832323s" Jul 11 00:37:39.232647 containerd[1593]: time="2025-07-11T00:37:39.232597519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 11 00:37:39.234293 containerd[1593]: time="2025-07-11T00:37:39.234268487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 00:37:39.235264 containerd[1593]: time="2025-07-11T00:37:39.235204855Z" level=info msg="CreateContainer within sandbox \"7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 00:37:39.242870 containerd[1593]: time="2025-07-11T00:37:39.242822360Z" level=info msg="Container 9752f779afd7477ba0eb4b81787c35c7b09b8eddd40ea7c930511287b7ae7bd8: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:37:39.252112 containerd[1593]: time="2025-07-11T00:37:39.252067561Z" level=info msg="CreateContainer within sandbox \"7790e3818c902fb755b8b2f9a707ae5796ea2d4e23faac4c06cab01bb0daffea\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9752f779afd7477ba0eb4b81787c35c7b09b8eddd40ea7c930511287b7ae7bd8\"" Jul 11 00:37:39.252653 containerd[1593]: time="2025-07-11T00:37:39.252621351Z" level=info msg="StartContainer for \"9752f779afd7477ba0eb4b81787c35c7b09b8eddd40ea7c930511287b7ae7bd8\"" Jul 11 00:37:39.253854 containerd[1593]: time="2025-07-11T00:37:39.253819221Z" level=info msg="connecting to shim 9752f779afd7477ba0eb4b81787c35c7b09b8eddd40ea7c930511287b7ae7bd8" address="unix:///run/containerd/s/4f8fefa79ed992d5d5e3e0aeb77a81c3e3eee1e06d18ae57f30f95f968906a5b" protocol=ttrpc version=3 Jul 11 00:37:39.284392 systemd[1]: Started cri-containerd-9752f779afd7477ba0eb4b81787c35c7b09b8eddd40ea7c930511287b7ae7bd8.scope - libcontainer container 9752f779afd7477ba0eb4b81787c35c7b09b8eddd40ea7c930511287b7ae7bd8. 
Jul 11 00:37:39.332120 containerd[1593]: time="2025-07-11T00:37:39.332048617Z" level=info msg="StartContainer for \"9752f779afd7477ba0eb4b81787c35c7b09b8eddd40ea7c930511287b7ae7bd8\" returns successfully" Jul 11 00:37:39.977687 kubelet[2727]: I0711 00:37:39.977423 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-tpjr4" podStartSLOduration=26.800619924 podStartE2EDuration="34.977388927s" podCreationTimestamp="2025-07-11 00:37:05 +0000 UTC" firstStartedPulling="2025-07-11 00:37:31.056704611 +0000 UTC m=+43.647239114" lastFinishedPulling="2025-07-11 00:37:39.233473604 +0000 UTC m=+51.824008117" observedRunningTime="2025-07-11 00:37:39.976544712 +0000 UTC m=+52.567079215" watchObservedRunningTime="2025-07-11 00:37:39.977388927 +0000 UTC m=+52.567923430" Jul 11 00:37:40.662650 containerd[1593]: time="2025-07-11T00:37:40.662605695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9752f779afd7477ba0eb4b81787c35c7b09b8eddd40ea7c930511287b7ae7bd8\" id:\"dab1c25664bf4e38d604ed8e60f8a66e4b270c47acff86cdd7521a825fbd59d3\" pid:5454 exit_status:1 exited_at:{seconds:1752194260 nanos:662246782}" Jul 11 00:37:40.744474 containerd[1593]: time="2025-07-11T00:37:40.744430664Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9752f779afd7477ba0eb4b81787c35c7b09b8eddd40ea7c930511287b7ae7bd8\" id:\"15c4bbff8813441d115dd743dffb0fee807db103b085edd1b5f6bed4aae1439b\" pid:5480 exit_status:1 exited_at:{seconds:1752194260 nanos:744169334}" Jul 11 00:37:41.040442 kubelet[2727]: I0711 00:37:41.040290 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:37:41.041701 containerd[1593]: time="2025-07-11T00:37:41.041649701Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9752f779afd7477ba0eb4b81787c35c7b09b8eddd40ea7c930511287b7ae7bd8\" id:\"4185a9287e988d93aa9242f41441a861421f3c4296d367ffabef31cce4b7a7c3\" pid:5503 exit_status:1 exited_at:{seconds:1752194261 nanos:41375617}" Jul 11 00:37:41.090670 containerd[1593]: time="2025-07-11T00:37:41.090631824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ee7db4a552e8eaca46bc909e3cf50cfc490b04c485d431a877eb9c8c3c8dc2\" id:\"5ba7efd713776c5d7a7765fa0fa9f98f2380aa8c6abb7ba8d3c0b5ea861a3624\" pid:5529 exited_at:{seconds:1752194261 nanos:90428672}" Jul 11 00:37:41.130649 containerd[1593]: time="2025-07-11T00:37:41.130598138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ee7db4a552e8eaca46bc909e3cf50cfc490b04c485d431a877eb9c8c3c8dc2\" id:\"f49874ee82912c95a3e62e23d7aa7bc8b422595c568cddeaa871264a275adb66\" pid:5552 exited_at:{seconds:1752194261 nanos:130391329}" Jul 11 00:37:42.059888 containerd[1593]: time="2025-07-11T00:37:42.059652420Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9752f779afd7477ba0eb4b81787c35c7b09b8eddd40ea7c930511287b7ae7bd8\" id:\"456629ef582d616f0a2100bb54ba3c6609a210f2b66faabf79507c6835dff7ab\" pid:5576 exit_status:1 exited_at:{seconds:1752194262 nanos:58932368}" Jul 11 00:37:42.216463 systemd[1]: Started sshd@12-10.0.0.141:22-10.0.0.1:36228.service - OpenSSH per-connection server daemon (10.0.0.1:36228). 
Jul 11 00:37:42.309391 sshd[5589]: Accepted publickey for core from 10.0.0.1 port 36228 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:37:42.311364 sshd-session[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:37:42.317473 systemd-logind[1578]: New session 13 of user core. Jul 11 00:37:42.326398 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 11 00:37:42.453808 containerd[1593]: time="2025-07-11T00:37:42.453748143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:42.454769 containerd[1593]: time="2025-07-11T00:37:42.454538386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 11 00:37:42.455837 containerd[1593]: time="2025-07-11T00:37:42.455785849Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:42.458390 containerd[1593]: time="2025-07-11T00:37:42.458332751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:37:42.459019 containerd[1593]: time="2025-07-11T00:37:42.458965679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.224669069s" Jul 11 00:37:42.459019 containerd[1593]: time="2025-07-11T00:37:42.458997038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 11 00:37:42.462159 containerd[1593]: time="2025-07-11T00:37:42.462115123Z" level=info msg="CreateContainer within sandbox \"78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 11 00:37:42.478963 containerd[1593]: time="2025-07-11T00:37:42.478923632Z" level=info msg="Container c465ee6b9cea53c0cbb0bfc040ff4854485b719119582e71b5d3a086c7b01341: CDI devices from CRI Config.CDIDevices: []" Jul 11 00:37:42.489059 containerd[1593]: time="2025-07-11T00:37:42.489022152Z" level=info msg="CreateContainer within sandbox \"78366dca1a059661b84954fd531d178cfbb3003d2c6bb7d1a68f72e836c3ae55\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c465ee6b9cea53c0cbb0bfc040ff4854485b719119582e71b5d3a086c7b01341\"" Jul 11 00:37:42.489735 containerd[1593]: time="2025-07-11T00:37:42.489641435Z" level=info msg="StartContainer for \"c465ee6b9cea53c0cbb0bfc040ff4854485b719119582e71b5d3a086c7b01341\"" Jul 11 00:37:42.491222 containerd[1593]: time="2025-07-11T00:37:42.491189703Z" level=info msg="connecting to shim c465ee6b9cea53c0cbb0bfc040ff4854485b719119582e71b5d3a086c7b01341" address="unix:///run/containerd/s/ae0b248159d0643eb270c711846a5849dcd99d91ed27e811760cbc345283966f" protocol=ttrpc version=3 Jul 11 00:37:42.517530 systemd[1]: Started 
cri-containerd-c465ee6b9cea53c0cbb0bfc040ff4854485b719119582e71b5d3a086c7b01341.scope - libcontainer container c465ee6b9cea53c0cbb0bfc040ff4854485b719119582e71b5d3a086c7b01341. Jul 11 00:37:42.540195 sshd[5595]: Connection closed by 10.0.0.1 port 36228 Jul 11 00:37:42.540510 sshd-session[5589]: pam_unix(sshd:session): session closed for user core Jul 11 00:37:42.544935 systemd[1]: sshd@12-10.0.0.141:22-10.0.0.1:36228.service: Deactivated successfully. Jul 11 00:37:42.547421 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:37:42.550538 systemd-logind[1578]: Session 13 logged out. Waiting for processes to exit. Jul 11 00:37:42.552417 systemd-logind[1578]: Removed session 13. Jul 11 00:37:42.555445 kubelet[2727]: I0711 00:37:42.555398 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:37:42.570845 containerd[1593]: time="2025-07-11T00:37:42.570179411Z" level=info msg="StartContainer for \"c465ee6b9cea53c0cbb0bfc040ff4854485b719119582e71b5d3a086c7b01341\" returns successfully" Jul 11 00:37:43.563978 kubelet[2727]: I0711 00:37:43.563938 2727 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 11 00:37:43.563978 kubelet[2727]: I0711 00:37:43.563970 2727 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 11 00:37:47.553792 systemd[1]: Started sshd@13-10.0.0.141:22-10.0.0.1:49828.service - OpenSSH per-connection server daemon (10.0.0.1:49828). Jul 11 00:37:47.601135 sshd[5660]: Accepted publickey for core from 10.0.0.1 port 49828 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:37:47.602484 sshd-session[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:37:47.606792 systemd-logind[1578]: New session 14 of user core. Jul 11 00:37:47.616381 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 11 00:37:47.742657 sshd[5662]: Connection closed by 10.0.0.1 port 49828 Jul 11 00:37:47.742952 sshd-session[5660]: pam_unix(sshd:session): session closed for user core Jul 11 00:37:47.747488 systemd[1]: sshd@13-10.0.0.141:22-10.0.0.1:49828.service: Deactivated successfully. Jul 11 00:37:47.749628 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:37:47.750528 systemd-logind[1578]: Session 14 logged out. Waiting for processes to exit. Jul 11 00:37:47.751676 systemd-logind[1578]: Removed session 14. Jul 11 00:37:52.742386 containerd[1593]: time="2025-07-11T00:37:52.742335021Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9752f779afd7477ba0eb4b81787c35c7b09b8eddd40ea7c930511287b7ae7bd8\" id:\"ffad53321441554ad67339554b42ac45deaa456a62e08eb3a8ccf435172cc856\" pid:5686 exited_at:{seconds:1752194272 nanos:741959643}" Jul 11 00:37:52.753068 systemd[1]: Started sshd@14-10.0.0.141:22-10.0.0.1:49832.service - OpenSSH per-connection server daemon (10.0.0.1:49832). Jul 11 00:37:52.815518 sshd[5699]: Accepted publickey for core from 10.0.0.1 port 49832 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:37:52.816844 sshd-session[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:37:52.821014 systemd-logind[1578]: New session 15 of user core. Jul 11 00:37:52.830351 systemd[1]: Started session-15.scope - Session 15 of User core. 
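The TaskExit events in this block record exited_at as epoch seconds plus nanoseconds. Converting the pair from the entry above (seconds:1752194272 nanos:741959643) back to UTC lands on the same wall-clock instant that prefixes the journal line (Jul 11 00:37:52.742):

from datetime import datetime, timezone

# exited_at from the TaskExit entry above: {seconds:1752194272 nanos:741959643}
secs, nanos = 1752194272, 741959643
exited = datetime.fromtimestamp(secs, tz=timezone.utc)
print(f"{exited.isoformat()} + {nanos}ns")   # 2025-07-11T00:37:52+00:00 + 741959643ns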
Jul 11 00:37:52.956002 sshd[5701]: Connection closed by 10.0.0.1 port 49832 Jul 11 00:37:52.956336 sshd-session[5699]: pam_unix(sshd:session): session closed for user core Jul 11 00:37:52.960720 systemd[1]: sshd@14-10.0.0.141:22-10.0.0.1:49832.service: Deactivated successfully. Jul 11 00:37:52.962812 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:37:52.963629 systemd-logind[1578]: Session 15 logged out. Waiting for processes to exit. Jul 11 00:37:52.964821 systemd-logind[1578]: Removed session 15. Jul 11 00:37:57.969127 systemd[1]: Started sshd@15-10.0.0.141:22-10.0.0.1:34056.service - OpenSSH per-connection server daemon (10.0.0.1:34056). Jul 11 00:37:58.024411 sshd[5727]: Accepted publickey for core from 10.0.0.1 port 34056 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:37:58.025925 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:37:58.030231 systemd-logind[1578]: New session 16 of user core. Jul 11 00:37:58.036384 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 11 00:37:58.143075 sshd[5729]: Connection closed by 10.0.0.1 port 34056 Jul 11 00:37:58.143389 sshd-session[5727]: pam_unix(sshd:session): session closed for user core Jul 11 00:37:58.156783 systemd[1]: sshd@15-10.0.0.141:22-10.0.0.1:34056.service: Deactivated successfully. Jul 11 00:37:58.158516 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 00:37:58.159331 systemd-logind[1578]: Session 16 logged out. Waiting for processes to exit. Jul 11 00:37:58.162698 systemd[1]: Started sshd@16-10.0.0.141:22-10.0.0.1:34060.service - OpenSSH per-connection server daemon (10.0.0.1:34060). Jul 11 00:37:58.163391 systemd-logind[1578]: Removed session 16. Jul 11 00:37:58.209132 sshd[5743]: Accepted publickey for core from 10.0.0.1 port 34060 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:37:58.210357 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:37:58.214562 systemd-logind[1578]: New session 17 of user core. Jul 11 00:37:58.224374 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 11 00:37:58.426051 sshd[5745]: Connection closed by 10.0.0.1 port 34060 Jul 11 00:37:58.426522 sshd-session[5743]: pam_unix(sshd:session): session closed for user core Jul 11 00:37:58.435494 systemd[1]: sshd@16-10.0.0.141:22-10.0.0.1:34060.service: Deactivated successfully. Jul 11 00:37:58.437454 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:37:58.438205 systemd-logind[1578]: Session 17 logged out. Waiting for processes to exit. Jul 11 00:37:58.441601 systemd[1]: Started sshd@17-10.0.0.141:22-10.0.0.1:34064.service - OpenSSH per-connection server daemon (10.0.0.1:34064). Jul 11 00:37:58.442280 systemd-logind[1578]: Removed session 17. Jul 11 00:37:58.492878 sshd[5757]: Accepted publickey for core from 10.0.0.1 port 34064 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:37:58.494406 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:37:58.498602 systemd-logind[1578]: New session 18 of user core. Jul 11 00:37:58.509354 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 11 00:37:58.986562 containerd[1593]: time="2025-07-11T00:37:58.986498896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cd02b748ff8506573139707930b0734dacaf6f59ac204f90f8dd1c67bba91ac\" id:\"fde5835f8818bee3093c14417f690e443e87842a94eded1f7b5628b64f8a8397\" pid:5778 exited_at:{seconds:1752194278 nanos:986212895}" Jul 11 00:37:59.001700 kubelet[2727]: I0711 00:37:59.001588 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lgrvh" podStartSLOduration=41.223755172 podStartE2EDuration="54.001566282s" podCreationTimestamp="2025-07-11 00:37:05 +0000 UTC" firstStartedPulling="2025-07-11 00:37:29.682073113 +0000 UTC m=+42.272607616" lastFinishedPulling="2025-07-11 00:37:42.459884223 +0000 UTC m=+55.050418726" observedRunningTime="2025-07-11 00:37:43.178456066 +0000 UTC m=+55.768990569" watchObservedRunningTime="2025-07-11 00:37:59.001566282 +0000 UTC m=+71.592100785" Jul 11 00:37:59.240976 sshd[5759]: Connection closed by 10.0.0.1 port 34064 Jul 11 00:37:59.242420 sshd-session[5757]: pam_unix(sshd:session): session closed for user core Jul 11 00:37:59.252490 systemd[1]: sshd@17-10.0.0.141:22-10.0.0.1:34064.service: Deactivated successfully. Jul 11 00:37:59.256477 systemd[1]: session-18.scope: Deactivated successfully. Jul 11 00:37:59.257990 systemd-logind[1578]: Session 18 logged out. Waiting for processes to exit. Jul 11 00:37:59.261279 systemd[1]: Started sshd@18-10.0.0.141:22-10.0.0.1:34068.service - OpenSSH per-connection server daemon (10.0.0.1:34068). Jul 11 00:37:59.262220 systemd-logind[1578]: Removed session 18. Jul 11 00:37:59.311068 sshd[5803]: Accepted publickey for core from 10.0.0.1 port 34068 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:37:59.312597 sshd-session[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:37:59.317114 systemd-logind[1578]: New session 19 of user core. Jul 11 00:37:59.324366 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 11 00:37:59.996340 sshd[5805]: Connection closed by 10.0.0.1 port 34068 Jul 11 00:37:59.996778 sshd-session[5803]: pam_unix(sshd:session): session closed for user core Jul 11 00:38:00.009159 systemd[1]: sshd@18-10.0.0.141:22-10.0.0.1:34068.service: Deactivated successfully. Jul 11 00:38:00.011726 systemd[1]: session-19.scope: Deactivated successfully. Jul 11 00:38:00.012850 systemd-logind[1578]: Session 19 logged out. Waiting for processes to exit. Jul 11 00:38:00.016758 systemd[1]: Started sshd@19-10.0.0.141:22-10.0.0.1:34084.service - OpenSSH per-connection server daemon (10.0.0.1:34084). Jul 11 00:38:00.017745 systemd-logind[1578]: Removed session 19. Jul 11 00:38:00.069745 sshd[5816]: Accepted publickey for core from 10.0.0.1 port 34084 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:38:00.071271 sshd-session[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:38:00.075699 systemd-logind[1578]: New session 20 of user core. Jul 11 00:38:00.085362 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 11 00:38:00.196281 sshd[5818]: Connection closed by 10.0.0.1 port 34084 Jul 11 00:38:00.196565 sshd-session[5816]: pam_unix(sshd:session): session closed for user core Jul 11 00:38:00.200753 systemd[1]: sshd@19-10.0.0.141:22-10.0.0.1:34084.service: Deactivated successfully. Jul 11 00:38:00.202523 systemd[1]: session-20.scope: Deactivated successfully. 
Jul 11 00:38:00.203257 systemd-logind[1578]: Session 20 logged out. Waiting for processes to exit. Jul 11 00:38:00.204443 systemd-logind[1578]: Removed session 20. Jul 11 00:38:03.433289 containerd[1593]: time="2025-07-11T00:38:03.433251328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ee7db4a552e8eaca46bc909e3cf50cfc490b04c485d431a877eb9c8c3c8dc2\" id:\"384f367c2ca03733b43863c90333b962c1c5ecc8b0810bff44dc35dc1a4692f5\" pid:5843 exited_at:{seconds:1752194283 nanos:433005464}" Jul 11 00:38:05.216078 systemd[1]: Started sshd@20-10.0.0.141:22-10.0.0.1:34090.service - OpenSSH per-connection server daemon (10.0.0.1:34090). Jul 11 00:38:05.263274 sshd[5855]: Accepted publickey for core from 10.0.0.1 port 34090 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:38:05.264555 sshd-session[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:38:05.268395 systemd-logind[1578]: New session 21 of user core. Jul 11 00:38:05.279361 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 11 00:38:05.399815 sshd[5857]: Connection closed by 10.0.0.1 port 34090 Jul 11 00:38:05.400305 sshd-session[5855]: pam_unix(sshd:session): session closed for user core Jul 11 00:38:05.404584 systemd[1]: sshd@20-10.0.0.141:22-10.0.0.1:34090.service: Deactivated successfully. Jul 11 00:38:05.406619 systemd[1]: session-21.scope: Deactivated successfully. Jul 11 00:38:05.407419 systemd-logind[1578]: Session 21 logged out. Waiting for processes to exit. Jul 11 00:38:05.408685 systemd-logind[1578]: Removed session 21. Jul 11 00:38:10.412135 systemd[1]: Started sshd@21-10.0.0.141:22-10.0.0.1:40502.service - OpenSSH per-connection server daemon (10.0.0.1:40502). Jul 11 00:38:10.471614 sshd[5872]: Accepted publickey for core from 10.0.0.1 port 40502 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:38:10.473070 sshd-session[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:38:10.477542 systemd-logind[1578]: New session 22 of user core. Jul 11 00:38:10.497386 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 11 00:38:10.942811 sshd[5874]: Connection closed by 10.0.0.1 port 40502 Jul 11 00:38:10.943280 sshd-session[5872]: pam_unix(sshd:session): session closed for user core Jul 11 00:38:10.947569 systemd[1]: sshd@21-10.0.0.141:22-10.0.0.1:40502.service: Deactivated successfully. Jul 11 00:38:10.949615 systemd[1]: session-22.scope: Deactivated successfully. Jul 11 00:38:10.950428 systemd-logind[1578]: Session 22 logged out. Waiting for processes to exit. Jul 11 00:38:10.951511 systemd-logind[1578]: Removed session 22. Jul 11 00:38:11.131795 containerd[1593]: time="2025-07-11T00:38:11.131717412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73ee7db4a552e8eaca46bc909e3cf50cfc490b04c485d431a877eb9c8c3c8dc2\" id:\"2a1cfe5b9027c37524b2bf897345edb459638b46c4fa5b49ebef216c093f69e9\" pid:5899 exited_at:{seconds:1752194291 nanos:131517791}" Jul 11 00:38:12.057152 containerd[1593]: time="2025-07-11T00:38:12.057107346Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9752f779afd7477ba0eb4b81787c35c7b09b8eddd40ea7c930511287b7ae7bd8\" id:\"85b9600c88e378ba974537a5b8ba8b505069ce9732f01b6af39da7ed980130ad\" pid:5921 exited_at:{seconds:1752194292 nanos:56723492}" Jul 11 00:38:15.956285 systemd[1]: Started sshd@22-10.0.0.141:22-10.0.0.1:53160.service - OpenSSH per-connection server daemon (10.0.0.1:53160). 
Jul 11 00:38:16.005426 sshd[5944]: Accepted publickey for core from 10.0.0.1 port 53160 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:38:16.006729 sshd-session[5944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:38:16.011035 systemd-logind[1578]: New session 23 of user core. Jul 11 00:38:16.017352 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 11 00:38:16.134573 sshd[5946]: Connection closed by 10.0.0.1 port 53160 Jul 11 00:38:16.134871 sshd-session[5944]: pam_unix(sshd:session): session closed for user core Jul 11 00:38:16.139143 systemd[1]: sshd@22-10.0.0.141:22-10.0.0.1:53160.service: Deactivated successfully. Jul 11 00:38:16.141137 systemd[1]: session-23.scope: Deactivated successfully. Jul 11 00:38:16.142027 systemd-logind[1578]: Session 23 logged out. Waiting for processes to exit. Jul 11 00:38:16.143142 systemd-logind[1578]: Removed session 23. Jul 11 00:38:21.151150 systemd[1]: Started sshd@23-10.0.0.141:22-10.0.0.1:53166.service - OpenSSH per-connection server daemon (10.0.0.1:53166). Jul 11 00:38:21.217614 sshd[5959]: Accepted publickey for core from 10.0.0.1 port 53166 ssh2: RSA SHA256:hoaiEKDUTRfaHyPAvRLkVzFzRpxivuut7MCf1tDGhBQ Jul 11 00:38:21.219274 sshd-session[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:38:21.223532 systemd-logind[1578]: New session 24 of user core. Jul 11 00:38:21.231347 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 11 00:38:21.437498 sshd[5961]: Connection closed by 10.0.0.1 port 53166 Jul 11 00:38:21.437943 sshd-session[5959]: pam_unix(sshd:session): session closed for user core Jul 11 00:38:21.442283 systemd[1]: sshd@23-10.0.0.141:22-10.0.0.1:53166.service: Deactivated successfully. Jul 11 00:38:21.444340 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 00:38:21.445104 systemd-logind[1578]: Session 24 logged out. Waiting for processes to exit. Jul 11 00:38:21.446359 systemd-logind[1578]: Removed session 24.
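The tail of this log is a series of very short SSH sessions for user core (sessions 13 through 24), each opened and torn down within about a second. A sketch that pairs the systemd-logind "New session" / "Removed session" lines from journal text like the above and reports per-session durations; the timestamp format is assumed to match the "Jul 11 00:38:21.223532" prefixes shown here:

import re
from datetime import datetime

STAMP = r"(?P<stamp>\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d{6})"
NEW = re.compile(STAMP + r" .*systemd-logind\[\d+\]: New session (?P<id>\d+) of user")
REMOVED = re.compile(STAMP + r" .*systemd-logind\[\d+\]: Removed session (?P<id>\d+)\.")

def parse(stamp: str) -> datetime:
    # The journal prefix has no year; 2025 is taken from the surrounding log.
    return datetime.strptime(f"2025 {stamp}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    opened, durations = {}, {}
    for line in lines:
        if m := NEW.search(line):
            opened[m["id"]] = parse(m["stamp"])
        elif (m := REMOVED.search(line)) and m["id"] in opened:
            durations[m["id"]] = (parse(m["stamp"]) - opened.pop(m["id"])).total_seconds()
    return durations

# Two lines taken verbatim from the entries above.
sample = [
    "Jul 11 00:38:21.223532 systemd-logind[1578]: New session 24 of user core.",
    "Jul 11 00:38:21.446359 systemd-logind[1578]: Removed session 24.",
]
print(session_durations(sample))   # {'24': 0.222827}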