Jul 11 05:25:02.878202 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jul 11 03:36:05 -00 2025 Jul 11 05:25:02.878223 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dfe1af008de84ad21c9c6e2b52b45ca0aecff9e5872ea6ea8c4ddf6ebe77d5c1 Jul 11 05:25:02.878235 kernel: BIOS-provided physical RAM map: Jul 11 05:25:02.878241 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 11 05:25:02.878247 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 11 05:25:02.878254 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 11 05:25:02.878261 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 11 05:25:02.878268 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 11 05:25:02.878277 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 11 05:25:02.878283 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 11 05:25:02.878290 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jul 11 05:25:02.878300 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 11 05:25:02.878310 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 11 05:25:02.878324 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 11 05:25:02.878350 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 11 05:25:02.878365 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 11 05:25:02.878376 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 11 05:25:02.878383 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 11 05:25:02.878389 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 11 05:25:02.878396 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 11 05:25:02.878403 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 11 05:25:02.878410 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 11 05:25:02.878417 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 11 05:25:02.878423 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 11 05:25:02.878430 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 11 05:25:02.878439 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 11 05:25:02.878446 kernel: NX (Execute Disable) protection: active Jul 11 05:25:02.878453 kernel: APIC: Static calls initialized Jul 11 05:25:02.878460 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jul 11 05:25:02.878467 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jul 11 05:25:02.878473 kernel: extended physical RAM map: Jul 11 05:25:02.878480 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 11 05:25:02.878487 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 11 05:25:02.878494 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 11 05:25:02.878501 kernel: reserve 
setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jul 11 05:25:02.878508 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 11 05:25:02.878517 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 11 05:25:02.878524 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 11 05:25:02.878531 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jul 11 05:25:02.878538 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jul 11 05:25:02.878548 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jul 11 05:25:02.878555 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jul 11 05:25:02.878565 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jul 11 05:25:02.878572 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 11 05:25:02.878579 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 11 05:25:02.878586 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 11 05:25:02.878593 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 11 05:25:02.878600 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 11 05:25:02.878607 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 11 05:25:02.878623 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 11 05:25:02.878631 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 11 05:25:02.878646 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 11 05:25:02.878655 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 11 05:25:02.878662 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 11 05:25:02.878669 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 11 05:25:02.878676 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 11 05:25:02.878684 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 11 05:25:02.878691 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 11 05:25:02.878698 kernel: efi: EFI v2.7 by EDK II Jul 11 05:25:02.878705 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jul 11 05:25:02.878712 kernel: random: crng init done Jul 11 05:25:02.878719 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jul 11 05:25:02.878727 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jul 11 05:25:02.878739 kernel: secureboot: Secure boot disabled Jul 11 05:25:02.878748 kernel: SMBIOS 2.8 present. 
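The BIOS-e820 and extended physical RAM maps dumped above can also be read back from a running system. A minimal sketch in Python, assuming the kernel exposes the standard /sys/firmware/memmap layout (one numbered directory per range with start, end and type files):

    import pathlib

    # Walk /sys/firmware/memmap/<n>/ and print each firmware-provided range,
    # mirroring the "BIOS-e820: [mem start-end] type" lines in the log above.
    for entry in sorted(pathlib.Path("/sys/firmware/memmap").iterdir(),
                        key=lambda p: int(p.name)):
        start = int((entry / "start").read_text().strip(), 16)
        end = int((entry / "end").read_text().strip(), 16)
        kind = (entry / "type").read_text().strip()
        print(f"[mem {start:#018x}-{end:#018x}] {kind}")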
Jul 11 05:25:02.878757 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jul 11 05:25:02.878767 kernel: DMI: Memory slots populated: 1/1 Jul 11 05:25:02.878776 kernel: Hypervisor detected: KVM Jul 11 05:25:02.878785 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 11 05:25:02.878795 kernel: kvm-clock: using sched offset of 4095917672 cycles Jul 11 05:25:02.878804 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 11 05:25:02.878812 kernel: tsc: Detected 2794.750 MHz processor Jul 11 05:25:02.878820 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 11 05:25:02.878828 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 11 05:25:02.878839 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jul 11 05:25:02.878846 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 11 05:25:02.878854 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 11 05:25:02.878861 kernel: Using GB pages for direct mapping Jul 11 05:25:02.878868 kernel: ACPI: Early table checksum verification disabled Jul 11 05:25:02.878876 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 11 05:25:02.878883 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 11 05:25:02.878891 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 05:25:02.878898 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 05:25:02.878908 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 11 05:25:02.878915 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 05:25:02.878922 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 05:25:02.878930 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 05:25:02.878937 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 05:25:02.878944 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 11 05:25:02.878952 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jul 11 05:25:02.878959 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jul 11 05:25:02.878969 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 11 05:25:02.878976 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jul 11 05:25:02.878996 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jul 11 05:25:02.879003 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jul 11 05:25:02.879010 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jul 11 05:25:02.879018 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jul 11 05:25:02.879025 kernel: No NUMA configuration found Jul 11 05:25:02.879032 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jul 11 05:25:02.879039 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jul 11 05:25:02.879047 kernel: Zone ranges: Jul 11 05:25:02.879057 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 11 05:25:02.879064 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jul 11 05:25:02.879071 kernel: Normal empty Jul 11 05:25:02.879079 kernel: Device empty Jul 11 05:25:02.879086 kernel: Movable zone start for each node Jul 11 05:25:02.879093 
kernel: Early memory node ranges Jul 11 05:25:02.879100 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 11 05:25:02.879108 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 11 05:25:02.879115 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 11 05:25:02.879126 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jul 11 05:25:02.879134 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jul 11 05:25:02.879143 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jul 11 05:25:02.879152 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jul 11 05:25:02.879159 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jul 11 05:25:02.879166 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jul 11 05:25:02.879173 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 11 05:25:02.879181 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 11 05:25:02.879198 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 11 05:25:02.879205 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 11 05:25:02.879213 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jul 11 05:25:02.879220 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jul 11 05:25:02.879230 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 11 05:25:02.879238 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jul 11 05:25:02.879245 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jul 11 05:25:02.879253 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 11 05:25:02.879260 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 11 05:25:02.879270 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 11 05:25:02.879278 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 11 05:25:02.879285 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 11 05:25:02.879293 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 11 05:25:02.879301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 11 05:25:02.879308 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 11 05:25:02.879316 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 11 05:25:02.879323 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 11 05:25:02.879331 kernel: TSC deadline timer available Jul 11 05:25:02.879358 kernel: CPU topo: Max. logical packages: 1 Jul 11 05:25:02.879367 kernel: CPU topo: Max. logical dies: 1 Jul 11 05:25:02.879374 kernel: CPU topo: Max. dies per package: 1 Jul 11 05:25:02.879381 kernel: CPU topo: Max. threads per core: 1 Jul 11 05:25:02.879389 kernel: CPU topo: Num. cores per package: 4 Jul 11 05:25:02.879396 kernel: CPU topo: Num. 
threads per package: 4 Jul 11 05:25:02.879404 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jul 11 05:25:02.879411 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 11 05:25:02.879419 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 11 05:25:02.879429 kernel: kvm-guest: setup PV sched yield Jul 11 05:25:02.879437 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jul 11 05:25:02.879444 kernel: Booting paravirtualized kernel on KVM Jul 11 05:25:02.879452 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 11 05:25:02.879459 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 11 05:25:02.879467 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jul 11 05:25:02.879475 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jul 11 05:25:02.879482 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 11 05:25:02.879489 kernel: kvm-guest: PV spinlocks enabled Jul 11 05:25:02.879499 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 11 05:25:02.879508 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dfe1af008de84ad21c9c6e2b52b45ca0aecff9e5872ea6ea8c4ddf6ebe77d5c1 Jul 11 05:25:02.879516 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 11 05:25:02.879524 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 11 05:25:02.879531 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 11 05:25:02.879539 kernel: Fallback order for Node 0: 0 Jul 11 05:25:02.879546 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jul 11 05:25:02.879554 kernel: Policy zone: DMA32 Jul 11 05:25:02.879564 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 11 05:25:02.879572 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 11 05:25:02.879579 kernel: ftrace: allocating 40097 entries in 157 pages Jul 11 05:25:02.879587 kernel: ftrace: allocated 157 pages with 5 groups Jul 11 05:25:02.879595 kernel: Dynamic Preempt: voluntary Jul 11 05:25:02.879602 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 11 05:25:02.879614 kernel: rcu: RCU event tracing is enabled. Jul 11 05:25:02.879622 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 11 05:25:02.879630 kernel: Trampoline variant of Tasks RCU enabled. Jul 11 05:25:02.879638 kernel: Rude variant of Tasks RCU enabled. Jul 11 05:25:02.879648 kernel: Tracing variant of Tasks RCU enabled. Jul 11 05:25:02.879656 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 11 05:25:02.879663 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 11 05:25:02.879671 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 11 05:25:02.879679 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 11 05:25:02.879687 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
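The kernel command line echoed above (note the duplicated rootflags=rw and mount.usrflags=ro at the front) can be split into individual parameters from userspace. A small sketch reading /proc/cmdline:

    import shlex

    # Split the booted kernel command line into parameters and separate
    # flag-style switches from key=value options; later duplicates win.
    params = shlex.split(open("/proc/cmdline").read())
    options = dict(p.split("=", 1) for p in params if "=" in p)
    flags = [p for p in params if "=" not in p]
    print(options.get("root"), options.get("verity.usrhash"), flags)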
Jul 11 05:25:02.879694 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 11 05:25:02.879702 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 11 05:25:02.879712 kernel: Console: colour dummy device 80x25 Jul 11 05:25:02.879720 kernel: printk: legacy console [ttyS0] enabled Jul 11 05:25:02.879727 kernel: ACPI: Core revision 20240827 Jul 11 05:25:02.879735 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 11 05:25:02.879742 kernel: APIC: Switch to symmetric I/O mode setup Jul 11 05:25:02.879750 kernel: x2apic enabled Jul 11 05:25:02.879767 kernel: APIC: Switched APIC routing to: physical x2apic Jul 11 05:25:02.879775 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 11 05:25:02.879783 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 11 05:25:02.879804 kernel: kvm-guest: setup PV IPIs Jul 11 05:25:02.879815 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 11 05:25:02.879823 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Jul 11 05:25:02.879831 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Jul 11 05:25:02.879839 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 11 05:25:02.879846 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 11 05:25:02.879854 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 11 05:25:02.879862 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 11 05:25:02.879873 kernel: Spectre V2 : Mitigation: Retpolines Jul 11 05:25:02.879884 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 11 05:25:02.879892 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 11 05:25:02.879900 kernel: RETBleed: Mitigation: untrained return thunk Jul 11 05:25:02.879907 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 11 05:25:02.879915 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 11 05:25:02.879923 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 11 05:25:02.879931 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 11 05:25:02.879939 kernel: x86/bugs: return thunk changed Jul 11 05:25:02.879947 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 11 05:25:02.879957 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 11 05:25:02.879964 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 11 05:25:02.879972 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 11 05:25:02.880062 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 11 05:25:02.880071 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 11 05:25:02.880079 kernel: Freeing SMP alternatives memory: 32K Jul 11 05:25:02.880089 kernel: pid_max: default: 32768 minimum: 301 Jul 11 05:25:02.880098 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 11 05:25:02.880107 kernel: landlock: Up and running. Jul 11 05:25:02.880118 kernel: SELinux: Initializing. 
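The mitigation states reported above (Spectre V1/V2, RETBleed, Speculative Return Stack Overflow, and so on) are also exported per vulnerability under sysfs. A quick sketch, assuming the standard /sys/devices/system/cpu/vulnerabilities interface:

    import pathlib

    # Print the kernel's view of each CPU vulnerability and its active
    # mitigation, matching the "Mitigation: ..." lines in the boot log.
    vuln_dir = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")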
Jul 11 05:25:02.880126 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 11 05:25:02.880134 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 11 05:25:02.880141 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 11 05:25:02.880149 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 11 05:25:02.880157 kernel: ... version: 0 Jul 11 05:25:02.880164 kernel: ... bit width: 48 Jul 11 05:25:02.880172 kernel: ... generic registers: 6 Jul 11 05:25:02.880179 kernel: ... value mask: 0000ffffffffffff Jul 11 05:25:02.880189 kernel: ... max period: 00007fffffffffff Jul 11 05:25:02.880196 kernel: ... fixed-purpose events: 0 Jul 11 05:25:02.880204 kernel: ... event mask: 000000000000003f Jul 11 05:25:02.880211 kernel: signal: max sigframe size: 1776 Jul 11 05:25:02.880219 kernel: rcu: Hierarchical SRCU implementation. Jul 11 05:25:02.880227 kernel: rcu: Max phase no-delay instances is 400. Jul 11 05:25:02.880234 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 11 05:25:02.880242 kernel: smp: Bringing up secondary CPUs ... Jul 11 05:25:02.880249 kernel: smpboot: x86: Booting SMP configuration: Jul 11 05:25:02.880259 kernel: .... node #0, CPUs: #1 #2 #3 Jul 11 05:25:02.880266 kernel: smp: Brought up 1 node, 4 CPUs Jul 11 05:25:02.880274 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jul 11 05:25:02.880282 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54620K init, 2348K bss, 137196K reserved, 0K cma-reserved) Jul 11 05:25:02.880290 kernel: devtmpfs: initialized Jul 11 05:25:02.880297 kernel: x86/mm: Memory block size: 128MB Jul 11 05:25:02.880305 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 11 05:25:02.880313 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 11 05:25:02.880320 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jul 11 05:25:02.880330 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 11 05:25:02.880346 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jul 11 05:25:02.880354 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 11 05:25:02.880362 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 11 05:25:02.880370 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 11 05:25:02.880377 kernel: pinctrl core: initialized pinctrl subsystem Jul 11 05:25:02.880385 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 11 05:25:02.880393 kernel: audit: initializing netlink subsys (disabled) Jul 11 05:25:02.880403 kernel: audit: type=2000 audit(1752211500.126:1): state=initialized audit_enabled=0 res=1 Jul 11 05:25:02.880410 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 11 05:25:02.880418 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 11 05:25:02.880425 kernel: cpuidle: using governor menu Jul 11 05:25:02.880433 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 11 05:25:02.880440 kernel: dca service started, version 1.12.1 Jul 11 05:25:02.880448 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 11 05:25:02.880456 kernel: PCI: Using 
configuration type 1 for base access Jul 11 05:25:02.880463 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 11 05:25:02.880473 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 11 05:25:02.880481 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 11 05:25:02.880488 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 11 05:25:02.880496 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 11 05:25:02.880503 kernel: ACPI: Added _OSI(Module Device) Jul 11 05:25:02.880511 kernel: ACPI: Added _OSI(Processor Device) Jul 11 05:25:02.880518 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 11 05:25:02.880526 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 11 05:25:02.880533 kernel: ACPI: Interpreter enabled Jul 11 05:25:02.880541 kernel: ACPI: PM: (supports S0 S3 S5) Jul 11 05:25:02.880550 kernel: ACPI: Using IOAPIC for interrupt routing Jul 11 05:25:02.880558 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 11 05:25:02.880566 kernel: PCI: Using E820 reservations for host bridge windows Jul 11 05:25:02.880573 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 11 05:25:02.880581 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 11 05:25:02.880814 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 11 05:25:02.880939 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 11 05:25:02.881076 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 11 05:25:02.881087 kernel: PCI host bridge to bus 0000:00 Jul 11 05:25:02.881242 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 11 05:25:02.881362 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 11 05:25:02.881473 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 11 05:25:02.881579 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jul 11 05:25:02.881693 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jul 11 05:25:02.881803 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jul 11 05:25:02.881909 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 11 05:25:02.882116 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jul 11 05:25:02.882253 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jul 11 05:25:02.882381 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jul 11 05:25:02.882497 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jul 11 05:25:02.882617 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 11 05:25:02.882733 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 11 05:25:02.882870 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 11 05:25:02.883004 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jul 11 05:25:02.883124 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jul 11 05:25:02.883241 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jul 11 05:25:02.883387 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 11 05:25:02.883511 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jul 
11 05:25:02.883628 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jul 11 05:25:02.883744 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jul 11 05:25:02.883896 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 11 05:25:02.884031 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jul 11 05:25:02.884153 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jul 11 05:25:02.884268 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jul 11 05:25:02.884401 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jul 11 05:25:02.884539 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jul 11 05:25:02.884656 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 11 05:25:02.884786 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jul 11 05:25:02.884902 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jul 11 05:25:02.885034 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jul 11 05:25:02.885173 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jul 11 05:25:02.885296 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jul 11 05:25:02.885307 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 11 05:25:02.885315 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 11 05:25:02.885323 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 11 05:25:02.885330 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 11 05:25:02.885346 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 11 05:25:02.885354 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 11 05:25:02.885363 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 11 05:25:02.885373 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 11 05:25:02.885381 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 11 05:25:02.885389 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 11 05:25:02.885397 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 11 05:25:02.885404 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 11 05:25:02.885412 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 11 05:25:02.885420 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 11 05:25:02.885428 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 11 05:25:02.885435 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 11 05:25:02.885446 kernel: iommu: Default domain type: Translated Jul 11 05:25:02.885454 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 11 05:25:02.885461 kernel: efivars: Registered efivars operations Jul 11 05:25:02.885469 kernel: PCI: Using ACPI for IRQ routing Jul 11 05:25:02.885477 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 11 05:25:02.885485 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 11 05:25:02.885492 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jul 11 05:25:02.885500 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jul 11 05:25:02.885508 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jul 11 05:25:02.885517 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jul 11 05:25:02.885525 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jul 11 05:25:02.885533 kernel: e820: reserve 
RAM buffer [mem 0x9ce91000-0x9fffffff] Jul 11 05:25:02.885541 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jul 11 05:25:02.885660 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 11 05:25:02.885776 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 11 05:25:02.885891 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 11 05:25:02.885902 kernel: vgaarb: loaded Jul 11 05:25:02.885913 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 11 05:25:02.885921 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 11 05:25:02.885928 kernel: clocksource: Switched to clocksource kvm-clock Jul 11 05:25:02.885936 kernel: VFS: Disk quotas dquot_6.6.0 Jul 11 05:25:02.885944 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 11 05:25:02.885952 kernel: pnp: PnP ACPI init Jul 11 05:25:02.886118 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jul 11 05:25:02.886147 kernel: pnp: PnP ACPI: found 6 devices Jul 11 05:25:02.886160 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 11 05:25:02.886168 kernel: NET: Registered PF_INET protocol family Jul 11 05:25:02.886176 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 11 05:25:02.886184 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 11 05:25:02.886192 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 11 05:25:02.886200 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 11 05:25:02.886208 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 11 05:25:02.886216 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 11 05:25:02.886224 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 11 05:25:02.886235 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 11 05:25:02.886243 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 11 05:25:02.886251 kernel: NET: Registered PF_XDP protocol family Jul 11 05:25:02.886382 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jul 11 05:25:02.886550 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jul 11 05:25:02.886663 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 11 05:25:02.886769 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 11 05:25:02.886876 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 11 05:25:02.887001 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jul 11 05:25:02.887114 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jul 11 05:25:02.887221 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jul 11 05:25:02.887231 kernel: PCI: CLS 0 bytes, default 64 Jul 11 05:25:02.887240 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Jul 11 05:25:02.887248 kernel: Initialise system trusted keyrings Jul 11 05:25:02.887256 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 11 05:25:02.887264 kernel: Key type asymmetric registered Jul 11 05:25:02.887276 kernel: Asymmetric key parser 'x509' registered Jul 11 05:25:02.887284 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 11 
05:25:02.887292 kernel: io scheduler mq-deadline registered Jul 11 05:25:02.887302 kernel: io scheduler kyber registered Jul 11 05:25:02.887310 kernel: io scheduler bfq registered Jul 11 05:25:02.887319 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 11 05:25:02.887330 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 11 05:25:02.887346 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 11 05:25:02.887354 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 11 05:25:02.887363 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 11 05:25:02.887372 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 11 05:25:02.887380 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 11 05:25:02.887389 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 11 05:25:02.887397 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 11 05:25:02.887529 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 11 05:25:02.887545 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 11 05:25:02.887656 kernel: rtc_cmos 00:04: registered as rtc0 Jul 11 05:25:02.887766 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T05:25:02 UTC (1752211502) Jul 11 05:25:02.887876 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jul 11 05:25:02.887886 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 11 05:25:02.887895 kernel: efifb: probing for efifb Jul 11 05:25:02.887903 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jul 11 05:25:02.887911 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jul 11 05:25:02.887923 kernel: efifb: scrolling: redraw Jul 11 05:25:02.887931 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 11 05:25:02.887940 kernel: Console: switching to colour frame buffer device 160x50 Jul 11 05:25:02.887948 kernel: fb0: EFI VGA frame buffer device Jul 11 05:25:02.887956 kernel: pstore: Using crash dump compression: deflate Jul 11 05:25:02.887964 kernel: pstore: Registered efi_pstore as persistent store backend Jul 11 05:25:02.887973 kernel: NET: Registered PF_INET6 protocol family Jul 11 05:25:02.887994 kernel: Segment Routing with IPv6 Jul 11 05:25:02.888002 kernel: In-situ OAM (IOAM) with IPv6 Jul 11 05:25:02.888014 kernel: NET: Registered PF_PACKET protocol family Jul 11 05:25:02.888022 kernel: Key type dns_resolver registered Jul 11 05:25:02.888030 kernel: IPI shorthand broadcast: enabled Jul 11 05:25:02.888038 kernel: sched_clock: Marking stable (3743002188, 153234805)->(3912209942, -15972949) Jul 11 05:25:02.888046 kernel: registered taskstats version 1 Jul 11 05:25:02.888054 kernel: Loading compiled-in X.509 certificates Jul 11 05:25:02.888062 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 9703a4b3d6547675037b9597aa24472a5380cc2e' Jul 11 05:25:02.888071 kernel: Demotion targets for Node 0: null Jul 11 05:25:02.888079 kernel: Key type .fscrypt registered Jul 11 05:25:02.888089 kernel: Key type fscrypt-provisioning registered Jul 11 05:25:02.888097 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 11 05:25:02.888105 kernel: ima: Allocated hash algorithm: sha1 Jul 11 05:25:02.888113 kernel: ima: No architecture policies found Jul 11 05:25:02.888121 kernel: clk: Disabling unused clocks Jul 11 05:25:02.888129 kernel: Warning: unable to open an initial console. 
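The three I/O schedulers registered above (mq-deadline, kyber, bfq) can be listed and selected per block device through sysfs. A minimal sketch for the virtio disk that appears later in this log (vda):

    import pathlib

    # The bracketed entry in the scheduler file is the one currently in use.
    sched = pathlib.Path("/sys/block/vda/queue/scheduler").read_text().strip()
    print(sched)  # e.g. "[mq-deadline] kyber bfq none"

    # Switching schedulers is a write to the same file (needs root):
    # pathlib.Path("/sys/block/vda/queue/scheduler").write_text("bfq")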
Jul 11 05:25:02.888137 kernel: Freeing unused kernel image (initmem) memory: 54620K Jul 11 05:25:02.888145 kernel: Write protecting the kernel read-only data: 24576k Jul 11 05:25:02.888153 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 11 05:25:02.888164 kernel: Run /init as init process Jul 11 05:25:02.888172 kernel: with arguments: Jul 11 05:25:02.888180 kernel: /init Jul 11 05:25:02.888188 kernel: with environment: Jul 11 05:25:02.888195 kernel: HOME=/ Jul 11 05:25:02.888203 kernel: TERM=linux Jul 11 05:25:02.888211 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 11 05:25:02.888223 systemd[1]: Successfully made /usr/ read-only. Jul 11 05:25:02.888237 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 11 05:25:02.888246 systemd[1]: Detected virtualization kvm. Jul 11 05:25:02.888255 systemd[1]: Detected architecture x86-64. Jul 11 05:25:02.888263 systemd[1]: Running in initrd. Jul 11 05:25:02.888272 systemd[1]: No hostname configured, using default hostname. Jul 11 05:25:02.888281 systemd[1]: Hostname set to . Jul 11 05:25:02.888289 systemd[1]: Initializing machine ID from VM UUID. Jul 11 05:25:02.888298 systemd[1]: Queued start job for default target initrd.target. Jul 11 05:25:02.888309 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 05:25:02.888318 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 05:25:02.888327 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 11 05:25:02.888336 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 05:25:02.888353 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 11 05:25:02.888363 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 11 05:25:02.888375 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 11 05:25:02.888384 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 11 05:25:02.888393 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 05:25:02.888401 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 05:25:02.888410 systemd[1]: Reached target paths.target - Path Units. Jul 11 05:25:02.888418 systemd[1]: Reached target slices.target - Slice Units. Jul 11 05:25:02.888427 systemd[1]: Reached target swap.target - Swaps. Jul 11 05:25:02.888435 systemd[1]: Reached target timers.target - Timer Units. Jul 11 05:25:02.888444 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 05:25:02.888455 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 05:25:02.888463 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 11 05:25:02.888472 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 11 05:25:02.888481 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 11 05:25:02.888489 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 05:25:02.888498 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 05:25:02.888507 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 05:25:02.888515 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 11 05:25:02.888526 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 05:25:02.888534 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 11 05:25:02.888543 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 11 05:25:02.888552 systemd[1]: Starting systemd-fsck-usr.service... Jul 11 05:25:02.888561 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 05:25:02.888570 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 05:25:02.888578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 05:25:02.888589 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 11 05:25:02.888601 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 05:25:02.888609 systemd[1]: Finished systemd-fsck-usr.service. Jul 11 05:25:02.888618 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 05:25:02.888649 systemd-journald[220]: Collecting audit messages is disabled. Jul 11 05:25:02.888673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 05:25:02.888683 systemd-journald[220]: Journal started Jul 11 05:25:02.888702 systemd-journald[220]: Runtime Journal (/run/log/journal/b8a55951b1e64e76a7435e0ab1a0eba9) is 6M, max 48.5M, 42.4M free. Jul 11 05:25:02.878665 systemd-modules-load[221]: Inserted module 'overlay' Jul 11 05:25:02.892150 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 05:25:02.893006 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 05:25:02.893785 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 05:25:02.905733 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 05:25:02.907207 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 11 05:25:02.908614 systemd-modules-load[221]: Inserted module 'br_netfilter' Jul 11 05:25:02.911582 kernel: Bridge firewalling registered Jul 11 05:25:02.909275 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 05:25:02.909866 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 05:25:02.911543 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 05:25:02.923735 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 11 05:25:02.923776 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 05:25:02.927552 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 05:25:02.928937 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 11 05:25:02.929563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 05:25:02.934211 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 11 05:25:02.936114 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 05:25:02.960235 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dfe1af008de84ad21c9c6e2b52b45ca0aecff9e5872ea6ea8c4ddf6ebe77d5c1 Jul 11 05:25:02.978387 systemd-resolved[263]: Positive Trust Anchors: Jul 11 05:25:02.978405 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 05:25:02.978434 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 05:25:02.980850 systemd-resolved[263]: Defaulting to hostname 'linux'. Jul 11 05:25:02.981901 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 05:25:02.987519 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 05:25:03.069024 kernel: SCSI subsystem initialized Jul 11 05:25:03.079013 kernel: Loading iSCSI transport class v2.0-870. Jul 11 05:25:03.089014 kernel: iscsi: registered transport (tcp) Jul 11 05:25:03.109313 kernel: iscsi: registered transport (qla4xxx) Jul 11 05:25:03.109355 kernel: QLogic iSCSI HBA Driver Jul 11 05:25:03.129861 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 05:25:03.147476 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 05:25:03.148131 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 05:25:03.206360 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 11 05:25:03.209813 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 11 05:25:03.265026 kernel: raid6: avx2x4 gen() 30416 MB/s Jul 11 05:25:03.282015 kernel: raid6: avx2x2 gen() 31284 MB/s Jul 11 05:25:03.299033 kernel: raid6: avx2x1 gen() 25716 MB/s Jul 11 05:25:03.299075 kernel: raid6: using algorithm avx2x2 gen() 31284 MB/s Jul 11 05:25:03.317066 kernel: raid6: .... xor() 19899 MB/s, rmw enabled Jul 11 05:25:03.317102 kernel: raid6: using avx2x2 recovery algorithm Jul 11 05:25:03.337015 kernel: xor: automatically using best checksumming function avx Jul 11 05:25:03.500027 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 11 05:25:03.508733 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 11 05:25:03.510793 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 05:25:03.537622 systemd-udevd[473]: Using default interface naming scheme 'v255'. 
Jul 11 05:25:03.543220 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 05:25:03.548526 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 11 05:25:03.580401 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation Jul 11 05:25:03.609336 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 05:25:03.613025 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 05:25:03.783403 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 05:25:03.787501 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 11 05:25:03.828057 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 11 05:25:03.828261 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 11 05:25:03.836370 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 11 05:25:03.836391 kernel: GPT:9289727 != 19775487 Jul 11 05:25:03.836402 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 11 05:25:03.838450 kernel: GPT:9289727 != 19775487 Jul 11 05:25:03.838493 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 11 05:25:03.838505 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 05:25:03.843007 kernel: libata version 3.00 loaded. Jul 11 05:25:03.843125 kernel: cryptd: max_cpu_qlen set to 1000 Jul 11 05:25:03.854189 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 05:25:03.856959 kernel: ahci 0000:00:1f.2: version 3.0 Jul 11 05:25:03.857151 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 11 05:25:03.854342 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 05:25:03.862760 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 11 05:25:03.862966 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 11 05:25:03.863512 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 11 05:25:03.865422 kernel: AES CTR mode by8 optimization enabled Jul 11 05:25:03.859909 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 05:25:03.863338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 05:25:03.867611 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 11 05:25:03.875004 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 11 05:25:03.891004 kernel: scsi host0: ahci Jul 11 05:25:03.892998 kernel: scsi host1: ahci Jul 11 05:25:03.903005 kernel: scsi host2: ahci Jul 11 05:25:03.903265 kernel: scsi host3: ahci Jul 11 05:25:03.903446 kernel: scsi host4: ahci Jul 11 05:25:03.906350 kernel: scsi host5: ahci Jul 11 05:25:03.906541 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jul 11 05:25:03.906554 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jul 11 05:25:03.906945 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
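The GPT warnings above are just arithmetic: the backup GPT header should sit on the last LBA of the disk, and 9289727 != 19775487 typically means the image was built for a smaller disk and copied onto a larger one. A small worked check:

    SECTOR = 512

    total_sectors = 19775488          # size virtio_blk reports for vda
    backup_lba_found = 9289727        # where the alternate GPT header actually is
    backup_lba_expected = total_sectors - 1   # GPT keeps its backup on the last LBA

    print(backup_lba_expected)                      # 19775487, as in the log
    print((backup_lba_found + 1) * SECTOR / 2**30)  # ~4.4 GiB: apparent original image size

Moving the backup header to the end of the grown disk (for example with GNU Parted, as the log itself suggests) clears the warning.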
Jul 11 05:25:03.910924 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jul 11 05:25:03.910938 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jul 11 05:25:03.910948 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jul 11 05:25:03.910958 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jul 11 05:25:03.923126 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 11 05:25:03.939446 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 11 05:25:03.950391 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 05:25:03.959196 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 11 05:25:03.962324 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 11 05:25:03.965576 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 11 05:25:04.000656 disk-uuid[635]: Primary Header is updated. Jul 11 05:25:04.000656 disk-uuid[635]: Secondary Entries is updated. Jul 11 05:25:04.000656 disk-uuid[635]: Secondary Header is updated. Jul 11 05:25:04.005011 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 05:25:04.009006 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 05:25:04.219065 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 11 05:25:04.219120 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 11 05:25:04.219132 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 11 05:25:04.220014 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 11 05:25:04.221010 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 11 05:25:04.222017 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 11 05:25:04.222092 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 11 05:25:04.223012 kernel: ata3.00: applying bridge limits Jul 11 05:25:04.223026 kernel: ata3.00: configured for UDMA/100 Jul 11 05:25:04.224021 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 11 05:25:04.282011 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 11 05:25:04.282237 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 11 05:25:04.296090 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 11 05:25:04.686878 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 11 05:25:04.689467 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 05:25:04.691754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 05:25:04.694057 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 05:25:04.696824 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 11 05:25:04.723893 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 11 05:25:05.011014 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 05:25:05.011375 disk-uuid[636]: The operation has completed successfully. Jul 11 05:25:05.044291 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 11 05:25:05.044426 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jul 11 05:25:05.074934 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 11 05:25:05.099837 sh[664]: Success Jul 11 05:25:05.119578 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 11 05:25:05.119659 kernel: device-mapper: uevent: version 1.0.3 Jul 11 05:25:05.120866 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 11 05:25:05.132005 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 11 05:25:05.167303 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 11 05:25:05.170467 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 11 05:25:05.186221 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 11 05:25:05.193013 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 11 05:25:05.193048 kernel: BTRFS: device fsid 5947ac9d-360e-47c3-9a17-c6b228910c06 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (676) Jul 11 05:25:05.195806 kernel: BTRFS info (device dm-0): first mount of filesystem 5947ac9d-360e-47c3-9a17-c6b228910c06 Jul 11 05:25:05.195836 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 11 05:25:05.196641 kernel: BTRFS info (device dm-0): using free-space-tree Jul 11 05:25:05.202204 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 11 05:25:05.203496 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 11 05:25:05.205330 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 11 05:25:05.206411 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 11 05:25:05.208234 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 11 05:25:05.241024 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709) Jul 11 05:25:05.241113 kernel: BTRFS info (device vda6): first mount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719 Jul 11 05:25:05.242564 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 05:25:05.242588 kernel: BTRFS info (device vda6): using free-space-tree Jul 11 05:25:05.251015 kernel: BTRFS info (device vda6): last unmount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719 Jul 11 05:25:05.252812 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 11 05:25:05.257495 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 11 05:25:05.426553 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 05:25:05.430248 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 11 05:25:05.440316 ignition[750]: Ignition 2.21.0 Jul 11 05:25:05.440328 ignition[750]: Stage: fetch-offline Jul 11 05:25:05.440365 ignition[750]: no configs at "/usr/lib/ignition/base.d" Jul 11 05:25:05.440374 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 05:25:05.440464 ignition[750]: parsed url from cmdline: "" Jul 11 05:25:05.440468 ignition[750]: no config URL provided Jul 11 05:25:05.440473 ignition[750]: reading system config file "/usr/lib/ignition/user.ign" Jul 11 05:25:05.440481 ignition[750]: no config at "/usr/lib/ignition/user.ign" Jul 11 05:25:05.440513 ignition[750]: op(1): [started] loading QEMU firmware config module Jul 11 05:25:05.440518 ignition[750]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 11 05:25:05.454075 ignition[750]: op(1): [finished] loading QEMU firmware config module Jul 11 05:25:05.454102 ignition[750]: QEMU firmware config was not found. Ignoring... Jul 11 05:25:05.485833 systemd-networkd[850]: lo: Link UP Jul 11 05:25:05.485850 systemd-networkd[850]: lo: Gained carrier Jul 11 05:25:05.487730 systemd-networkd[850]: Enumeration completed Jul 11 05:25:05.487926 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 05:25:05.488098 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 05:25:05.488103 systemd-networkd[850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 05:25:05.489597 systemd-networkd[850]: eth0: Link UP Jul 11 05:25:05.489601 systemd-networkd[850]: eth0: Gained carrier Jul 11 05:25:05.489608 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 05:25:05.491711 systemd[1]: Reached target network.target - Network. Jul 11 05:25:05.508658 ignition[750]: parsing config with SHA512: f840a6a3a962dc1749a549457933e587731936d2ac908e8bfffbe97c7f8421f177b86c919b91a0335e71e3ebe96a6aed62c60733110c83e94d76e3ffdda39a5e Jul 11 05:25:05.513093 systemd-networkd[850]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 05:25:05.513547 unknown[750]: fetched base config from "system" Jul 11 05:25:05.513967 ignition[750]: fetch-offline: fetch-offline passed Jul 11 05:25:05.513558 unknown[750]: fetched user config from "qemu" Jul 11 05:25:05.514064 ignition[750]: Ignition finished successfully Jul 11 05:25:05.518172 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 05:25:05.519964 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 11 05:25:05.523215 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 11 05:25:05.571586 ignition[859]: Ignition 2.21.0 Jul 11 05:25:05.571599 ignition[859]: Stage: kargs Jul 11 05:25:05.571810 ignition[859]: no configs at "/usr/lib/ignition/base.d" Jul 11 05:25:05.571821 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 05:25:05.573678 ignition[859]: kargs: kargs passed Jul 11 05:25:05.573731 ignition[859]: Ignition finished successfully Jul 11 05:25:05.580361 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 11 05:25:05.583506 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
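The fetch-offline messages above show the lookup order: no config on the kernel command line, no /usr/lib/ignition/user.ign, then a probe of the QEMU firmware-config interface, after which the SHA512 of the parsed config is logged. A rough sketch of just the local-file part of that sequence (the paths mirror the ones in the log; the fw_cfg probe and base.d merging are omitted):

import hashlib, json, pathlib

def load_local_ignition(candidates=("/usr/lib/ignition/user.ign",)):
    # Look for a local Ignition config, report the SHA512 of the raw bytes
    # (as the stage does), and parse it as JSON.
    for path in map(pathlib.Path, candidates):
        if path.is_file():
            raw = path.read_bytes()
            print("parsing config with SHA512:", hashlib.sha512(raw).hexdigest())
            return json.loads(raw)
    print("no config URL provided")
    return None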
Jul 11 05:25:05.621548 ignition[867]: Ignition 2.21.0 Jul 11 05:25:05.621566 ignition[867]: Stage: disks Jul 11 05:25:05.621733 ignition[867]: no configs at "/usr/lib/ignition/base.d" Jul 11 05:25:05.621744 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 05:25:05.625081 ignition[867]: disks: disks passed Jul 11 05:25:05.625676 ignition[867]: Ignition finished successfully Jul 11 05:25:05.628539 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 11 05:25:05.629179 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 11 05:25:05.629446 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 11 05:25:05.629753 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 05:25:05.630396 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 05:25:05.630699 systemd[1]: Reached target basic.target - Basic System. Jul 11 05:25:05.632379 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 11 05:25:05.663232 systemd-resolved[263]: Detected conflict on linux IN A 10.0.0.97 Jul 11 05:25:05.663254 systemd-resolved[263]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. Jul 11 05:25:05.666245 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 11 05:25:05.674625 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 11 05:25:05.678956 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 11 05:25:05.800023 kernel: EXT4-fs (vda9): mounted filesystem 68e263c6-913a-4fa8-894f-6e89b186e148 r/w with ordered data mode. Quota mode: none. Jul 11 05:25:05.800766 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 11 05:25:05.802180 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 11 05:25:05.804678 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 05:25:05.806576 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 11 05:25:05.807743 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 11 05:25:05.807781 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 11 05:25:05.807804 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 05:25:05.821168 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 11 05:25:05.822863 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 11 05:25:05.830389 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (885) Jul 11 05:25:05.832333 kernel: BTRFS info (device vda6): first mount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719 Jul 11 05:25:05.832347 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 05:25:05.832357 kernel: BTRFS info (device vda6): using free-space-tree Jul 11 05:25:05.836861 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
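The fsck summary reports usage as used/total pairs: 15 of 553,520 inodes and 52,789 of 553,472 blocks, so the ROOT filesystem is roughly 9.5% full by blocks. A small, purely illustrative helper for turning that summary line into percentages:

import re

def parse_fsck_summary(line):
    # e.g. "ROOT: clean, 15/553520 files, 52789/553472 blocks"
    m = re.search(r"(\d+)/(\d+) files, (\d+)/(\d+) blocks", line)
    if not m:
        return None
    files_used, files_total, blocks_used, blocks_total = map(int, m.groups())
    return {
        "inode_use_pct": 100 * files_used / files_total,
        "block_use_pct": 100 * blocks_used / blocks_total,
    }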
Jul 11 05:25:05.936944 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory Jul 11 05:25:05.942770 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory Jul 11 05:25:05.948028 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory Jul 11 05:25:05.953843 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory Jul 11 05:25:06.045457 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 11 05:25:06.047803 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 11 05:25:06.048969 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 11 05:25:06.073044 kernel: BTRFS info (device vda6): last unmount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719 Jul 11 05:25:06.093158 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 11 05:25:06.162100 ignition[999]: INFO : Ignition 2.21.0 Jul 11 05:25:06.162100 ignition[999]: INFO : Stage: mount Jul 11 05:25:06.163978 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 05:25:06.163978 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 05:25:06.163978 ignition[999]: INFO : mount: mount passed Jul 11 05:25:06.163978 ignition[999]: INFO : Ignition finished successfully Jul 11 05:25:06.170079 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 11 05:25:06.171542 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 11 05:25:06.193510 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 11 05:25:06.197209 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 05:25:06.211483 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011) Jul 11 05:25:06.211515 kernel: BTRFS info (device vda6): first mount of filesystem da2de3c6-95dc-4a43-9a95-74c8b7ce9719 Jul 11 05:25:06.211526 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 05:25:06.213097 kernel: BTRFS info (device vda6): using free-space-tree Jul 11 05:25:06.216953 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 11 05:25:06.254411 ignition[1028]: INFO : Ignition 2.21.0 Jul 11 05:25:06.254411 ignition[1028]: INFO : Stage: files Jul 11 05:25:06.254411 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 05:25:06.254411 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 05:25:06.258739 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping Jul 11 05:25:06.258739 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 11 05:25:06.258739 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 11 05:25:06.262974 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 11 05:25:06.262974 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 11 05:25:06.262974 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 11 05:25:06.262974 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 11 05:25:06.262974 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 11 05:25:06.259858 unknown[1028]: wrote ssh authorized keys file for user: core Jul 11 05:25:06.306008 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 11 05:25:06.507346 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 11 05:25:06.507346 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 11 05:25:06.511246 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 11 05:25:06.511246 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 11 05:25:06.511246 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 11 05:25:06.511246 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 05:25:06.511246 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 05:25:06.511246 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 05:25:06.511246 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 05:25:06.523342 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 05:25:06.523342 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 05:25:06.523342 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 05:25:06.523342 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 05:25:06.523342 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 05:25:06.523342 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 11 05:25:06.788214 systemd-networkd[850]: eth0: Gained IPv6LL Jul 11 05:25:06.841617 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 11 05:25:07.590600 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 11 05:25:07.590600 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 11 05:25:07.594679 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 05:25:07.600524 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 05:25:07.600524 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 11 05:25:07.600524 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 11 05:25:07.605681 ignition[1028]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 05:25:07.605681 ignition[1028]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 05:25:07.605681 ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 11 05:25:07.605681 ignition[1028]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 05:25:07.685332 ignition[1028]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 05:25:07.691249 ignition[1028]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 05:25:07.692902 ignition[1028]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 05:25:07.692902 ignition[1028]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 11 05:25:07.692902 ignition[1028]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 11 05:25:07.692902 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 11 05:25:07.692902 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 05:25:07.692902 ignition[1028]: INFO : files: files passed Jul 11 05:25:07.692902 ignition[1028]: INFO : Ignition finished successfully Jul 11 05:25:07.694922 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 11 05:25:07.698410 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 11 05:25:07.700495 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
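At this point the files stage has written an SSH key for the "core" user, the Helm tarball fetched from get.helm.sh, several YAML files, a Kubernetes sysext image plus its /etc/extensions symlink, and systemd unit presets. For orientation, an Ignition v3 config that produces entries like these has roughly the following shape; the spec version, key material, and unit details below are placeholders, not the config actually used on this machine:

import json

config = {
    "ignition": {"version": "3.4.0"},  # placeholder spec version
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
            }
        ]
    },
    "systemd": {
        # The real config also carries the unit body; omitted here.
        "units": [{"name": "prepare-helm.service", "enabled": True}]
    },
}

print(json.dumps(config, indent=2))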
Jul 11 05:25:07.713524 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 11 05:25:07.713666 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 11 05:25:07.716803 initrd-setup-root-after-ignition[1056]: grep: /sysroot/oem/oem-release: No such file or directory Jul 11 05:25:07.720278 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 05:25:07.720278 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 11 05:25:07.723407 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 05:25:07.722738 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 05:25:07.724278 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 11 05:25:07.728048 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 11 05:25:07.795678 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 11 05:25:07.795828 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 11 05:25:07.798085 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 11 05:25:07.799950 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 11 05:25:07.801901 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 11 05:25:07.802859 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 11 05:25:07.826765 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 05:25:07.830585 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 11 05:25:07.858855 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 11 05:25:07.860205 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 05:25:07.862280 systemd[1]: Stopped target timers.target - Timer Units. Jul 11 05:25:07.864219 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 11 05:25:07.864341 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 05:25:07.866322 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 11 05:25:07.867897 systemd[1]: Stopped target basic.target - Basic System. Jul 11 05:25:07.869780 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 11 05:25:07.871668 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 05:25:07.873531 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 11 05:25:07.875580 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 11 05:25:07.877655 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 11 05:25:07.879584 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 05:25:07.881712 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 11 05:25:07.883564 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 11 05:25:07.885597 systemd[1]: Stopped target swap.target - Swaps. Jul 11 05:25:07.887248 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 11 05:25:07.887348 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jul 11 05:25:07.889346 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 11 05:25:07.890839 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 05:25:07.892770 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 11 05:25:07.892853 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 05:25:07.894842 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 05:25:07.894942 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 11 05:25:07.897000 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 05:25:07.897102 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 05:25:07.898969 systemd[1]: Stopped target paths.target - Path Units. Jul 11 05:25:07.900574 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 05:25:07.900716 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 05:25:07.902996 systemd[1]: Stopped target slices.target - Slice Units. Jul 11 05:25:07.904697 systemd[1]: Stopped target sockets.target - Socket Units. Jul 11 05:25:07.906499 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 05:25:07.906588 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 05:25:07.908356 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 05:25:07.908438 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 05:25:07.910337 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 05:25:07.910449 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 05:25:07.912229 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 05:25:07.912330 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 11 05:25:07.914874 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 11 05:25:07.916850 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 11 05:25:07.917425 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 05:25:07.917578 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 05:25:07.918454 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 05:25:07.918591 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 05:25:07.922684 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 05:25:07.929139 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 11 05:25:07.946670 ignition[1083]: INFO : Ignition 2.21.0 Jul 11 05:25:07.946670 ignition[1083]: INFO : Stage: umount Jul 11 05:25:07.948441 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 05:25:07.948441 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 05:25:07.952069 ignition[1083]: INFO : umount: umount passed Jul 11 05:25:07.952821 ignition[1083]: INFO : Ignition finished successfully Jul 11 05:25:07.955427 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 05:25:07.956044 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 05:25:07.956157 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 11 05:25:07.957285 systemd[1]: Stopped target network.target - Network. 
Jul 11 05:25:07.958542 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 05:25:07.958609 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 11 05:25:07.960559 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 05:25:07.960605 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 11 05:25:07.962566 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 05:25:07.962617 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 11 05:25:07.963297 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 11 05:25:07.963337 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 11 05:25:07.963705 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 11 05:25:07.963949 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 11 05:25:07.971157 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 05:25:07.971323 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 11 05:25:07.975622 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 11 05:25:07.975931 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 11 05:25:07.976004 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 05:25:07.979159 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 11 05:25:07.986846 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 05:25:07.986966 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 11 05:25:07.991580 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 11 05:25:07.991725 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 11 05:25:07.993277 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 05:25:07.993319 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 11 05:25:07.996783 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 11 05:25:07.998461 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 05:25:07.998523 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 05:25:07.999371 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 05:25:07.999419 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 05:25:08.003345 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 05:25:08.003401 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 11 05:25:08.003894 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 05:25:08.007905 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 11 05:25:08.021595 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 11 05:25:08.030111 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 05:25:08.030629 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 05:25:08.030670 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 11 05:25:08.033010 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 05:25:08.033050 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 11 05:25:08.036539 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 05:25:08.036591 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 11 05:25:08.037780 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 05:25:08.037832 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 11 05:25:08.038554 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 05:25:08.038604 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 05:25:08.044932 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 11 05:25:08.045359 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 11 05:25:08.045413 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 05:25:08.049940 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 11 05:25:08.050012 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 05:25:08.053235 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 11 05:25:08.053290 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 05:25:08.056448 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 05:25:08.056504 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 05:25:08.056967 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 05:25:08.057032 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 05:25:08.061298 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 05:25:08.066235 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 11 05:25:08.074081 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 05:25:08.074222 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 11 05:25:08.166654 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 11 05:25:08.166784 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 11 05:25:08.168670 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 11 05:25:08.170251 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 05:25:08.170305 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 11 05:25:08.172846 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 11 05:25:08.191715 systemd[1]: Switching root. Jul 11 05:25:08.237402 systemd-journald[220]: Journal stopped Jul 11 05:25:09.484665 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). 
Jul 11 05:25:09.484735 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 05:25:09.484754 kernel: SELinux: policy capability open_perms=1 Jul 11 05:25:09.484765 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 05:25:09.484776 kernel: SELinux: policy capability always_check_network=0 Jul 11 05:25:09.484792 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 05:25:09.484804 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 05:25:09.484819 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 05:25:09.484830 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 05:25:09.484848 kernel: SELinux: policy capability userspace_initial_context=0 Jul 11 05:25:09.484859 kernel: audit: type=1403 audit(1752211508.705:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 11 05:25:09.484872 systemd[1]: Successfully loaded SELinux policy in 62.129ms. Jul 11 05:25:09.484886 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.041ms. Jul 11 05:25:09.484899 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 11 05:25:09.484912 systemd[1]: Detected virtualization kvm. Jul 11 05:25:09.484924 systemd[1]: Detected architecture x86-64. Jul 11 05:25:09.484935 systemd[1]: Detected first boot. Jul 11 05:25:09.484951 systemd[1]: Initializing machine ID from VM UUID. Jul 11 05:25:09.484963 kernel: Guest personality initialized and is inactive Jul 11 05:25:09.484974 zram_generator::config[1127]: No configuration found. Jul 11 05:25:09.485000 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 11 05:25:09.485011 kernel: Initialized host personality Jul 11 05:25:09.485030 kernel: NET: Registered PF_VSOCK protocol family Jul 11 05:25:09.485042 systemd[1]: Populated /etc with preset unit settings. Jul 11 05:25:09.485055 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 11 05:25:09.485072 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 11 05:25:09.485087 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 11 05:25:09.485099 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 11 05:25:09.485111 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 11 05:25:09.485123 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 11 05:25:09.485135 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 11 05:25:09.485147 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 11 05:25:09.485172 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 11 05:25:09.485185 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 11 05:25:09.485199 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 11 05:25:09.485211 systemd[1]: Created slice user.slice - User and Session Slice. Jul 11 05:25:09.485224 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
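The kernel lines above enumerate the SELinux policy capabilities compiled into the loaded policy. On a running system the same flags can be read back from selinuxfs; the mount point used below is the conventional one and is an assumption of this sketch:

import pathlib

def selinux_policy_capabilities(root="/sys/fs/selinux/policy_capabilities"):
    # Each capability is a file containing "0" or "1".
    caps = {}
    base = pathlib.Path(root)
    if base.is_dir():
        for entry in sorted(base.iterdir()):
            caps[entry.name] = entry.read_text().strip() == "1"
    return caps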
Jul 11 05:25:09.485236 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 05:25:09.485250 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 11 05:25:09.485262 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 11 05:25:09.485275 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 11 05:25:09.485287 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 05:25:09.485305 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 11 05:25:09.485317 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 05:25:09.485329 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 05:25:09.485340 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 11 05:25:09.485352 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 11 05:25:09.485364 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 11 05:25:09.485376 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 11 05:25:09.485388 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 05:25:09.485400 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 05:25:09.485414 systemd[1]: Reached target slices.target - Slice Units. Jul 11 05:25:09.485426 systemd[1]: Reached target swap.target - Swaps. Jul 11 05:25:09.485437 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 11 05:25:09.485450 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 11 05:25:09.485462 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 11 05:25:09.485474 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 05:25:09.485486 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 05:25:09.485498 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 05:25:09.485511 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 11 05:25:09.485525 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 11 05:25:09.485537 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 11 05:25:09.485549 systemd[1]: Mounting media.mount - External Media Directory... Jul 11 05:25:09.485561 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 05:25:09.485573 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 11 05:25:09.485585 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 11 05:25:09.485597 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 11 05:25:09.485610 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 11 05:25:09.485622 systemd[1]: Reached target machines.target - Containers. Jul 11 05:25:09.485636 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 11 05:25:09.485648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 05:25:09.485660 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 05:25:09.485672 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 11 05:25:09.485684 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 05:25:09.485696 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 05:25:09.485708 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 05:25:09.485721 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 11 05:25:09.485735 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 05:25:09.485747 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 05:25:09.485759 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 11 05:25:09.485771 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 11 05:25:09.485783 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 11 05:25:09.485795 systemd[1]: Stopped systemd-fsck-usr.service. Jul 11 05:25:09.485807 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 11 05:25:09.485819 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 05:25:09.485831 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 05:25:09.485845 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 05:25:09.485857 kernel: loop: module loaded Jul 11 05:25:09.485868 kernel: ACPI: bus type drm_connector registered Jul 11 05:25:09.485879 kernel: fuse: init (API version 7.41) Jul 11 05:25:09.485891 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 11 05:25:09.485903 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 11 05:25:09.485936 systemd-journald[1202]: Collecting audit messages is disabled. Jul 11 05:25:09.485959 systemd-journald[1202]: Journal started Jul 11 05:25:09.485994 systemd-journald[1202]: Runtime Journal (/run/log/journal/b8a55951b1e64e76a7435e0ab1a0eba9) is 6M, max 48.5M, 42.4M free. Jul 11 05:25:09.234544 systemd[1]: Queued start job for default target multi-user.target. Jul 11 05:25:09.254568 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 11 05:25:09.255146 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 11 05:25:09.527138 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 05:25:09.527229 systemd[1]: verity-setup.service: Deactivated successfully. Jul 11 05:25:09.529189 systemd[1]: Stopped verity-setup.service. Jul 11 05:25:09.532761 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 05:25:09.537417 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 05:25:09.538310 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jul 11 05:25:09.539438 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 11 05:25:09.540602 systemd[1]: Mounted media.mount - External Media Directory. Jul 11 05:25:09.541685 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 11 05:25:09.542913 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 11 05:25:09.544283 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 11 05:25:09.545557 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 11 05:25:09.547043 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 05:25:09.548688 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 05:25:09.548905 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 11 05:25:09.550366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 05:25:09.550571 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 05:25:09.552175 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 05:25:09.552399 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 05:25:09.553808 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 05:25:09.554045 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 05:25:09.555556 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 05:25:09.555760 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 11 05:25:09.557109 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 05:25:09.557338 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 05:25:09.558709 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 05:25:09.560290 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 05:25:09.561905 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 11 05:25:09.563424 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 11 05:25:09.657238 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 05:25:09.663528 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 05:25:09.666352 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 11 05:25:09.668537 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 11 05:25:09.669729 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 11 05:25:09.669762 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 05:25:09.671801 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 11 05:25:09.680909 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 11 05:25:09.682299 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 05:25:09.683834 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 11 05:25:09.685909 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jul 11 05:25:09.687093 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 05:25:09.688305 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 11 05:25:09.689646 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 05:25:09.691242 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 05:25:09.698094 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 11 05:25:09.701046 systemd-journald[1202]: Time spent on flushing to /var/log/journal/b8a55951b1e64e76a7435e0ab1a0eba9 is 16.588ms for 1064 entries. Jul 11 05:25:09.701046 systemd-journald[1202]: System Journal (/var/log/journal/b8a55951b1e64e76a7435e0ab1a0eba9) is 8M, max 195.6M, 187.6M free. Jul 11 05:25:09.742345 systemd-journald[1202]: Received client request to flush runtime journal. Jul 11 05:25:09.742428 kernel: loop0: detected capacity change from 0 to 224512 Jul 11 05:25:09.710033 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 05:25:09.713053 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 11 05:25:09.714332 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 11 05:25:09.715861 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 11 05:25:09.725177 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 11 05:25:09.733264 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 11 05:25:09.746597 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 11 05:25:09.753041 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jul 11 05:25:09.753078 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jul 11 05:25:09.760588 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 05:25:09.762445 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 05:25:09.766007 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 05:25:09.768599 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 11 05:25:09.775444 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 11 05:25:09.789023 kernel: loop1: detected capacity change from 0 to 114000 Jul 11 05:25:09.813077 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 11 05:25:09.816048 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 05:25:09.820005 kernel: loop2: detected capacity change from 0 to 146488 Jul 11 05:25:09.843594 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jul 11 05:25:09.843938 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jul 11 05:25:09.848731 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 11 05:25:09.858420 kernel: loop3: detected capacity change from 0 to 224512 Jul 11 05:25:09.871024 kernel: loop4: detected capacity change from 0 to 114000 Jul 11 05:25:09.879003 kernel: loop5: detected capacity change from 0 to 146488 Jul 11 05:25:09.886703 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 11 05:25:09.887579 (sd-merge)[1273]: Merged extensions into '/usr'. Jul 11 05:25:09.963375 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)... Jul 11 05:25:09.963394 systemd[1]: Reloading... Jul 11 05:25:10.043010 zram_generator::config[1305]: No configuration found. Jul 11 05:25:10.214407 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 05:25:10.309799 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 05:25:10.309963 systemd[1]: Reloading finished in 346 ms. Jul 11 05:25:10.351958 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 11 05:25:10.376526 systemd[1]: Starting ensure-sysext.service... Jul 11 05:25:10.378004 ldconfig[1242]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 05:25:10.378475 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 05:25:10.386487 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 11 05:25:10.394724 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)... Jul 11 05:25:10.394861 systemd[1]: Reloading... Jul 11 05:25:10.401662 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 11 05:25:10.401704 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 11 05:25:10.402032 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 11 05:25:10.402313 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 11 05:25:10.403205 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 11 05:25:10.403582 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Jul 11 05:25:10.403657 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Jul 11 05:25:10.409571 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 05:25:10.409584 systemd-tmpfiles[1336]: Skipping /boot Jul 11 05:25:10.422584 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 05:25:10.422690 systemd-tmpfiles[1336]: Skipping /boot Jul 11 05:25:10.519389 zram_generator::config[1367]: No configuration found. Jul 11 05:25:10.687077 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 05:25:10.771432 systemd[1]: Reloading finished in 376 ms. Jul 11 05:25:10.808503 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 05:25:10.828066 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
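The (sd-merge) lines record systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, which is why systemd then reloads its unit set. A rough sketch of the discovery step only; the directory list is an assumption for illustration, and systemd-sysext(8) is authoritative:

import pathlib

SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

def find_extension_images():
    # Candidate extensions are *.raw images or plain directory trees.
    found = []
    for d in map(pathlib.Path, SEARCH_DIRS):
        if d.is_dir():
            for entry in sorted(d.iterdir()):
                if entry.suffix == ".raw" or entry.is_dir():
                    found.append(entry)
    return found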
Jul 11 05:25:10.857402 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 11 05:25:10.861835 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 11 05:25:10.871181 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 05:25:10.874692 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 11 05:25:10.876943 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 11 05:25:10.886690 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 05:25:10.887044 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 05:25:10.888703 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 05:25:10.891215 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 05:25:10.895253 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 05:25:10.896422 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 05:25:10.896534 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 11 05:25:10.904807 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 05:25:10.908902 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 11 05:25:10.909995 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 05:25:10.910470 augenrules[1432]: No rules Jul 11 05:25:10.911496 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 11 05:25:10.913595 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 05:25:10.913862 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 11 05:25:10.915705 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 05:25:10.916139 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 05:25:10.918163 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 05:25:10.918757 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 05:25:10.920896 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 05:25:10.921618 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 05:25:10.936510 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 05:25:10.936708 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 05:25:10.938535 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 05:25:10.941043 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 05:25:10.953361 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 11 05:25:10.954775 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 05:25:10.955068 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 11 05:25:10.960258 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 11 05:25:10.961437 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 05:25:10.962857 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 11 05:25:10.964689 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 05:25:10.967071 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 05:25:10.968683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 05:25:10.968896 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 05:25:10.974345 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 11 05:25:10.977371 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 05:25:10.977600 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 05:25:10.987925 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 11 05:25:10.990874 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 05:25:10.993122 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 11 05:25:10.995247 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 05:25:10.996574 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 05:25:10.999566 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 05:25:11.007081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 05:25:11.011336 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 05:25:11.012457 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 05:25:11.012582 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 11 05:25:11.012718 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 05:25:11.012804 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 05:25:11.014390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 05:25:11.014619 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 05:25:11.016543 systemd-udevd[1429]: Using default interface naming scheme 'v255'. Jul 11 05:25:11.017551 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 11 05:25:11.017794 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 05:25:11.020400 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 05:25:11.020619 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 05:25:11.025947 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 05:25:11.026270 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 05:25:11.028071 systemd[1]: Finished ensure-sysext.service. Jul 11 05:25:11.034386 augenrules[1452]: /sbin/augenrules: No change Jul 11 05:25:11.034523 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 05:25:11.034588 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 05:25:11.037093 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 11 05:25:11.039172 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 11 05:25:11.044399 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 05:25:11.047713 augenrules[1483]: No rules Jul 11 05:25:11.057840 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 05:25:11.060458 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 05:25:11.060715 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 11 05:25:11.160249 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 11 05:25:11.299009 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 11 05:25:11.307041 kernel: mousedev: PS/2 mouse device common for all mice Jul 11 05:25:11.307881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 05:25:11.311761 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 11 05:25:11.312006 kernel: ACPI: button: Power Button [PWRF] Jul 11 05:25:11.327642 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 11 05:25:11.327957 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 11 05:25:11.330013 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 11 05:25:11.353033 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 11 05:25:11.458244 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 05:25:11.470559 kernel: kvm_amd: TSC scaling supported Jul 11 05:25:11.470643 kernel: kvm_amd: Nested Virtualization enabled Jul 11 05:25:11.470657 kernel: kvm_amd: Nested Paging enabled Jul 11 05:25:11.470684 kernel: kvm_amd: LBR virtualization supported Jul 11 05:25:11.471963 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 11 05:25:11.472008 kernel: kvm_amd: Virtual GIF supported Jul 11 05:25:11.485336 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 05:25:11.485597 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 05:25:11.490977 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 05:25:11.492687 systemd-resolved[1412]: Positive Trust Anchors: Jul 11 05:25:11.492709 systemd-resolved[1412]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 05:25:11.492739 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 05:25:11.506703 systemd-resolved[1412]: Defaulting to hostname 'linux'. Jul 11 05:25:11.508291 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 05:25:11.509586 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 05:25:11.511965 systemd-networkd[1492]: lo: Link UP Jul 11 05:25:11.511976 systemd-networkd[1492]: lo: Gained carrier Jul 11 05:25:11.513785 systemd-networkd[1492]: Enumeration completed Jul 11 05:25:11.513888 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 05:25:11.514641 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 05:25:11.514654 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 05:25:11.515186 systemd[1]: Reached target network.target - Network. Jul 11 05:25:11.515727 systemd-networkd[1492]: eth0: Link UP Jul 11 05:25:11.516071 systemd-networkd[1492]: eth0: Gained carrier Jul 11 05:25:11.516116 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 05:25:11.517465 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 11 05:25:11.521146 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 11 05:25:11.529056 systemd-networkd[1492]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 05:25:11.550016 kernel: EDAC MC: Ver: 3.0.0 Jul 11 05:25:11.571513 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 11 05:25:11.602931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 05:25:11.613053 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 11 05:25:12.358659 systemd-resolved[1412]: Clock change detected. Flushing caches. Jul 11 05:25:12.358674 systemd-timesyncd[1476]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 11 05:25:12.359037 systemd-timesyncd[1476]: Initial clock synchronization to Fri 2025-07-11 05:25:12.358588 UTC. Jul 11 05:25:12.359528 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 05:25:12.360720 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 11 05:25:12.361971 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 11 05:25:12.363179 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 11 05:25:12.364281 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
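Aside on the lease recorded above: systemd-networkd logs a DHCPv4 address of 10.0.0.97/16 with gateway 10.0.0.1. A minimal sketch (Python standard library only) of what that prefix implies; the two addresses are taken from the log, everything else is plain ipaddress arithmetic.

import ipaddress

lease = ipaddress.ip_interface("10.0.0.97/16")   # address/prefix from the log
gateway = ipaddress.ip_address("10.0.0.1")       # gateway from the log

net = lease.network
print(net)                       # 10.0.0.0/16
print(net.broadcast_address)     # 10.0.255.255
print(net.num_addresses - 2)     # 65534 usable host addresses
print(gateway in net)            # True: the gateway is on-link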
Jul 11 05:25:12.365468 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 05:25:12.365501 systemd[1]: Reached target paths.target - Path Units. Jul 11 05:25:12.366378 systemd[1]: Reached target time-set.target - System Time Set. Jul 11 05:25:12.367530 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 11 05:25:12.368660 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 11 05:25:12.369887 systemd[1]: Reached target timers.target - Timer Units. Jul 11 05:25:12.371746 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 11 05:25:12.374589 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 11 05:25:12.377624 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 11 05:25:12.378989 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 11 05:25:12.380205 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 11 05:25:12.384123 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 11 05:25:12.385466 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 11 05:25:12.387350 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 11 05:25:12.389133 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 05:25:12.390055 systemd[1]: Reached target basic.target - Basic System. Jul 11 05:25:12.390978 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 11 05:25:12.391006 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 11 05:25:12.392041 systemd[1]: Starting containerd.service - containerd container runtime... Jul 11 05:25:12.394825 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 11 05:25:12.397899 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 11 05:25:12.400013 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 11 05:25:12.402160 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 11 05:25:12.403155 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 11 05:25:12.404276 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 11 05:25:12.407827 jq[1566]: false Jul 11 05:25:12.407835 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 11 05:25:12.409230 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 11 05:25:12.411859 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 11 05:25:12.414057 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 11 05:25:12.421834 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 11 05:25:12.425118 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 05:25:12.425925 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jul 11 05:25:12.426954 systemd[1]: Starting update-engine.service - Update Engine... Jul 11 05:25:12.432175 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 11 05:25:12.440759 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Refreshing passwd entry cache Jul 11 05:25:12.439058 oslogin_cache_refresh[1568]: Refreshing passwd entry cache Jul 11 05:25:12.440987 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 11 05:25:12.443057 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 05:25:12.443440 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 11 05:25:12.443945 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 05:25:12.444229 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 11 05:25:12.445105 jq[1580]: true Jul 11 05:25:12.450871 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 05:25:12.451038 extend-filesystems[1567]: Found /dev/vda6 Jul 11 05:25:12.451188 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 11 05:25:12.459116 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Failure getting users, quitting Jul 11 05:25:12.459207 oslogin_cache_refresh[1568]: Failure getting users, quitting Jul 11 05:25:12.462827 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 11 05:25:12.462827 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Refreshing group entry cache Jul 11 05:25:12.461402 oslogin_cache_refresh[1568]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 11 05:25:12.461461 oslogin_cache_refresh[1568]: Refreshing group entry cache Jul 11 05:25:12.465068 update_engine[1579]: I20250711 05:25:12.464985 1579 main.cc:92] Flatcar Update Engine starting Jul 11 05:25:12.468117 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Failure getting groups, quitting Jul 11 05:25:12.468117 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 11 05:25:12.468107 oslogin_cache_refresh[1568]: Failure getting groups, quitting Jul 11 05:25:12.468117 oslogin_cache_refresh[1568]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 11 05:25:12.468495 extend-filesystems[1567]: Found /dev/vda9 Jul 11 05:25:12.469761 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 05:25:12.470660 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 11 05:25:12.471822 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Jul 11 05:25:12.473917 extend-filesystems[1567]: Checking size of /dev/vda9 Jul 11 05:25:12.477138 tar[1588]: linux-amd64/LICENSE Jul 11 05:25:12.477952 tar[1588]: linux-amd64/helm Jul 11 05:25:12.478072 jq[1590]: true Jul 11 05:25:12.505076 extend-filesystems[1567]: Resized partition /dev/vda9 Jul 11 05:25:12.514884 extend-filesystems[1609]: resize2fs 1.47.2 (1-Jan-2025) Jul 11 05:25:12.519069 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 05:25:12.519627 sshd_keygen[1587]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 05:25:12.520926 dbus-daemon[1564]: [system] SELinux support is enabled Jul 11 05:25:12.521100 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 11 05:25:12.525247 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 05:25:12.525272 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 05:25:12.526848 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 05:25:12.526872 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 11 05:25:12.544809 systemd[1]: Started update-engine.service - Update Engine. Jul 11 05:25:12.547774 update_engine[1579]: I20250711 05:25:12.545815 1579 update_check_scheduler.cc:74] Next update check in 8m47s Jul 11 05:25:12.549523 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 11 05:25:12.558639 systemd-logind[1578]: Watching system buttons on /dev/input/event2 (Power Button) Jul 11 05:25:12.558664 systemd-logind[1578]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 11 05:25:12.560580 systemd-logind[1578]: New seat seat0. Jul 11 05:25:12.570769 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 05:25:12.572753 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 05:25:12.584166 systemd[1]: Started systemd-logind.service - User Login Management. Jul 11 05:25:12.598778 extend-filesystems[1609]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 05:25:12.598778 extend-filesystems[1609]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 05:25:12.598778 extend-filesystems[1609]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 05:25:12.602378 extend-filesystems[1567]: Resized filesystem in /dev/vda9 Jul 11 05:25:12.608769 bash[1632]: Updated "/home/core/.ssh/authorized_keys" Jul 11 05:25:12.607457 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 05:25:12.612374 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 05:25:12.612948 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 11 05:25:12.617119 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 05:25:12.620430 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 05:25:12.625564 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 05:25:12.625822 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 05:25:12.630059 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
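For scale, the extend-filesystems/resize2fs entries above report growth of /dev/vda9 from 553472 to 1864699 blocks at a 4k block size. A quick arithmetic sketch converting those block counts to sizes; the block counts are copied from the log.

# EXT4-fs (vda9): resized from 553472 to 1864699 blocks, 4096-byte blocks.
BLOCK = 4096
old_blocks, new_blocks = 553_472, 1_864_699

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")              # ~2.11 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")              # ~7.11 GiB
print(f"grown:  {gib(new_blocks - old_blocks):.2f} GiB") # ~5.00 GiB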
Jul 11 05:25:12.640288 locksmithd[1627]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 05:25:12.650085 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 05:25:12.653032 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 05:25:12.655958 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 11 05:25:12.657146 systemd[1]: Reached target getty.target - Login Prompts. Jul 11 05:25:12.679435 containerd[1591]: time="2025-07-11T05:25:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 11 05:25:12.681131 containerd[1591]: time="2025-07-11T05:25:12.681074527Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 11 05:25:12.689316 containerd[1591]: time="2025-07-11T05:25:12.689252781Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.468µs" Jul 11 05:25:12.689316 containerd[1591]: time="2025-07-11T05:25:12.689288688Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 11 05:25:12.689316 containerd[1591]: time="2025-07-11T05:25:12.689307965Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 11 05:25:12.689523 containerd[1591]: time="2025-07-11T05:25:12.689494715Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 11 05:25:12.689523 containerd[1591]: time="2025-07-11T05:25:12.689514091Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 11 05:25:12.689581 containerd[1591]: time="2025-07-11T05:25:12.689537575Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 11 05:25:12.689621 containerd[1591]: time="2025-07-11T05:25:12.689600794Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 11 05:25:12.689621 containerd[1591]: time="2025-07-11T05:25:12.689613107Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 11 05:25:12.689964 containerd[1591]: time="2025-07-11T05:25:12.689924711Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 11 05:25:12.689964 containerd[1591]: time="2025-07-11T05:25:12.689945941Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 11 05:25:12.689964 containerd[1591]: time="2025-07-11T05:25:12.689956210Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 11 05:25:12.689964 containerd[1591]: time="2025-07-11T05:25:12.689964556Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 11 05:25:12.690078 containerd[1591]: time="2025-07-11T05:25:12.690060866Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs 
type=io.containerd.snapshotter.v1 Jul 11 05:25:12.690308 containerd[1591]: time="2025-07-11T05:25:12.690277553Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 11 05:25:12.690336 containerd[1591]: time="2025-07-11T05:25:12.690309773Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 11 05:25:12.690336 containerd[1591]: time="2025-07-11T05:25:12.690319421Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 11 05:25:12.690374 containerd[1591]: time="2025-07-11T05:25:12.690348455Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 11 05:25:12.690586 containerd[1591]: time="2025-07-11T05:25:12.690563869Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 11 05:25:12.690652 containerd[1591]: time="2025-07-11T05:25:12.690626898Z" level=info msg="metadata content store policy set" policy=shared Jul 11 05:25:12.697304 containerd[1591]: time="2025-07-11T05:25:12.697268200Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 11 05:25:12.697304 containerd[1591]: time="2025-07-11T05:25:12.697305430Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 11 05:25:12.697373 containerd[1591]: time="2025-07-11T05:25:12.697329926Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 11 05:25:12.697373 containerd[1591]: time="2025-07-11T05:25:12.697342279Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 11 05:25:12.697373 containerd[1591]: time="2025-07-11T05:25:12.697353560Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 11 05:25:12.697373 containerd[1591]: time="2025-07-11T05:25:12.697362677Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 11 05:25:12.697449 containerd[1591]: time="2025-07-11T05:25:12.697378387Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 11 05:25:12.697449 containerd[1591]: time="2025-07-11T05:25:12.697394537Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 11 05:25:12.697449 containerd[1591]: time="2025-07-11T05:25:12.697405297Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 11 05:25:12.697449 containerd[1591]: time="2025-07-11T05:25:12.697415656Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 11 05:25:12.697449 containerd[1591]: time="2025-07-11T05:25:12.697426236Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 11 05:25:12.697449 containerd[1591]: time="2025-07-11T05:25:12.697438930Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 11 05:25:12.697556 containerd[1591]: time="2025-07-11T05:25:12.697545199Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 11 
05:25:12.697577 containerd[1591]: time="2025-07-11T05:25:12.697563634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 11 05:25:12.697598 containerd[1591]: time="2025-07-11T05:25:12.697578321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 11 05:25:12.697598 containerd[1591]: time="2025-07-11T05:25:12.697589022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 11 05:25:12.697634 containerd[1591]: time="2025-07-11T05:25:12.697598650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 11 05:25:12.697634 containerd[1591]: time="2025-07-11T05:25:12.697609540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 11 05:25:12.697634 containerd[1591]: time="2025-07-11T05:25:12.697620280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 11 05:25:12.697634 containerd[1591]: time="2025-07-11T05:25:12.697634016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 11 05:25:12.697714 containerd[1591]: time="2025-07-11T05:25:12.697644986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 11 05:25:12.697714 containerd[1591]: time="2025-07-11T05:25:12.697656358Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 11 05:25:12.697714 containerd[1591]: time="2025-07-11T05:25:12.697666998Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 11 05:25:12.697803 containerd[1591]: time="2025-07-11T05:25:12.697723444Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 11 05:25:12.697803 containerd[1591]: time="2025-07-11T05:25:12.697766033Z" level=info msg="Start snapshots syncer" Jul 11 05:25:12.697803 containerd[1591]: time="2025-07-11T05:25:12.697796100Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 11 05:25:12.698075 containerd[1591]: time="2025-07-11T05:25:12.698022274Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 11 05:25:12.698075 containerd[1591]: time="2025-07-11T05:25:12.698075544Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 11 05:25:12.698213 containerd[1591]: time="2025-07-11T05:25:12.698133472Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 11 05:25:12.698234 containerd[1591]: time="2025-07-11T05:25:12.698222319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 11 05:25:12.698254 containerd[1591]: time="2025-07-11T05:25:12.698239371Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 11 05:25:12.698275 containerd[1591]: time="2025-07-11T05:25:12.698258567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 11 05:25:12.698275 containerd[1591]: time="2025-07-11T05:25:12.698270449Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 11 05:25:12.698318 containerd[1591]: time="2025-07-11T05:25:12.698289565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 11 05:25:12.698318 containerd[1591]: time="2025-07-11T05:25:12.698304583Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 11 05:25:12.698318 containerd[1591]: time="2025-07-11T05:25:12.698315093Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 11 05:25:12.698376 containerd[1591]: time="2025-07-11T05:25:12.698335661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 11 05:25:12.698376 containerd[1591]: 
time="2025-07-11T05:25:12.698346191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 11 05:25:12.698376 containerd[1591]: time="2025-07-11T05:25:12.698356601Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 11 05:25:12.698434 containerd[1591]: time="2025-07-11T05:25:12.698391616Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 11 05:25:12.698434 containerd[1591]: time="2025-07-11T05:25:12.698404230Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 11 05:25:12.698434 containerd[1591]: time="2025-07-11T05:25:12.698412696Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 11 05:25:12.698434 containerd[1591]: time="2025-07-11T05:25:12.698421803Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 11 05:25:12.698434 containerd[1591]: time="2025-07-11T05:25:12.698429227Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 11 05:25:12.698577 containerd[1591]: time="2025-07-11T05:25:12.698437973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 11 05:25:12.698577 containerd[1591]: time="2025-07-11T05:25:12.698448583Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 11 05:25:12.698577 containerd[1591]: time="2025-07-11T05:25:12.698464263Z" level=info msg="runtime interface created" Jul 11 05:25:12.698577 containerd[1591]: time="2025-07-11T05:25:12.698469783Z" level=info msg="created NRI interface" Jul 11 05:25:12.698577 containerd[1591]: time="2025-07-11T05:25:12.698477287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 11 05:25:12.698577 containerd[1591]: time="2025-07-11T05:25:12.698490061Z" level=info msg="Connect containerd service" Jul 11 05:25:12.698577 containerd[1591]: time="2025-07-11T05:25:12.698510820Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 05:25:12.699581 containerd[1591]: time="2025-07-11T05:25:12.699544218Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 05:25:12.789423 containerd[1591]: time="2025-07-11T05:25:12.789357968Z" level=info msg="Start subscribing containerd event" Jul 11 05:25:12.789548 containerd[1591]: time="2025-07-11T05:25:12.789464057Z" level=info msg="Start recovering state" Jul 11 05:25:12.789700 containerd[1591]: time="2025-07-11T05:25:12.789677668Z" level=info msg="Start event monitor" Jul 11 05:25:12.789760 containerd[1591]: time="2025-07-11T05:25:12.789711441Z" level=info msg="Start cni network conf syncer for default" Jul 11 05:25:12.789760 containerd[1591]: time="2025-07-11T05:25:12.789719917Z" level=info msg="Start streaming server" Jul 11 05:25:12.789813 containerd[1591]: time="2025-07-11T05:25:12.789789738Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 11 05:25:12.789813 containerd[1591]: 
time="2025-07-11T05:25:12.789799065Z" level=info msg="runtime interface starting up..." Jul 11 05:25:12.789813 containerd[1591]: time="2025-07-11T05:25:12.789805267Z" level=info msg="starting plugins..." Jul 11 05:25:12.790192 containerd[1591]: time="2025-07-11T05:25:12.789943366Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 11 05:25:12.790192 containerd[1591]: time="2025-07-11T05:25:12.790062850Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 05:25:12.790192 containerd[1591]: time="2025-07-11T05:25:12.790142750Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 05:25:12.791697 containerd[1591]: time="2025-07-11T05:25:12.790936268Z" level=info msg="containerd successfully booted in 0.112223s" Jul 11 05:25:12.791071 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 05:25:12.837366 tar[1588]: linux-amd64/README.md Jul 11 05:25:12.868473 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 05:25:14.125014 systemd-networkd[1492]: eth0: Gained IPv6LL Jul 11 05:25:14.128986 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 05:25:14.131308 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 05:25:14.134550 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 05:25:14.137451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:25:14.140094 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 05:25:14.170936 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 05:25:14.179879 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 05:25:14.180167 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 05:25:14.182059 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 05:25:15.451595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:25:15.453626 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 05:25:15.455121 systemd[1]: Startup finished in 3.797s (kernel) + 6.073s (initrd) + 6.064s (userspace) = 15.934s. Jul 11 05:25:15.466383 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 05:25:16.341646 kubelet[1700]: E0711 05:25:16.341545 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 05:25:16.345400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 05:25:16.345586 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 05:25:16.346019 systemd[1]: kubelet.service: Consumed 2.015s CPU time, 265.7M memory peak. Jul 11 05:25:16.514774 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 05:25:16.516035 systemd[1]: Started sshd@0-10.0.0.97:22-10.0.0.1:40064.service - OpenSSH per-connection server daemon (10.0.0.1:40064). 
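The "Startup finished" line above gives per-phase times and a total. A tiny check that the reported phases add up to the reported total, and what share of boot each phase takes; all numbers are copied from the log.

phases = {"kernel": 3.797, "initrd": 6.073, "userspace": 6.064}
total = sum(phases.values())
assert abs(total - 15.934) < 1e-9   # matches the logged total
for name, secs in phases.items():
    print(f"{name:9s} {secs:6.3f}s  ({secs / total:5.1%})")
print(f"total     {total:6.3f}s")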
Jul 11 05:25:16.700713 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 40064 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc Jul 11 05:25:16.702976 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:25:16.717258 systemd-logind[1578]: New session 1 of user core. Jul 11 05:25:16.718833 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 05:25:16.720232 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 05:25:16.753603 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 05:25:16.756048 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 05:25:16.787081 (systemd)[1718]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 05:25:16.790198 systemd-logind[1578]: New session c1 of user core. Jul 11 05:25:16.956614 systemd[1718]: Queued start job for default target default.target. Jul 11 05:25:16.977493 systemd[1718]: Created slice app.slice - User Application Slice. Jul 11 05:25:16.977523 systemd[1718]: Reached target paths.target - Paths. Jul 11 05:25:16.977571 systemd[1718]: Reached target timers.target - Timers. Jul 11 05:25:16.979544 systemd[1718]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 05:25:17.022504 systemd[1718]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 05:25:17.022676 systemd[1718]: Reached target sockets.target - Sockets. Jul 11 05:25:17.022757 systemd[1718]: Reached target basic.target - Basic System. Jul 11 05:25:17.022820 systemd[1718]: Reached target default.target - Main User Target. Jul 11 05:25:17.022862 systemd[1718]: Startup finished in 223ms. Jul 11 05:25:17.023143 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 05:25:17.024857 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 05:25:17.095225 systemd[1]: Started sshd@1-10.0.0.97:22-10.0.0.1:40076.service - OpenSSH per-connection server daemon (10.0.0.1:40076). Jul 11 05:25:17.144159 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 40076 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc Jul 11 05:25:17.146099 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:25:17.150737 systemd-logind[1578]: New session 2 of user core. Jul 11 05:25:17.160853 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 05:25:17.215572 sshd[1732]: Connection closed by 10.0.0.1 port 40076 Jul 11 05:25:17.215883 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jul 11 05:25:17.228939 systemd[1]: sshd@1-10.0.0.97:22-10.0.0.1:40076.service: Deactivated successfully. Jul 11 05:25:17.230847 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 05:25:17.231674 systemd-logind[1578]: Session 2 logged out. Waiting for processes to exit. Jul 11 05:25:17.234129 systemd[1]: Started sshd@2-10.0.0.97:22-10.0.0.1:40080.service - OpenSSH per-connection server daemon (10.0.0.1:40080). Jul 11 05:25:17.234801 systemd-logind[1578]: Removed session 2. Jul 11 05:25:17.290920 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 40080 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc Jul 11 05:25:17.292758 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:25:17.297549 systemd-logind[1578]: New session 3 of user core. 
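The "Accepted publickey ... RSA SHA256:AZqy..." entries use OpenSSH's fingerprint format: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A sketch of that derivation; the helper name is ours and the key material shown is a placeholder, since the log records only the fingerprint, not the key.

import base64
import hashlib

def ssh_fingerprint(authorized_keys_line: str) -> str:
    # authorized_keys format: "<type> <base64-blob> [comment]"
    blob_b64 = authorized_keys_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Usage (hypothetical key):
# print(ssh_fingerprint("ssh-rsa AAAAB3NzaC1yc2E... core@host"))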
Jul 11 05:25:17.306917 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 05:25:17.358411 sshd[1741]: Connection closed by 10.0.0.1 port 40080 Jul 11 05:25:17.358898 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Jul 11 05:25:17.375876 systemd[1]: sshd@2-10.0.0.97:22-10.0.0.1:40080.service: Deactivated successfully. Jul 11 05:25:17.378063 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 05:25:17.378803 systemd-logind[1578]: Session 3 logged out. Waiting for processes to exit. Jul 11 05:25:17.381689 systemd[1]: Started sshd@3-10.0.0.97:22-10.0.0.1:40090.service - OpenSSH per-connection server daemon (10.0.0.1:40090). Jul 11 05:25:17.382303 systemd-logind[1578]: Removed session 3. Jul 11 05:25:17.442420 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 40090 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc Jul 11 05:25:17.444311 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:25:17.449547 systemd-logind[1578]: New session 4 of user core. Jul 11 05:25:17.463009 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 05:25:17.518597 sshd[1750]: Connection closed by 10.0.0.1 port 40090 Jul 11 05:25:17.519156 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Jul 11 05:25:17.528575 systemd[1]: sshd@3-10.0.0.97:22-10.0.0.1:40090.service: Deactivated successfully. Jul 11 05:25:17.530510 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 05:25:17.531270 systemd-logind[1578]: Session 4 logged out. Waiting for processes to exit. Jul 11 05:25:17.533095 systemd-logind[1578]: Removed session 4. Jul 11 05:25:17.534226 systemd[1]: Started sshd@4-10.0.0.97:22-10.0.0.1:40092.service - OpenSSH per-connection server daemon (10.0.0.1:40092). Jul 11 05:25:17.600219 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 40092 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc Jul 11 05:25:17.602254 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:25:17.607342 systemd-logind[1578]: New session 5 of user core. Jul 11 05:25:17.621942 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 05:25:17.680298 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 05:25:17.680613 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:25:17.693228 sudo[1760]: pam_unix(sudo:session): session closed for user root Jul 11 05:25:17.695248 sshd[1759]: Connection closed by 10.0.0.1 port 40092 Jul 11 05:25:17.695823 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Jul 11 05:25:17.705320 systemd[1]: sshd@4-10.0.0.97:22-10.0.0.1:40092.service: Deactivated successfully. Jul 11 05:25:17.707010 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 05:25:17.707719 systemd-logind[1578]: Session 5 logged out. Waiting for processes to exit. Jul 11 05:25:17.710457 systemd[1]: Started sshd@5-10.0.0.97:22-10.0.0.1:40106.service - OpenSSH per-connection server daemon (10.0.0.1:40106). Jul 11 05:25:17.711086 systemd-logind[1578]: Removed session 5. 
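Sessions 2 through 5 above are each opened and closed within well under a second. Purely as an illustration of reading this journal, a sketch that pairs pam_unix "session opened"/"session closed" events by the sshd-session PID to measure session lifetimes; the regexes and helper names are ours, the timestamp format is the one used in this log, and the year is assumed because the prefix omits it.

import re
from datetime import datetime

OPEN = re.compile(r"(\w{3} \d+ [\d:.]+) .*?sshd-session\[(\d+)\]: .*session opened")
CLOSE = re.compile(r"(\w{3} \d+ [\d:.]+) .*?sshd-session\[(\d+)\]: .*session closed")

def parse_ts(ts: str) -> datetime:
    # "Jul 11 05:25:17.358898" -> datetime (year assumed to be 2025)
    return datetime.strptime(ts + " 2025", "%b %d %H:%M:%S.%f %Y")

def session_lengths(lines):
    opened = {}
    for line in lines:
        if m := OPEN.search(line):
            opened[m.group(2)] = parse_ts(m.group(1))
        elif (m := CLOSE.search(line)) and m.group(2) in opened:
            start = opened.pop(m.group(2))
            yield m.group(2), (parse_ts(m.group(1)) - start).total_seconds()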
Jul 11 05:25:17.769435 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 40106 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc Jul 11 05:25:17.771230 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:25:17.776515 systemd-logind[1578]: New session 6 of user core. Jul 11 05:25:17.791023 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 05:25:17.846041 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 05:25:17.846414 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:25:18.127363 sudo[1771]: pam_unix(sudo:session): session closed for user root Jul 11 05:25:18.135356 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 11 05:25:18.135747 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:25:18.148074 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 11 05:25:18.203813 augenrules[1793]: No rules Jul 11 05:25:18.205798 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 05:25:18.206126 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 11 05:25:18.207385 sudo[1770]: pam_unix(sudo:session): session closed for user root Jul 11 05:25:18.209068 sshd[1769]: Connection closed by 10.0.0.1 port 40106 Jul 11 05:25:18.209369 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Jul 11 05:25:18.226516 systemd[1]: sshd@5-10.0.0.97:22-10.0.0.1:40106.service: Deactivated successfully. Jul 11 05:25:18.228677 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 05:25:18.229404 systemd-logind[1578]: Session 6 logged out. Waiting for processes to exit. Jul 11 05:25:18.232262 systemd[1]: Started sshd@6-10.0.0.97:22-10.0.0.1:55752.service - OpenSSH per-connection server daemon (10.0.0.1:55752). Jul 11 05:25:18.232851 systemd-logind[1578]: Removed session 6. Jul 11 05:25:18.290880 sshd[1802]: Accepted publickey for core from 10.0.0.1 port 55752 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc Jul 11 05:25:18.292650 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:25:18.297266 systemd-logind[1578]: New session 7 of user core. Jul 11 05:25:18.307013 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 05:25:18.362361 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 05:25:18.362820 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:25:19.036804 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 11 05:25:19.050024 (dockerd)[1827]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 05:25:19.612655 dockerd[1827]: time="2025-07-11T05:25:19.612531597Z" level=info msg="Starting up" Jul 11 05:25:19.613686 dockerd[1827]: time="2025-07-11T05:25:19.613639665Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 11 05:25:19.636010 dockerd[1827]: time="2025-07-11T05:25:19.635955056Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 11 05:25:20.060279 dockerd[1827]: time="2025-07-11T05:25:20.060183594Z" level=info msg="Loading containers: start." Jul 11 05:25:20.073775 kernel: Initializing XFRM netlink socket Jul 11 05:25:20.442029 systemd-networkd[1492]: docker0: Link UP Jul 11 05:25:20.448002 dockerd[1827]: time="2025-07-11T05:25:20.447963532Z" level=info msg="Loading containers: done." Jul 11 05:25:20.463892 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck660412517-merged.mount: Deactivated successfully. Jul 11 05:25:20.465675 dockerd[1827]: time="2025-07-11T05:25:20.465613195Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 05:25:20.465756 dockerd[1827]: time="2025-07-11T05:25:20.465743860Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 11 05:25:20.465860 dockerd[1827]: time="2025-07-11T05:25:20.465836383Z" level=info msg="Initializing buildkit" Jul 11 05:25:20.495458 dockerd[1827]: time="2025-07-11T05:25:20.495427026Z" level=info msg="Completed buildkit initialization" Jul 11 05:25:20.501815 dockerd[1827]: time="2025-07-11T05:25:20.501764689Z" level=info msg="Daemon has completed initialization" Jul 11 05:25:20.501938 dockerd[1827]: time="2025-07-11T05:25:20.501857162Z" level=info msg="API listen on /run/docker.sock" Jul 11 05:25:20.502070 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 05:25:21.440465 containerd[1591]: time="2025-07-11T05:25:21.440401362Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 11 05:25:22.127918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount812983052.mount: Deactivated successfully. 
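dockerd logs "Starting up" at 05:25:19.612 and "Daemon has completed initialization" at 05:25:20.501, the same kind of self-reported startup interval containerd gave earlier ("successfully booted in 0.112223s"). A small sketch computing the docker interval from the two logged timestamps; nanoseconds are truncated to microseconds because datetime parses at most six fractional digits.

from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%S.%f"
start = datetime.strptime("2025-07-11T05:25:19.612531", fmt)  # "Starting up"
done = datetime.strptime("2025-07-11T05:25:20.501764", fmt)   # "Daemon has completed initialization"
print(f"dockerd init took {(done - start).total_seconds():.3f}s")  # ~0.889s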
Jul 11 05:25:23.877345 containerd[1591]: time="2025-07-11T05:25:23.877268668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:23.878155 containerd[1591]: time="2025-07-11T05:25:23.878087423Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 11 05:25:23.879388 containerd[1591]: time="2025-07-11T05:25:23.879339360Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:23.882492 containerd[1591]: time="2025-07-11T05:25:23.882461524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:23.883618 containerd[1591]: time="2025-07-11T05:25:23.883552851Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.443099221s" Jul 11 05:25:23.883618 containerd[1591]: time="2025-07-11T05:25:23.883609086Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 11 05:25:23.884644 containerd[1591]: time="2025-07-11T05:25:23.884587821Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 11 05:25:25.469939 containerd[1591]: time="2025-07-11T05:25:25.469859966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:25.470496 containerd[1591]: time="2025-07-11T05:25:25.470432289Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 11 05:25:25.471623 containerd[1591]: time="2025-07-11T05:25:25.471585852Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:25.475009 containerd[1591]: time="2025-07-11T05:25:25.474955150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:25.475778 containerd[1591]: time="2025-07-11T05:25:25.475712590Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.591086277s" Jul 11 05:25:25.475846 containerd[1591]: time="2025-07-11T05:25:25.475779756Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 11 05:25:25.476493 
containerd[1591]: time="2025-07-11T05:25:25.476441386Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 11 05:25:26.615011 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 05:25:26.617133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:25:26.968629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:25:26.995280 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 05:25:27.407498 kubelet[2114]: E0711 05:25:27.407412 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 05:25:27.415389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 05:25:27.415605 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 05:25:27.416023 systemd[1]: kubelet.service: Consumed 397ms CPU time, 111.1M memory peak. Jul 11 05:25:27.447526 containerd[1591]: time="2025-07-11T05:25:27.447469654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:27.521871 containerd[1591]: time="2025-07-11T05:25:27.521802207Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 11 05:25:27.523308 containerd[1591]: time="2025-07-11T05:25:27.523236266Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:27.526546 containerd[1591]: time="2025-07-11T05:25:27.526511147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:27.527510 containerd[1591]: time="2025-07-11T05:25:27.527463723Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 2.050981199s" Jul 11 05:25:27.527510 containerd[1591]: time="2025-07-11T05:25:27.527507344Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 11 05:25:27.528135 containerd[1591]: time="2025-07-11T05:25:27.528091530Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 11 05:25:28.976947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2673368298.mount: Deactivated successfully. 
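Both kubelet start attempts so far exit with "open /var/lib/kubelet/config.yaml: no such file or directory"; on a kubeadm-provisioned node that file is typically written later, by kubeadm init/join. Purely as an illustration, not what this host actually does, a sketch that checks for the file and drops in a minimal KubeletConfiguration stub; apiVersion, kind and cgroupDriver are standard fields of the kubelet.config.k8s.io/v1beta1 schema, but the choice of values and the helper name are assumptions.

from pathlib import Path

CONFIG = Path("/var/lib/kubelet/config.yaml")   # path from the log

MINIMAL = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

def ensure_kubelet_config() -> bool:
    # Returns True if a stub config had to be written.
    if CONFIG.exists():
        return False
    CONFIG.parent.mkdir(parents=True, exist_ok=True)
    CONFIG.write_text(MINIMAL)
    return True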
Jul 11 05:25:29.875002 containerd[1591]: time="2025-07-11T05:25:29.874922768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:29.875815 containerd[1591]: time="2025-07-11T05:25:29.875766339Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 11 05:25:29.877241 containerd[1591]: time="2025-07-11T05:25:29.877212551Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:29.879200 containerd[1591]: time="2025-07-11T05:25:29.879166735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:29.879609 containerd[1591]: time="2025-07-11T05:25:29.879579349Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.351458204s" Jul 11 05:25:29.879645 containerd[1591]: time="2025-07-11T05:25:29.879605658Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 11 05:25:29.880208 containerd[1591]: time="2025-07-11T05:25:29.880181187Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 05:25:30.458029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2917889527.mount: Deactivated successfully. 
Jul 11 05:25:31.346500 containerd[1591]: time="2025-07-11T05:25:31.346430084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:31.347518 containerd[1591]: time="2025-07-11T05:25:31.347462680Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 11 05:25:31.348876 containerd[1591]: time="2025-07-11T05:25:31.348823732Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:31.351396 containerd[1591]: time="2025-07-11T05:25:31.351351381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:31.352423 containerd[1591]: time="2025-07-11T05:25:31.352389267Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.472177683s" Jul 11 05:25:31.352467 containerd[1591]: time="2025-07-11T05:25:31.352424653Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 11 05:25:31.352990 containerd[1591]: time="2025-07-11T05:25:31.352949327Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 05:25:32.456496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846922706.mount: Deactivated successfully. 
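Each "Pulled image" line above reports an image size and the elapsed pull time. A short sketch turning the pairs logged so far into throughput figures; all sizes and durations are copied from the log.

pulls = {
    "kube-apiserver:v1.32.6":          (28_795_845, 2.443099221),
    "kube-controller-manager:v1.32.6": (26_385_746, 1.591086277),
    "kube-scheduler:v1.32.6":          (20_778_768, 2.050981199),
    "kube-proxy:v1.32.6":              (30_894_382, 2.351458204),
    "coredns:v1.11.3":                 (18_562_039, 1.472177683),
}
for image, (size_bytes, secs) in pulls.items():
    print(f"{image:34s} {size_bytes / secs / 2**20:6.2f} MiB/s")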
Jul 11 05:25:32.463024 containerd[1591]: time="2025-07-11T05:25:32.462960558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 05:25:32.463800 containerd[1591]: time="2025-07-11T05:25:32.463754176Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 11 05:25:32.465050 containerd[1591]: time="2025-07-11T05:25:32.464977239Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 05:25:32.467185 containerd[1591]: time="2025-07-11T05:25:32.467127581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 05:25:32.467878 containerd[1591]: time="2025-07-11T05:25:32.467830729Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.114850254s" Jul 11 05:25:32.467878 containerd[1591]: time="2025-07-11T05:25:32.467866987Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 11 05:25:32.468417 containerd[1591]: time="2025-07-11T05:25:32.468389828Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 11 05:25:33.815477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687194771.mount: Deactivated successfully. 
Jul 11 05:25:35.612262 containerd[1591]: time="2025-07-11T05:25:35.612180434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:35.612903 containerd[1591]: time="2025-07-11T05:25:35.612853566Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 11 05:25:35.614086 containerd[1591]: time="2025-07-11T05:25:35.614058476Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:35.616525 containerd[1591]: time="2025-07-11T05:25:35.616461201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:35.617430 containerd[1591]: time="2025-07-11T05:25:35.617397015Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.148979796s" Jul 11 05:25:35.617492 containerd[1591]: time="2025-07-11T05:25:35.617430348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 11 05:25:37.480683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 11 05:25:37.482348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:25:37.697938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:25:37.708994 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 05:25:37.902014 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:25:37.962262 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 05:25:37.962552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:25:37.962787 systemd[1]: kubelet.service: Consumed 417ms CPU time, 106.6M memory peak. Jul 11 05:25:37.965157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:25:37.990447 systemd[1]: Reload requested from client PID 2287 ('systemctl') (unit session-7.scope)... Jul 11 05:25:37.990462 systemd[1]: Reloading... Jul 11 05:25:38.077841 zram_generator::config[2329]: No configuration found. Jul 11 05:25:38.329910 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 05:25:38.446219 systemd[1]: Reloading finished in 455 ms. Jul 11 05:25:38.515888 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 05:25:38.515997 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 05:25:38.516318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:25:38.516366 systemd[1]: kubelet.service: Consumed 154ms CPU time, 98.3M memory peak. Jul 11 05:25:38.517986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 11 05:25:38.689397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:25:38.704069 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 05:25:38.746018 kubelet[2377]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 05:25:38.746018 kubelet[2377]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 05:25:38.746018 kubelet[2377]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 05:25:38.746388 kubelet[2377]: I0711 05:25:38.746081 2377 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 05:25:39.164770 kubelet[2377]: I0711 05:25:39.164298 2377 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 05:25:39.164770 kubelet[2377]: I0711 05:25:39.164352 2377 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 05:25:39.165392 kubelet[2377]: I0711 05:25:39.165353 2377 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 05:25:39.188403 kubelet[2377]: I0711 05:25:39.188355 2377 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 05:25:39.189298 kubelet[2377]: E0711 05:25:39.189259 2377 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:25:39.194240 kubelet[2377]: I0711 05:25:39.194219 2377 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 05:25:39.199409 kubelet[2377]: I0711 05:25:39.199373 2377 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 05:25:39.201051 kubelet[2377]: I0711 05:25:39.201005 2377 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 05:25:39.201238 kubelet[2377]: I0711 05:25:39.201036 2377 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 05:25:39.201238 kubelet[2377]: I0711 05:25:39.201224 2377 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 05:25:39.201238 kubelet[2377]: I0711 05:25:39.201232 2377 container_manager_linux.go:304] "Creating device plugin manager" Jul 11 05:25:39.201386 kubelet[2377]: I0711 05:25:39.201374 2377 state_mem.go:36] "Initialized new in-memory state store" Jul 11 05:25:39.205812 kubelet[2377]: I0711 05:25:39.205768 2377 kubelet.go:446] "Attempting to sync node with API server" Jul 11 05:25:39.205812 kubelet[2377]: I0711 05:25:39.205810 2377 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 05:25:39.205894 kubelet[2377]: I0711 05:25:39.205848 2377 kubelet.go:352] "Adding apiserver pod source" Jul 11 05:25:39.205894 kubelet[2377]: I0711 05:25:39.205866 2377 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 05:25:39.207339 kubelet[2377]: W0711 05:25:39.207241 2377 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 11 05:25:39.207339 kubelet[2377]: E0711 05:25:39.207302 2377 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:25:39.208060 kubelet[2377]: W0711 05:25:39.208016 2377 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 11 05:25:39.208120 kubelet[2377]: E0711 05:25:39.208076 2377 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:25:39.209149 kubelet[2377]: I0711 05:25:39.209047 2377 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 11 05:25:39.209445 kubelet[2377]: I0711 05:25:39.209419 2377 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 05:25:39.209513 kubelet[2377]: W0711 05:25:39.209500 2377 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 05:25:39.211199 kubelet[2377]: I0711 05:25:39.211170 2377 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 05:25:39.211257 kubelet[2377]: I0711 05:25:39.211211 2377 server.go:1287] "Started kubelet" Jul 11 05:25:39.213011 kubelet[2377]: I0711 05:25:39.212571 2377 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 05:25:39.214054 kubelet[2377]: I0711 05:25:39.213438 2377 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 05:25:39.214054 kubelet[2377]: I0711 05:25:39.213827 2377 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 05:25:39.214054 kubelet[2377]: I0711 05:25:39.213902 2377 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 05:25:39.215452 kubelet[2377]: I0711 05:25:39.214951 2377 server.go:479] "Adding debug handlers to kubelet server" Jul 11 05:25:39.216292 kubelet[2377]: I0711 05:25:39.216273 2377 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 05:25:39.217147 kubelet[2377]: E0711 05:25:39.217007 2377 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:25:39.217147 kubelet[2377]: I0711 05:25:39.217053 2377 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 05:25:39.217147 kubelet[2377]: E0711 05:25:39.215892 2377 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.97:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18511b1c7e05eb5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 05:25:39.211184987 +0000 UTC m=+0.503191425,LastTimestamp:2025-07-11 05:25:39.211184987 +0000 UTC m=+0.503191425,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 05:25:39.217351 kubelet[2377]: 
I0711 05:25:39.217321 2377 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 05:25:39.217424 kubelet[2377]: I0711 05:25:39.217400 2377 reconciler.go:26] "Reconciler: start to sync state" Jul 11 05:25:39.218091 kubelet[2377]: W0711 05:25:39.218044 2377 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 11 05:25:39.218146 kubelet[2377]: E0711 05:25:39.218109 2377 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:25:39.218424 kubelet[2377]: E0711 05:25:39.218388 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="200ms" Jul 11 05:25:39.218965 kubelet[2377]: E0711 05:25:39.218935 2377 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 05:25:39.219549 kubelet[2377]: I0711 05:25:39.219522 2377 factory.go:221] Registration of the containerd container factory successfully Jul 11 05:25:39.219549 kubelet[2377]: I0711 05:25:39.219540 2377 factory.go:221] Registration of the systemd container factory successfully Jul 11 05:25:39.219678 kubelet[2377]: I0711 05:25:39.219615 2377 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 05:25:39.229207 kubelet[2377]: I0711 05:25:39.228890 2377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 05:25:39.231948 kubelet[2377]: I0711 05:25:39.231930 2377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 05:25:39.232038 kubelet[2377]: I0711 05:25:39.232027 2377 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 05:25:39.232119 kubelet[2377]: I0711 05:25:39.232108 2377 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 11 05:25:39.232182 kubelet[2377]: I0711 05:25:39.232172 2377 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 05:25:39.232308 kubelet[2377]: E0711 05:25:39.232291 2377 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 05:25:39.234076 kubelet[2377]: W0711 05:25:39.233985 2377 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 11 05:25:39.234076 kubelet[2377]: E0711 05:25:39.234045 2377 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:25:39.236603 kubelet[2377]: I0711 05:25:39.236572 2377 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 05:25:39.236603 kubelet[2377]: I0711 05:25:39.236595 2377 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 05:25:39.236680 kubelet[2377]: I0711 05:25:39.236618 2377 state_mem.go:36] "Initialized new in-memory state store" Jul 11 05:25:39.317515 kubelet[2377]: E0711 05:25:39.317460 2377 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:25:39.332830 kubelet[2377]: E0711 05:25:39.332788 2377 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 05:25:39.418126 kubelet[2377]: E0711 05:25:39.418023 2377 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:25:39.419516 kubelet[2377]: E0711 05:25:39.419482 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="400ms" Jul 11 05:25:39.518877 kubelet[2377]: E0711 05:25:39.518809 2377 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:25:39.533063 kubelet[2377]: E0711 05:25:39.533012 2377 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 05:25:39.568421 kubelet[2377]: I0711 05:25:39.568362 2377 policy_none.go:49] "None policy: Start" Jul 11 05:25:39.568421 kubelet[2377]: I0711 05:25:39.568413 2377 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 05:25:39.568421 kubelet[2377]: I0711 05:25:39.568434 2377 state_mem.go:35] "Initializing new in-memory state store" Jul 11 05:25:39.580361 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 11 05:25:39.601984 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 05:25:39.605062 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 11 05:25:39.616625 kubelet[2377]: I0711 05:25:39.616590 2377 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 05:25:39.617045 kubelet[2377]: I0711 05:25:39.616836 2377 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 05:25:39.617045 kubelet[2377]: I0711 05:25:39.616855 2377 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 05:25:39.617045 kubelet[2377]: I0711 05:25:39.617046 2377 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 05:25:39.618028 kubelet[2377]: E0711 05:25:39.617979 2377 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 11 05:25:39.618172 kubelet[2377]: E0711 05:25:39.618062 2377 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 05:25:39.718866 kubelet[2377]: I0711 05:25:39.718742 2377 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 05:25:39.719309 kubelet[2377]: E0711 05:25:39.719227 2377 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jul 11 05:25:39.820604 kubelet[2377]: E0711 05:25:39.820552 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="800ms" Jul 11 05:25:39.921131 kubelet[2377]: I0711 05:25:39.921098 2377 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 05:25:39.921451 kubelet[2377]: E0711 05:25:39.921418 2377 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jul 11 05:25:39.942240 systemd[1]: Created slice kubepods-burstable-pod889d267cbf0559b1fbfcc2d7c0515461.slice - libcontainer container kubepods-burstable-pod889d267cbf0559b1fbfcc2d7c0515461.slice. Jul 11 05:25:39.962751 kubelet[2377]: E0711 05:25:39.962707 2377 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:25:39.965829 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 11 05:25:39.967795 kubelet[2377]: E0711 05:25:39.967766 2377 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:25:39.969887 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 11 05:25:39.971638 kubelet[2377]: E0711 05:25:39.971605 2377 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:25:40.021378 kubelet[2377]: I0711 05:25:40.021312 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/889d267cbf0559b1fbfcc2d7c0515461-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"889d267cbf0559b1fbfcc2d7c0515461\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:25:40.021378 kubelet[2377]: I0711 05:25:40.021370 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/889d267cbf0559b1fbfcc2d7c0515461-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"889d267cbf0559b1fbfcc2d7c0515461\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:25:40.021526 kubelet[2377]: I0711 05:25:40.021411 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:25:40.021526 kubelet[2377]: I0711 05:25:40.021443 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:25:40.021526 kubelet[2377]: I0711 05:25:40.021486 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/889d267cbf0559b1fbfcc2d7c0515461-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"889d267cbf0559b1fbfcc2d7c0515461\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:25:40.021526 kubelet[2377]: I0711 05:25:40.021512 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:25:40.021619 kubelet[2377]: I0711 05:25:40.021547 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:25:40.021619 kubelet[2377]: I0711 05:25:40.021570 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:25:40.021668 kubelet[2377]: I0711 05:25:40.021626 2377 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 11 05:25:40.093014 kubelet[2377]: W0711 05:25:40.092896 2377 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 11 05:25:40.093014 kubelet[2377]: E0711 05:25:40.093004 2377 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:25:40.237966 kubelet[2377]: W0711 05:25:40.237924 2377 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 11 05:25:40.238064 kubelet[2377]: E0711 05:25:40.237973 2377 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:25:40.264850 containerd[1591]: time="2025-07-11T05:25:40.264789215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:889d267cbf0559b1fbfcc2d7c0515461,Namespace:kube-system,Attempt:0,}" Jul 11 05:25:40.270137 containerd[1591]: time="2025-07-11T05:25:40.269793898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 11 05:25:40.273369 containerd[1591]: time="2025-07-11T05:25:40.273331892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 11 05:25:40.312897 containerd[1591]: time="2025-07-11T05:25:40.307554469Z" level=info msg="connecting to shim e9c58af00c8099026379c0a7aa02d1ddca609bef75ad1a81455db4808c442c93" address="unix:///run/containerd/s/1bcb6b134871ba0b16bdc3e77688badd2956d9abcaca6eca1b43e0d24afe97c3" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:25:40.314127 containerd[1591]: time="2025-07-11T05:25:40.314082009Z" level=info msg="connecting to shim 33f6f7a00193166693defe7cd6311a2172b318ec4bafa2a83af7855ad65f567f" address="unix:///run/containerd/s/51e62c8af60b501d8a54c4c5ab5f3e1ac34c7efe46cfa6dc4ebbed6b06d25f79" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:25:40.323113 kubelet[2377]: I0711 05:25:40.323058 2377 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 05:25:40.323635 kubelet[2377]: E0711 05:25:40.323594 2377 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jul 11 05:25:40.329542 containerd[1591]: time="2025-07-11T05:25:40.329450895Z" level=info 
msg="connecting to shim 7ca844010a4d0aedfcedf3789db935075c224a6121e957856b5c5d50af7014dc" address="unix:///run/containerd/s/f483b35df874c7478d67f00befc8d88e2fc0f46955eba1c5ec4064ec93ab266f" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:25:40.422389 kubelet[2377]: W0711 05:25:40.422127 2377 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 11 05:25:40.422389 kubelet[2377]: E0711 05:25:40.422332 2377 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:25:40.467990 systemd[1]: Started cri-containerd-e9c58af00c8099026379c0a7aa02d1ddca609bef75ad1a81455db4808c442c93.scope - libcontainer container e9c58af00c8099026379c0a7aa02d1ddca609bef75ad1a81455db4808c442c93. Jul 11 05:25:40.472880 systemd[1]: Started cri-containerd-33f6f7a00193166693defe7cd6311a2172b318ec4bafa2a83af7855ad65f567f.scope - libcontainer container 33f6f7a00193166693defe7cd6311a2172b318ec4bafa2a83af7855ad65f567f. Jul 11 05:25:40.474820 systemd[1]: Started cri-containerd-7ca844010a4d0aedfcedf3789db935075c224a6121e957856b5c5d50af7014dc.scope - libcontainer container 7ca844010a4d0aedfcedf3789db935075c224a6121e957856b5c5d50af7014dc. Jul 11 05:25:40.522108 containerd[1591]: time="2025-07-11T05:25:40.521993452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9c58af00c8099026379c0a7aa02d1ddca609bef75ad1a81455db4808c442c93\"" Jul 11 05:25:40.529825 containerd[1591]: time="2025-07-11T05:25:40.529721302Z" level=info msg="CreateContainer within sandbox \"e9c58af00c8099026379c0a7aa02d1ddca609bef75ad1a81455db4808c442c93\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 05:25:40.537325 containerd[1591]: time="2025-07-11T05:25:40.537264025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:889d267cbf0559b1fbfcc2d7c0515461,Namespace:kube-system,Attempt:0,} returns sandbox id \"33f6f7a00193166693defe7cd6311a2172b318ec4bafa2a83af7855ad65f567f\"" Jul 11 05:25:40.541283 containerd[1591]: time="2025-07-11T05:25:40.541243316Z" level=info msg="CreateContainer within sandbox \"33f6f7a00193166693defe7cd6311a2172b318ec4bafa2a83af7855ad65f567f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 05:25:40.542770 containerd[1591]: time="2025-07-11T05:25:40.542721718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ca844010a4d0aedfcedf3789db935075c224a6121e957856b5c5d50af7014dc\"" Jul 11 05:25:40.542863 containerd[1591]: time="2025-07-11T05:25:40.542810444Z" level=info msg="Container 5ebf0192b33f4eeb8bc3c46e03b787d8f90ee65987f7382edb76ca4b56672e83: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:25:40.544828 containerd[1591]: time="2025-07-11T05:25:40.544786940Z" level=info msg="CreateContainer within sandbox \"7ca844010a4d0aedfcedf3789db935075c224a6121e957856b5c5d50af7014dc\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 05:25:40.551166 containerd[1591]: time="2025-07-11T05:25:40.551127739Z" level=info msg="CreateContainer within sandbox \"e9c58af00c8099026379c0a7aa02d1ddca609bef75ad1a81455db4808c442c93\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5ebf0192b33f4eeb8bc3c46e03b787d8f90ee65987f7382edb76ca4b56672e83\"" Jul 11 05:25:40.551709 containerd[1591]: time="2025-07-11T05:25:40.551675286Z" level=info msg="StartContainer for \"5ebf0192b33f4eeb8bc3c46e03b787d8f90ee65987f7382edb76ca4b56672e83\"" Jul 11 05:25:40.552661 containerd[1591]: time="2025-07-11T05:25:40.552622913Z" level=info msg="Container 5d8db826e2e364575d696b65ade3b995c0cff7394177cdd70d877d6312028db3: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:25:40.553157 containerd[1591]: time="2025-07-11T05:25:40.553129472Z" level=info msg="connecting to shim 5ebf0192b33f4eeb8bc3c46e03b787d8f90ee65987f7382edb76ca4b56672e83" address="unix:///run/containerd/s/1bcb6b134871ba0b16bdc3e77688badd2956d9abcaca6eca1b43e0d24afe97c3" protocol=ttrpc version=3 Jul 11 05:25:40.558044 containerd[1591]: time="2025-07-11T05:25:40.557993553Z" level=info msg="Container b5b084a8a20c8f1fe364f9914895099ded82b171104d27381920cb06344043b4: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:25:40.565461 containerd[1591]: time="2025-07-11T05:25:40.565410870Z" level=info msg="CreateContainer within sandbox \"7ca844010a4d0aedfcedf3789db935075c224a6121e957856b5c5d50af7014dc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b5b084a8a20c8f1fe364f9914895099ded82b171104d27381920cb06344043b4\"" Jul 11 05:25:40.566749 containerd[1591]: time="2025-07-11T05:25:40.565950342Z" level=info msg="StartContainer for \"b5b084a8a20c8f1fe364f9914895099ded82b171104d27381920cb06344043b4\"" Jul 11 05:25:40.567330 containerd[1591]: time="2025-07-11T05:25:40.567304400Z" level=info msg="connecting to shim b5b084a8a20c8f1fe364f9914895099ded82b171104d27381920cb06344043b4" address="unix:///run/containerd/s/f483b35df874c7478d67f00befc8d88e2fc0f46955eba1c5ec4064ec93ab266f" protocol=ttrpc version=3 Jul 11 05:25:40.567584 containerd[1591]: time="2025-07-11T05:25:40.567543509Z" level=info msg="CreateContainer within sandbox \"33f6f7a00193166693defe7cd6311a2172b318ec4bafa2a83af7855ad65f567f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5d8db826e2e364575d696b65ade3b995c0cff7394177cdd70d877d6312028db3\"" Jul 11 05:25:40.568217 containerd[1591]: time="2025-07-11T05:25:40.568119699Z" level=info msg="StartContainer for \"5d8db826e2e364575d696b65ade3b995c0cff7394177cdd70d877d6312028db3\"" Jul 11 05:25:40.569193 containerd[1591]: time="2025-07-11T05:25:40.569169237Z" level=info msg="connecting to shim 5d8db826e2e364575d696b65ade3b995c0cff7394177cdd70d877d6312028db3" address="unix:///run/containerd/s/51e62c8af60b501d8a54c4c5ab5f3e1ac34c7efe46cfa6dc4ebbed6b06d25f79" protocol=ttrpc version=3 Jul 11 05:25:40.576106 systemd[1]: Started cri-containerd-5ebf0192b33f4eeb8bc3c46e03b787d8f90ee65987f7382edb76ca4b56672e83.scope - libcontainer container 5ebf0192b33f4eeb8bc3c46e03b787d8f90ee65987f7382edb76ca4b56672e83. Jul 11 05:25:40.598876 systemd[1]: Started cri-containerd-5d8db826e2e364575d696b65ade3b995c0cff7394177cdd70d877d6312028db3.scope - libcontainer container 5d8db826e2e364575d696b65ade3b995c0cff7394177cdd70d877d6312028db3. 
Jul 11 05:25:40.603268 systemd[1]: Started cri-containerd-b5b084a8a20c8f1fe364f9914895099ded82b171104d27381920cb06344043b4.scope - libcontainer container b5b084a8a20c8f1fe364f9914895099ded82b171104d27381920cb06344043b4. Jul 11 05:25:40.621232 kubelet[2377]: E0711 05:25:40.621179 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="1.6s" Jul 11 05:25:40.711775 kubelet[2377]: W0711 05:25:40.707808 2377 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 11 05:25:40.711775 kubelet[2377]: E0711 05:25:40.708119 2377 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:25:41.125265 kubelet[2377]: I0711 05:25:41.125217 2377 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 05:25:41.195438 containerd[1591]: time="2025-07-11T05:25:41.195386104Z" level=info msg="StartContainer for \"b5b084a8a20c8f1fe364f9914895099ded82b171104d27381920cb06344043b4\" returns successfully" Jul 11 05:25:41.196967 containerd[1591]: time="2025-07-11T05:25:41.196911824Z" level=info msg="StartContainer for \"5d8db826e2e364575d696b65ade3b995c0cff7394177cdd70d877d6312028db3\" returns successfully" Jul 11 05:25:41.198463 containerd[1591]: time="2025-07-11T05:25:41.198434329Z" level=info msg="StartContainer for \"5ebf0192b33f4eeb8bc3c46e03b787d8f90ee65987f7382edb76ca4b56672e83\" returns successfully" Jul 11 05:25:41.245448 kubelet[2377]: E0711 05:25:41.245258 2377 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:25:41.247894 kubelet[2377]: E0711 05:25:41.247867 2377 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:25:41.250831 kubelet[2377]: E0711 05:25:41.250776 2377 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:25:42.253087 kubelet[2377]: E0711 05:25:42.252788 2377 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:25:42.253087 kubelet[2377]: E0711 05:25:42.252972 2377 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:25:42.352481 kubelet[2377]: I0711 05:25:42.352429 2377 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 05:25:42.352481 kubelet[2377]: E0711 05:25:42.352469 2377 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 11 05:25:42.452281 kubelet[2377]: E0711 05:25:42.452236 2377 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"localhost\" not found" Jul 11 05:25:42.552903 kubelet[2377]: E0711 05:25:42.552719 2377 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:25:42.653448 kubelet[2377]: E0711 05:25:42.653395 2377 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:25:42.754529 kubelet[2377]: E0711 05:25:42.754472 2377 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:25:42.855218 kubelet[2377]: E0711 05:25:42.855115 2377 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:25:42.955717 kubelet[2377]: E0711 05:25:42.955667 2377 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:25:43.221822 kubelet[2377]: I0711 05:25:43.221275 2377 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 05:25:43.221822 kubelet[2377]: I0711 05:25:43.221470 2377 apiserver.go:52] "Watching apiserver" Jul 11 05:25:43.232127 kubelet[2377]: I0711 05:25:43.232039 2377 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 05:25:43.236759 kubelet[2377]: I0711 05:25:43.236607 2377 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 05:25:43.253053 kubelet[2377]: I0711 05:25:43.253008 2377 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 05:25:43.257447 kubelet[2377]: E0711 05:25:43.257407 2377 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 05:25:43.318235 kubelet[2377]: I0711 05:25:43.318180 2377 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 05:25:44.633773 systemd[1]: Reload requested from client PID 2650 ('systemctl') (unit session-7.scope)... Jul 11 05:25:44.633789 systemd[1]: Reloading... Jul 11 05:25:44.706759 zram_generator::config[2693]: No configuration found. Jul 11 05:25:44.825928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 05:25:44.963242 systemd[1]: Reloading finished in 329 ms. Jul 11 05:25:44.989755 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:25:45.009012 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 05:25:45.009340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:25:45.009391 systemd[1]: kubelet.service: Consumed 978ms CPU time, 131.6M memory peak. Jul 11 05:25:45.011254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:25:45.223663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:25:45.235114 (kubelet)[2738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 05:25:45.278182 kubelet[2738]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 05:25:45.278182 kubelet[2738]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 05:25:45.278182 kubelet[2738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 05:25:45.278677 kubelet[2738]: I0711 05:25:45.278249 2738 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 05:25:45.286960 kubelet[2738]: I0711 05:25:45.286925 2738 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 05:25:45.286960 kubelet[2738]: I0711 05:25:45.286951 2738 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 05:25:45.287314 kubelet[2738]: I0711 05:25:45.287279 2738 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 05:25:45.288893 kubelet[2738]: I0711 05:25:45.288873 2738 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 05:25:45.292411 kubelet[2738]: I0711 05:25:45.292328 2738 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 05:25:45.299628 kubelet[2738]: I0711 05:25:45.299583 2738 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 05:25:45.306430 kubelet[2738]: I0711 05:25:45.306401 2738 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 05:25:45.306668 kubelet[2738]: I0711 05:25:45.306626 2738 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 05:25:45.306861 kubelet[2738]: I0711 05:25:45.306662 2738 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 05:25:45.307031 kubelet[2738]: I0711 05:25:45.306877 2738 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 05:25:45.307031 kubelet[2738]: I0711 05:25:45.306900 2738 container_manager_linux.go:304] "Creating device plugin manager" Jul 11 05:25:45.307031 kubelet[2738]: I0711 05:25:45.306992 2738 state_mem.go:36] "Initialized new in-memory state store" Jul 11 05:25:45.307264 kubelet[2738]: I0711 05:25:45.307238 2738 kubelet.go:446] "Attempting to sync node with API server" Jul 11 05:25:45.307292 kubelet[2738]: I0711 05:25:45.307271 2738 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 05:25:45.307317 kubelet[2738]: I0711 05:25:45.307297 2738 kubelet.go:352] "Adding apiserver pod source" Jul 11 05:25:45.307317 kubelet[2738]: I0711 05:25:45.307308 2738 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 05:25:45.308767 kubelet[2738]: I0711 05:25:45.308496 2738 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 11 05:25:45.309114 kubelet[2738]: I0711 05:25:45.309100 2738 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 05:25:45.309570 kubelet[2738]: I0711 05:25:45.309530 2738 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 05:25:45.309570 kubelet[2738]: I0711 05:25:45.309564 2738 server.go:1287] "Started kubelet" Jul 11 05:25:45.309750 kubelet[2738]: I0711 05:25:45.309684 2738 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 05:25:45.310021 kubelet[2738]: I0711 05:25:45.309905 2738 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 05:25:45.310378 kubelet[2738]: I0711 05:25:45.310325 2738 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 05:25:45.313583 kubelet[2738]: I0711 05:25:45.313527 2738 server.go:479] "Adding debug handlers to kubelet server" Jul 11 05:25:45.315836 kubelet[2738]: E0711 05:25:45.315793 2738 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 05:25:45.316477 kubelet[2738]: I0711 05:25:45.316295 2738 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 05:25:45.317797 kubelet[2738]: I0711 05:25:45.317770 2738 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 05:25:45.320888 kubelet[2738]: E0711 05:25:45.319022 2738 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:25:45.321483 kubelet[2738]: I0711 05:25:45.321451 2738 factory.go:221] Registration of the systemd container factory successfully Jul 11 05:25:45.322898 kubelet[2738]: I0711 05:25:45.322871 2738 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 05:25:45.322972 kubelet[2738]: I0711 05:25:45.322937 2738 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 05:25:45.325821 kubelet[2738]: I0711 05:25:45.325795 2738 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 05:25:45.326046 kubelet[2738]: I0711 05:25:45.326017 2738 reconciler.go:26] "Reconciler: start to sync state" Jul 11 05:25:45.330591 kubelet[2738]: I0711 05:25:45.330515 2738 factory.go:221] Registration of the containerd container factory successfully Jul 11 05:25:45.339705 kubelet[2738]: I0711 05:25:45.339660 2738 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 05:25:45.341452 kubelet[2738]: I0711 05:25:45.341431 2738 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 05:25:45.341502 kubelet[2738]: I0711 05:25:45.341482 2738 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 05:25:45.341527 kubelet[2738]: I0711 05:25:45.341506 2738 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 11 05:25:45.341527 kubelet[2738]: I0711 05:25:45.341513 2738 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 05:25:45.341580 kubelet[2738]: E0711 05:25:45.341556 2738 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 05:25:45.370110 kubelet[2738]: I0711 05:25:45.370085 2738 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 05:25:45.370110 kubelet[2738]: I0711 05:25:45.370099 2738 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 05:25:45.370110 kubelet[2738]: I0711 05:25:45.370117 2738 state_mem.go:36] "Initialized new in-memory state store" Jul 11 05:25:45.370286 kubelet[2738]: I0711 05:25:45.370267 2738 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 05:25:45.370311 kubelet[2738]: I0711 05:25:45.370283 2738 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 05:25:45.370311 kubelet[2738]: I0711 05:25:45.370301 2738 policy_none.go:49] "None policy: Start" Jul 11 05:25:45.370311 kubelet[2738]: I0711 05:25:45.370311 2738 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 05:25:45.370375 kubelet[2738]: I0711 05:25:45.370319 2738 state_mem.go:35] "Initializing new in-memory state store" Jul 11 05:25:45.370417 kubelet[2738]: I0711 05:25:45.370407 2738 state_mem.go:75] "Updated machine memory state" Jul 11 05:25:45.374212 kubelet[2738]: I0711 05:25:45.374181 2738 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 05:25:45.374347 kubelet[2738]: I0711 05:25:45.374333 2738 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 05:25:45.374376 kubelet[2738]: I0711 05:25:45.374346 2738 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 05:25:45.374512 kubelet[2738]: I0711 05:25:45.374495 2738 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 05:25:45.375453 kubelet[2738]: E0711 05:25:45.375431 2738 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 11 05:25:45.443134 kubelet[2738]: I0711 05:25:45.443081 2738 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 05:25:45.443134 kubelet[2738]: I0711 05:25:45.443106 2738 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 05:25:45.443134 kubelet[2738]: I0711 05:25:45.443133 2738 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 05:25:45.476698 kubelet[2738]: I0711 05:25:45.476563 2738 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 05:25:45.552568 kubelet[2738]: E0711 05:25:45.552441 2738 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 05:25:45.552568 kubelet[2738]: E0711 05:25:45.552445 2738 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 05:25:45.553365 kubelet[2738]: E0711 05:25:45.552913 2738 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 11 05:25:45.554010 kubelet[2738]: I0711 05:25:45.553981 2738 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 11 05:25:45.554073 kubelet[2738]: I0711 05:25:45.554052 2738 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 05:25:45.627340 kubelet[2738]: I0711 05:25:45.627294 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/889d267cbf0559b1fbfcc2d7c0515461-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"889d267cbf0559b1fbfcc2d7c0515461\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:25:45.627523 kubelet[2738]: I0711 05:25:45.627344 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:25:45.627523 kubelet[2738]: I0711 05:25:45.627374 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:25:45.627523 kubelet[2738]: I0711 05:25:45.627399 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:25:45.627523 kubelet[2738]: I0711 05:25:45.627412 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/889d267cbf0559b1fbfcc2d7c0515461-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"889d267cbf0559b1fbfcc2d7c0515461\") " 
pod="kube-system/kube-apiserver-localhost" Jul 11 05:25:45.627523 kubelet[2738]: I0711 05:25:45.627430 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/889d267cbf0559b1fbfcc2d7c0515461-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"889d267cbf0559b1fbfcc2d7c0515461\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:25:45.627765 kubelet[2738]: I0711 05:25:45.627472 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:25:45.627765 kubelet[2738]: I0711 05:25:45.627541 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:25:45.627765 kubelet[2738]: I0711 05:25:45.627603 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 11 05:25:46.307540 kubelet[2738]: I0711 05:25:46.307482 2738 apiserver.go:52] "Watching apiserver" Jul 11 05:25:46.326227 kubelet[2738]: I0711 05:25:46.326187 2738 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 05:25:46.356995 kubelet[2738]: I0711 05:25:46.356838 2738 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 05:25:46.356995 kubelet[2738]: I0711 05:25:46.356905 2738 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 05:25:46.576637 kubelet[2738]: E0711 05:25:46.576454 2738 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 05:25:46.577588 kubelet[2738]: E0711 05:25:46.577556 2738 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 05:25:46.578077 kubelet[2738]: I0711 05:25:46.577984 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.577922709 podStartE2EDuration="3.577922709s" podCreationTimestamp="2025-07-11 05:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:25:46.577190351 +0000 UTC m=+1.337379399" watchObservedRunningTime="2025-07-11 05:25:46.577922709 +0000 UTC m=+1.338111757" Jul 11 05:25:46.585191 kubelet[2738]: I0711 05:25:46.585109 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.585085087 podStartE2EDuration="3.585085087s" podCreationTimestamp="2025-07-11 05:25:43 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:25:46.585080489 +0000 UTC m=+1.345269537" watchObservedRunningTime="2025-07-11 05:25:46.585085087 +0000 UTC m=+1.345274136" Jul 11 05:25:46.600757 kubelet[2738]: I0711 05:25:46.600019 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.5999997 podStartE2EDuration="3.5999997s" podCreationTimestamp="2025-07-11 05:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:25:46.592123087 +0000 UTC m=+1.352312135" watchObservedRunningTime="2025-07-11 05:25:46.5999997 +0000 UTC m=+1.360188748" Jul 11 05:25:49.663375 kubelet[2738]: I0711 05:25:49.663328 2738 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 05:25:49.663921 kubelet[2738]: I0711 05:25:49.663849 2738 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 05:25:49.663953 containerd[1591]: time="2025-07-11T05:25:49.663672170Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 05:25:50.282468 systemd[1]: Created slice kubepods-besteffort-pod582e5222_5460_4716_af41_cd2c5eb13288.slice - libcontainer container kubepods-besteffort-pod582e5222_5460_4716_af41_cd2c5eb13288.slice. Jul 11 05:25:50.462712 kubelet[2738]: I0711 05:25:50.462630 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clxc8\" (UniqueName: \"kubernetes.io/projected/582e5222-5460-4716-af41-cd2c5eb13288-kube-api-access-clxc8\") pod \"kube-proxy-5g9lz\" (UID: \"582e5222-5460-4716-af41-cd2c5eb13288\") " pod="kube-system/kube-proxy-5g9lz" Jul 11 05:25:50.462712 kubelet[2738]: I0711 05:25:50.462695 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/582e5222-5460-4716-af41-cd2c5eb13288-kube-proxy\") pod \"kube-proxy-5g9lz\" (UID: \"582e5222-5460-4716-af41-cd2c5eb13288\") " pod="kube-system/kube-proxy-5g9lz" Jul 11 05:25:50.462712 kubelet[2738]: I0711 05:25:50.462717 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/582e5222-5460-4716-af41-cd2c5eb13288-xtables-lock\") pod \"kube-proxy-5g9lz\" (UID: \"582e5222-5460-4716-af41-cd2c5eb13288\") " pod="kube-system/kube-proxy-5g9lz" Jul 11 05:25:50.462985 kubelet[2738]: I0711 05:25:50.462760 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/582e5222-5460-4716-af41-cd2c5eb13288-lib-modules\") pod \"kube-proxy-5g9lz\" (UID: \"582e5222-5460-4716-af41-cd2c5eb13288\") " pod="kube-system/kube-proxy-5g9lz" Jul 11 05:25:50.484403 systemd[1]: Created slice kubepods-besteffort-pod78c3cbe2_7bd1_4c6b_930c_fc94d4ba34c2.slice - libcontainer container kubepods-besteffort-pod78c3cbe2_7bd1_4c6b_930c_fc94d4ba34c2.slice. 
Jul 11 05:25:50.564263 kubelet[2738]: I0711 05:25:50.563886 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/78c3cbe2-7bd1-4c6b-930c-fc94d4ba34c2-var-lib-calico\") pod \"tigera-operator-747864d56d-77l84\" (UID: \"78c3cbe2-7bd1-4c6b-930c-fc94d4ba34c2\") " pod="tigera-operator/tigera-operator-747864d56d-77l84" Jul 11 05:25:50.564263 kubelet[2738]: I0711 05:25:50.563960 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2msb\" (UniqueName: \"kubernetes.io/projected/78c3cbe2-7bd1-4c6b-930c-fc94d4ba34c2-kube-api-access-v2msb\") pod \"tigera-operator-747864d56d-77l84\" (UID: \"78c3cbe2-7bd1-4c6b-930c-fc94d4ba34c2\") " pod="tigera-operator/tigera-operator-747864d56d-77l84" Jul 11 05:25:50.595004 containerd[1591]: time="2025-07-11T05:25:50.594949470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5g9lz,Uid:582e5222-5460-4716-af41-cd2c5eb13288,Namespace:kube-system,Attempt:0,}" Jul 11 05:25:50.790881 containerd[1591]: time="2025-07-11T05:25:50.790805447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-77l84,Uid:78c3cbe2-7bd1-4c6b-930c-fc94d4ba34c2,Namespace:tigera-operator,Attempt:0,}" Jul 11 05:25:50.875930 containerd[1591]: time="2025-07-11T05:25:50.875706213Z" level=info msg="connecting to shim 6b239239d1778633a6e6cc0eb21f4faf0f462fb962710278766a6a0947691d52" address="unix:///run/containerd/s/4ae24133d46c2856dd82f9bc0321a88c5172047ba2ad2b0db9c2a722d47ade18" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:25:50.880626 containerd[1591]: time="2025-07-11T05:25:50.880176263Z" level=info msg="connecting to shim 411c733d961d8df2a568c077c6c26c9237b0246373955675109298619576c376" address="unix:///run/containerd/s/6772dd65cd8c1a902a298729781a3be4b4d54bf7316a0a8b3e1bb9573650140b" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:25:50.911874 systemd[1]: Started cri-containerd-411c733d961d8df2a568c077c6c26c9237b0246373955675109298619576c376.scope - libcontainer container 411c733d961d8df2a568c077c6c26c9237b0246373955675109298619576c376. Jul 11 05:25:50.915384 systemd[1]: Started cri-containerd-6b239239d1778633a6e6cc0eb21f4faf0f462fb962710278766a6a0947691d52.scope - libcontainer container 6b239239d1778633a6e6cc0eb21f4faf0f462fb962710278766a6a0947691d52. 
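The "RunPodSandbox" entries above are containerd echoing the CRI sandbox metadata it received from the kubelet, and the systemd units that follow show each sandbox getting its own cri-containerd-<id>.scope. As a rough sketch of what that metadata carries, here is a local struct that mirrors the four fields printed in the log; it is a stand-in for illustration only, not the real CRI API type.

```go
package main

import "fmt"

// podSandboxMetadata mirrors the four fields containerd prints in the
// RunPodSandbox entries above; the real message type lives in the CRI API,
// this local copy exists only so the example is self-contained.
type podSandboxMetadata struct {
	Name      string
	Uid       string
	Namespace string
	Attempt   uint32
}

func main() {
	// The sandbox metadata logged for the kube-proxy pod.
	meta := podSandboxMetadata{
		Name:      "kube-proxy-5g9lz",
		Uid:       "582e5222-5460-4716-af41-cd2c5eb13288",
		Namespace: "kube-system",
		Attempt:   0,
	}
	fmt.Printf("RunPodSandbox for &PodSandboxMetadata{Name:%s,Uid:%s,Namespace:%s,Attempt:%d,}\n",
		meta.Name, meta.Uid, meta.Namespace, meta.Attempt)
}
```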
Jul 11 05:25:50.944270 containerd[1591]: time="2025-07-11T05:25:50.944235026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5g9lz,Uid:582e5222-5460-4716-af41-cd2c5eb13288,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b239239d1778633a6e6cc0eb21f4faf0f462fb962710278766a6a0947691d52\"" Jul 11 05:25:50.949131 containerd[1591]: time="2025-07-11T05:25:50.948509101Z" level=info msg="CreateContainer within sandbox \"6b239239d1778633a6e6cc0eb21f4faf0f462fb962710278766a6a0947691d52\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 05:25:50.963921 containerd[1591]: time="2025-07-11T05:25:50.963878027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-77l84,Uid:78c3cbe2-7bd1-4c6b-930c-fc94d4ba34c2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"411c733d961d8df2a568c077c6c26c9237b0246373955675109298619576c376\"" Jul 11 05:25:50.965830 containerd[1591]: time="2025-07-11T05:25:50.965579321Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 11 05:25:50.966118 containerd[1591]: time="2025-07-11T05:25:50.966074849Z" level=info msg="Container 8fba90e0dd3c4758176869855cc55695d8ee58c92840f7ef080ee0339c96267a: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:25:50.975871 containerd[1591]: time="2025-07-11T05:25:50.975832903Z" level=info msg="CreateContainer within sandbox \"6b239239d1778633a6e6cc0eb21f4faf0f462fb962710278766a6a0947691d52\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8fba90e0dd3c4758176869855cc55695d8ee58c92840f7ef080ee0339c96267a\"" Jul 11 05:25:50.976461 containerd[1591]: time="2025-07-11T05:25:50.976405878Z" level=info msg="StartContainer for \"8fba90e0dd3c4758176869855cc55695d8ee58c92840f7ef080ee0339c96267a\"" Jul 11 05:25:50.978165 containerd[1591]: time="2025-07-11T05:25:50.978130677Z" level=info msg="connecting to shim 8fba90e0dd3c4758176869855cc55695d8ee58c92840f7ef080ee0339c96267a" address="unix:///run/containerd/s/4ae24133d46c2856dd82f9bc0321a88c5172047ba2ad2b0db9c2a722d47ade18" protocol=ttrpc version=3 Jul 11 05:25:51.000872 systemd[1]: Started cri-containerd-8fba90e0dd3c4758176869855cc55695d8ee58c92840f7ef080ee0339c96267a.scope - libcontainer container 8fba90e0dd3c4758176869855cc55695d8ee58c92840f7ef080ee0339c96267a. Jul 11 05:25:51.049197 containerd[1591]: time="2025-07-11T05:25:51.049095288Z" level=info msg="StartContainer for \"8fba90e0dd3c4758176869855cc55695d8ee58c92840f7ef080ee0339c96267a\" returns successfully" Jul 11 05:25:51.378413 kubelet[2738]: I0711 05:25:51.378334 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5g9lz" podStartSLOduration=1.378313987 podStartE2EDuration="1.378313987s" podCreationTimestamp="2025-07-11 05:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:25:51.378204638 +0000 UTC m=+6.138393686" watchObservedRunningTime="2025-07-11 05:25:51.378313987 +0000 UTC m=+6.138503035" Jul 11 05:25:52.392928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2127512960.mount: Deactivated successfully. 
Jul 11 05:25:53.199913 containerd[1591]: time="2025-07-11T05:25:53.199849605Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:53.200590 containerd[1591]: time="2025-07-11T05:25:53.200555591Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 11 05:25:53.201766 containerd[1591]: time="2025-07-11T05:25:53.201715001Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:53.203508 containerd[1591]: time="2025-07-11T05:25:53.203478302Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:25:53.204133 containerd[1591]: time="2025-07-11T05:25:53.204101600Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.238496951s" Jul 11 05:25:53.204133 containerd[1591]: time="2025-07-11T05:25:53.204130956Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 11 05:25:53.205724 containerd[1591]: time="2025-07-11T05:25:53.205693735Z" level=info msg="CreateContainer within sandbox \"411c733d961d8df2a568c077c6c26c9237b0246373955675109298619576c376\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 11 05:25:53.213634 containerd[1591]: time="2025-07-11T05:25:53.213587660Z" level=info msg="Container 14fed3e7e2395373007d6b71b189455e1099b4223f07cc4a0f4cb221801e77b2: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:25:53.218602 containerd[1591]: time="2025-07-11T05:25:53.218543096Z" level=info msg="CreateContainer within sandbox \"411c733d961d8df2a568c077c6c26c9237b0246373955675109298619576c376\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"14fed3e7e2395373007d6b71b189455e1099b4223f07cc4a0f4cb221801e77b2\"" Jul 11 05:25:53.219122 containerd[1591]: time="2025-07-11T05:25:53.219091270Z" level=info msg="StartContainer for \"14fed3e7e2395373007d6b71b189455e1099b4223f07cc4a0f4cb221801e77b2\"" Jul 11 05:25:53.219863 containerd[1591]: time="2025-07-11T05:25:53.219805943Z" level=info msg="connecting to shim 14fed3e7e2395373007d6b71b189455e1099b4223f07cc4a0f4cb221801e77b2" address="unix:///run/containerd/s/6772dd65cd8c1a902a298729781a3be4b4d54bf7316a0a8b3e1bb9573650140b" protocol=ttrpc version=3 Jul 11 05:25:53.269865 systemd[1]: Started cri-containerd-14fed3e7e2395373007d6b71b189455e1099b4223f07cc4a0f4cb221801e77b2.scope - libcontainer container 14fed3e7e2395373007d6b71b189455e1099b4223f07cc4a0f4cb221801e77b2. 
Jul 11 05:25:53.372002 containerd[1591]: time="2025-07-11T05:25:53.371903546Z" level=info msg="StartContainer for \"14fed3e7e2395373007d6b71b189455e1099b4223f07cc4a0f4cb221801e77b2\" returns successfully" Jul 11 05:25:54.386021 kubelet[2738]: I0711 05:25:54.385923 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-77l84" podStartSLOduration=2.146312868 podStartE2EDuration="4.385902554s" podCreationTimestamp="2025-07-11 05:25:50 +0000 UTC" firstStartedPulling="2025-07-11 05:25:50.965181269 +0000 UTC m=+5.725370308" lastFinishedPulling="2025-07-11 05:25:53.204770946 +0000 UTC m=+7.964959994" observedRunningTime="2025-07-11 05:25:54.385495328 +0000 UTC m=+9.145684376" watchObservedRunningTime="2025-07-11 05:25:54.385902554 +0000 UTC m=+9.146091602" Jul 11 05:25:57.525158 update_engine[1579]: I20250711 05:25:57.525043 1579 update_attempter.cc:509] Updating boot flags... Jul 11 05:25:59.071865 sudo[1806]: pam_unix(sudo:session): session closed for user root Jul 11 05:25:59.074141 sshd[1805]: Connection closed by 10.0.0.1 port 55752 Jul 11 05:25:59.075114 sshd-session[1802]: pam_unix(sshd:session): session closed for user core Jul 11 05:25:59.081825 systemd[1]: sshd@6-10.0.0.97:22-10.0.0.1:55752.service: Deactivated successfully. Jul 11 05:25:59.086861 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 05:25:59.088279 systemd[1]: session-7.scope: Consumed 5.122s CPU time, 227.9M memory peak. Jul 11 05:25:59.090740 systemd-logind[1578]: Session 7 logged out. Waiting for processes to exit. Jul 11 05:25:59.095649 systemd-logind[1578]: Removed session 7. Jul 11 05:26:01.512518 systemd[1]: Created slice kubepods-besteffort-pode789a7f1_a709_4423_8003_55d3dccb5ab6.slice - libcontainer container kubepods-besteffort-pode789a7f1_a709_4423_8003_55d3dccb5ab6.slice. Jul 11 05:26:01.533184 kubelet[2738]: I0711 05:26:01.533132 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8djfg\" (UniqueName: \"kubernetes.io/projected/e789a7f1-a709-4423-8003-55d3dccb5ab6-kube-api-access-8djfg\") pod \"calico-typha-57c4df8565-8vvfr\" (UID: \"e789a7f1-a709-4423-8003-55d3dccb5ab6\") " pod="calico-system/calico-typha-57c4df8565-8vvfr" Jul 11 05:26:01.534424 kubelet[2738]: I0711 05:26:01.534254 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e789a7f1-a709-4423-8003-55d3dccb5ab6-typha-certs\") pod \"calico-typha-57c4df8565-8vvfr\" (UID: \"e789a7f1-a709-4423-8003-55d3dccb5ab6\") " pod="calico-system/calico-typha-57c4df8565-8vvfr" Jul 11 05:26:01.534424 kubelet[2738]: I0711 05:26:01.534367 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e789a7f1-a709-4423-8003-55d3dccb5ab6-tigera-ca-bundle\") pod \"calico-typha-57c4df8565-8vvfr\" (UID: \"e789a7f1-a709-4423-8003-55d3dccb5ab6\") " pod="calico-system/calico-typha-57c4df8565-8vvfr" Jul 11 05:26:01.820100 containerd[1591]: time="2025-07-11T05:26:01.819961259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57c4df8565-8vvfr,Uid:e789a7f1-a709-4423-8003-55d3dccb5ab6,Namespace:calico-system,Attempt:0,}" Jul 11 05:26:01.936855 systemd[1]: Created slice kubepods-besteffort-pode74db243_d4bf_4acd_aa80_7bdcc2999025.slice - libcontainer container kubepods-besteffort-pode74db243_d4bf_4acd_aa80_7bdcc2999025.slice. 
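The pod_startup_latency_tracker entries above are internally consistent and worth decoding. For the tigera-operator pod, podStartE2EDuration is the wall time from pod creation to the observed running time, while podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ offsets): 4.385902554 s - (7.964959994 s - 5.725370308 s) = 2.146312868 s, exactly the value logged. For kube-proxy earlier, both durations are equal because no pull happened (the pulling timestamps are the zero time). A small check of that arithmetic, using only numbers reported in the log:

```go
package main

import "fmt"

func main() {
	// Values copied verbatim from the tigera-operator startup entry above.
	e2e := 4.385902554                 // podStartE2EDuration, seconds
	firstStartedPulling := 5.725370308 // monotonic m=+ offset, seconds
	lastFinishedPulling := 7.964959994 // monotonic m=+ offset, seconds

	pullWindow := lastFinishedPulling - firstStartedPulling
	slo := e2e - pullWindow

	fmt.Printf("image pull window:   %.9f s\n", pullWindow) // 2.239589686 s
	fmt.Printf("podStartSLOduration: %.9f s\n", slo)        // 2.146312868 s, matching the log
}
```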
Jul 11 05:26:01.942658 containerd[1591]: time="2025-07-11T05:26:01.942582745Z" level=info msg="connecting to shim 9d86264ebf6397893d9b9aef92e8d302194a412261a9cbbb463d76aacef9622d" address="unix:///run/containerd/s/3883876b551b54d2bbcb2076608511ac4a501f0a9b6e6538d3dd771f0f3a4ada" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:26:01.943070 kubelet[2738]: I0711 05:26:01.942817 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e74db243-d4bf-4acd-aa80-7bdcc2999025-xtables-lock\") pod \"calico-node-wsjmd\" (UID: \"e74db243-d4bf-4acd-aa80-7bdcc2999025\") " pod="calico-system/calico-node-wsjmd" Jul 11 05:26:01.943070 kubelet[2738]: I0711 05:26:01.942922 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e74db243-d4bf-4acd-aa80-7bdcc2999025-lib-modules\") pod \"calico-node-wsjmd\" (UID: \"e74db243-d4bf-4acd-aa80-7bdcc2999025\") " pod="calico-system/calico-node-wsjmd" Jul 11 05:26:01.943070 kubelet[2738]: I0711 05:26:01.942951 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e74db243-d4bf-4acd-aa80-7bdcc2999025-policysync\") pod \"calico-node-wsjmd\" (UID: \"e74db243-d4bf-4acd-aa80-7bdcc2999025\") " pod="calico-system/calico-node-wsjmd" Jul 11 05:26:01.943070 kubelet[2738]: I0711 05:26:01.942976 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e74db243-d4bf-4acd-aa80-7bdcc2999025-cni-log-dir\") pod \"calico-node-wsjmd\" (UID: \"e74db243-d4bf-4acd-aa80-7bdcc2999025\") " pod="calico-system/calico-node-wsjmd" Jul 11 05:26:01.943070 kubelet[2738]: I0711 05:26:01.943000 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e74db243-d4bf-4acd-aa80-7bdcc2999025-cni-net-dir\") pod \"calico-node-wsjmd\" (UID: \"e74db243-d4bf-4acd-aa80-7bdcc2999025\") " pod="calico-system/calico-node-wsjmd" Jul 11 05:26:01.943271 kubelet[2738]: I0711 05:26:01.943036 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e74db243-d4bf-4acd-aa80-7bdcc2999025-cni-bin-dir\") pod \"calico-node-wsjmd\" (UID: \"e74db243-d4bf-4acd-aa80-7bdcc2999025\") " pod="calico-system/calico-node-wsjmd" Jul 11 05:26:01.943271 kubelet[2738]: I0711 05:26:01.943059 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e74db243-d4bf-4acd-aa80-7bdcc2999025-node-certs\") pod \"calico-node-wsjmd\" (UID: \"e74db243-d4bf-4acd-aa80-7bdcc2999025\") " pod="calico-system/calico-node-wsjmd" Jul 11 05:26:01.943271 kubelet[2738]: I0711 05:26:01.943084 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wptxg\" (UniqueName: \"kubernetes.io/projected/e74db243-d4bf-4acd-aa80-7bdcc2999025-kube-api-access-wptxg\") pod \"calico-node-wsjmd\" (UID: \"e74db243-d4bf-4acd-aa80-7bdcc2999025\") " pod="calico-system/calico-node-wsjmd" Jul 11 05:26:01.943271 kubelet[2738]: I0711 05:26:01.943109 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e74db243-d4bf-4acd-aa80-7bdcc2999025-tigera-ca-bundle\") pod \"calico-node-wsjmd\" (UID: \"e74db243-d4bf-4acd-aa80-7bdcc2999025\") " pod="calico-system/calico-node-wsjmd" Jul 11 05:26:01.943271 kubelet[2738]: I0711 05:26:01.943130 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e74db243-d4bf-4acd-aa80-7bdcc2999025-var-lib-calico\") pod \"calico-node-wsjmd\" (UID: \"e74db243-d4bf-4acd-aa80-7bdcc2999025\") " pod="calico-system/calico-node-wsjmd" Jul 11 05:26:01.943425 kubelet[2738]: I0711 05:26:01.943150 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e74db243-d4bf-4acd-aa80-7bdcc2999025-var-run-calico\") pod \"calico-node-wsjmd\" (UID: \"e74db243-d4bf-4acd-aa80-7bdcc2999025\") " pod="calico-system/calico-node-wsjmd" Jul 11 05:26:01.943425 kubelet[2738]: I0711 05:26:01.943173 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e74db243-d4bf-4acd-aa80-7bdcc2999025-flexvol-driver-host\") pod \"calico-node-wsjmd\" (UID: \"e74db243-d4bf-4acd-aa80-7bdcc2999025\") " pod="calico-system/calico-node-wsjmd" Jul 11 05:26:01.979054 systemd[1]: Started cri-containerd-9d86264ebf6397893d9b9aef92e8d302194a412261a9cbbb463d76aacef9622d.scope - libcontainer container 9d86264ebf6397893d9b9aef92e8d302194a412261a9cbbb463d76aacef9622d. Jul 11 05:26:02.036915 containerd[1591]: time="2025-07-11T05:26:02.036864649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57c4df8565-8vvfr,Uid:e789a7f1-a709-4423-8003-55d3dccb5ab6,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d86264ebf6397893d9b9aef92e8d302194a412261a9cbbb463d76aacef9622d\"" Jul 11 05:26:02.038582 containerd[1591]: time="2025-07-11T05:26:02.038484704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 11 05:26:02.045418 kubelet[2738]: E0711 05:26:02.045209 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.045418 kubelet[2738]: W0711 05:26:02.045231 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.045418 kubelet[2738]: E0711 05:26:02.045294 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.045804 kubelet[2738]: E0711 05:26:02.045787 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.045951 kubelet[2738]: W0711 05:26:02.045901 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.045951 kubelet[2738]: E0711 05:26:02.045930 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:02.048819 kubelet[2738]: E0711 05:26:02.048789 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.048819 kubelet[2738]: W0711 05:26:02.048809 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.048949 kubelet[2738]: E0711 05:26:02.048827 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.053936 kubelet[2738]: E0711 05:26:02.053913 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.053936 kubelet[2738]: W0711 05:26:02.053927 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.053936 kubelet[2738]: E0711 05:26:02.053938 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.134445 kubelet[2738]: E0711 05:26:02.134182 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2m4jb" podUID="a236dc34-4c45-4ad9-84b1-58388e163ac6" Jul 11 05:26:02.232851 kubelet[2738]: E0711 05:26:02.232806 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.232851 kubelet[2738]: W0711 05:26:02.232832 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.232851 kubelet[2738]: E0711 05:26:02.232858 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.233088 kubelet[2738]: E0711 05:26:02.233058 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.233088 kubelet[2738]: W0711 05:26:02.233067 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.233088 kubelet[2738]: E0711 05:26:02.233077 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:02.233278 kubelet[2738]: E0711 05:26:02.233258 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.233278 kubelet[2738]: W0711 05:26:02.233271 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.233333 kubelet[2738]: E0711 05:26:02.233284 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.233566 kubelet[2738]: E0711 05:26:02.233538 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.233566 kubelet[2738]: W0711 05:26:02.233552 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.233566 kubelet[2738]: E0711 05:26:02.233563 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.233797 kubelet[2738]: E0711 05:26:02.233780 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.233797 kubelet[2738]: W0711 05:26:02.233793 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.233866 kubelet[2738]: E0711 05:26:02.233807 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.234002 kubelet[2738]: E0711 05:26:02.233985 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.234002 kubelet[2738]: W0711 05:26:02.233997 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.234057 kubelet[2738]: E0711 05:26:02.234007 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.234225 kubelet[2738]: E0711 05:26:02.234193 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.234225 kubelet[2738]: W0711 05:26:02.234208 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.234225 kubelet[2738]: E0711 05:26:02.234218 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:02.234459 kubelet[2738]: E0711 05:26:02.234442 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.234459 kubelet[2738]: W0711 05:26:02.234454 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.234505 kubelet[2738]: E0711 05:26:02.234466 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.234688 kubelet[2738]: E0711 05:26:02.234672 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.234688 kubelet[2738]: W0711 05:26:02.234685 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.234755 kubelet[2738]: E0711 05:26:02.234695 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.234905 kubelet[2738]: E0711 05:26:02.234889 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.234905 kubelet[2738]: W0711 05:26:02.234901 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.234955 kubelet[2738]: E0711 05:26:02.234914 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.235121 kubelet[2738]: E0711 05:26:02.235104 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.235121 kubelet[2738]: W0711 05:26:02.235116 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.235168 kubelet[2738]: E0711 05:26:02.235126 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.235324 kubelet[2738]: E0711 05:26:02.235308 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.235324 kubelet[2738]: W0711 05:26:02.235320 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.235373 kubelet[2738]: E0711 05:26:02.235330 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:02.235526 kubelet[2738]: E0711 05:26:02.235510 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.235526 kubelet[2738]: W0711 05:26:02.235523 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.235570 kubelet[2738]: E0711 05:26:02.235532 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.235707 kubelet[2738]: E0711 05:26:02.235692 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.235707 kubelet[2738]: W0711 05:26:02.235704 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.235761 kubelet[2738]: E0711 05:26:02.235713 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.235925 kubelet[2738]: E0711 05:26:02.235909 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.235925 kubelet[2738]: W0711 05:26:02.235923 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.235970 kubelet[2738]: E0711 05:26:02.235933 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.236127 kubelet[2738]: E0711 05:26:02.236109 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.236127 kubelet[2738]: W0711 05:26:02.236122 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.236180 kubelet[2738]: E0711 05:26:02.236131 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.236333 kubelet[2738]: E0711 05:26:02.236318 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.236333 kubelet[2738]: W0711 05:26:02.236330 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.236379 kubelet[2738]: E0711 05:26:02.236339 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:02.236514 kubelet[2738]: E0711 05:26:02.236498 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.236514 kubelet[2738]: W0711 05:26:02.236510 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.236557 kubelet[2738]: E0711 05:26:02.236520 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.236689 kubelet[2738]: E0711 05:26:02.236673 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.236689 kubelet[2738]: W0711 05:26:02.236688 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.236749 kubelet[2738]: E0711 05:26:02.236697 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.236922 kubelet[2738]: E0711 05:26:02.236904 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.236922 kubelet[2738]: W0711 05:26:02.236917 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.236970 kubelet[2738]: E0711 05:26:02.236928 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.246252 kubelet[2738]: E0711 05:26:02.246224 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.246252 kubelet[2738]: W0711 05:26:02.246241 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.246314 kubelet[2738]: E0711 05:26:02.246255 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:02.246314 kubelet[2738]: I0711 05:26:02.246284 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv4vv\" (UniqueName: \"kubernetes.io/projected/a236dc34-4c45-4ad9-84b1-58388e163ac6-kube-api-access-rv4vv\") pod \"csi-node-driver-2m4jb\" (UID: \"a236dc34-4c45-4ad9-84b1-58388e163ac6\") " pod="calico-system/csi-node-driver-2m4jb" Jul 11 05:26:02.246556 kubelet[2738]: E0711 05:26:02.246536 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.246556 kubelet[2738]: W0711 05:26:02.246550 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.246617 kubelet[2738]: E0711 05:26:02.246568 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.246617 kubelet[2738]: I0711 05:26:02.246585 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a236dc34-4c45-4ad9-84b1-58388e163ac6-varrun\") pod \"csi-node-driver-2m4jb\" (UID: \"a236dc34-4c45-4ad9-84b1-58388e163ac6\") " pod="calico-system/csi-node-driver-2m4jb" Jul 11 05:26:02.246899 kubelet[2738]: E0711 05:26:02.246859 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.246899 kubelet[2738]: W0711 05:26:02.246883 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.247110 kubelet[2738]: E0711 05:26:02.246918 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.247110 kubelet[2738]: I0711 05:26:02.246952 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a236dc34-4c45-4ad9-84b1-58388e163ac6-kubelet-dir\") pod \"csi-node-driver-2m4jb\" (UID: \"a236dc34-4c45-4ad9-84b1-58388e163ac6\") " pod="calico-system/csi-node-driver-2m4jb" Jul 11 05:26:02.247243 kubelet[2738]: E0711 05:26:02.247224 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.247243 kubelet[2738]: W0711 05:26:02.247236 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.247302 kubelet[2738]: E0711 05:26:02.247250 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:02.247302 kubelet[2738]: I0711 05:26:02.247265 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a236dc34-4c45-4ad9-84b1-58388e163ac6-registration-dir\") pod \"csi-node-driver-2m4jb\" (UID: \"a236dc34-4c45-4ad9-84b1-58388e163ac6\") " pod="calico-system/csi-node-driver-2m4jb" Jul 11 05:26:02.247496 kubelet[2738]: E0711 05:26:02.247468 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.247496 kubelet[2738]: W0711 05:26:02.247484 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.247565 kubelet[2738]: E0711 05:26:02.247502 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.247699 kubelet[2738]: E0711 05:26:02.247682 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.247699 kubelet[2738]: W0711 05:26:02.247694 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.247806 kubelet[2738]: E0711 05:26:02.247709 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.247939 kubelet[2738]: E0711 05:26:02.247920 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.247939 kubelet[2738]: W0711 05:26:02.247933 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.248013 kubelet[2738]: E0711 05:26:02.247949 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.248148 kubelet[2738]: E0711 05:26:02.248130 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.248148 kubelet[2738]: W0711 05:26:02.248142 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.248210 kubelet[2738]: E0711 05:26:02.248157 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:02.248369 kubelet[2738]: E0711 05:26:02.248350 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.248369 kubelet[2738]: W0711 05:26:02.248362 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.248436 kubelet[2738]: E0711 05:26:02.248379 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.248591 kubelet[2738]: E0711 05:26:02.248571 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.248591 kubelet[2738]: W0711 05:26:02.248585 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.248650 kubelet[2738]: E0711 05:26:02.248627 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.248856 kubelet[2738]: E0711 05:26:02.248830 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.248856 kubelet[2738]: W0711 05:26:02.248847 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.248951 kubelet[2738]: E0711 05:26:02.248878 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.249087 kubelet[2738]: E0711 05:26:02.249069 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.249087 kubelet[2738]: W0711 05:26:02.249082 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.249131 kubelet[2738]: E0711 05:26:02.249100 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:02.249131 kubelet[2738]: I0711 05:26:02.249122 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a236dc34-4c45-4ad9-84b1-58388e163ac6-socket-dir\") pod \"csi-node-driver-2m4jb\" (UID: \"a236dc34-4c45-4ad9-84b1-58388e163ac6\") " pod="calico-system/csi-node-driver-2m4jb" Jul 11 05:26:02.249342 kubelet[2738]: E0711 05:26:02.249316 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.249342 kubelet[2738]: W0711 05:26:02.249332 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.249342 kubelet[2738]: E0711 05:26:02.249344 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.249539 kubelet[2738]: E0711 05:26:02.249519 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.249539 kubelet[2738]: W0711 05:26:02.249532 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.249622 kubelet[2738]: E0711 05:26:02.249542 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.249743 kubelet[2738]: E0711 05:26:02.249713 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.249743 kubelet[2738]: W0711 05:26:02.249722 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.249818 kubelet[2738]: E0711 05:26:02.249747 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.252771 containerd[1591]: time="2025-07-11T05:26:02.252685089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wsjmd,Uid:e74db243-d4bf-4acd-aa80-7bdcc2999025,Namespace:calico-system,Attempt:0,}" Jul 11 05:26:02.280384 containerd[1591]: time="2025-07-11T05:26:02.280266363Z" level=info msg="connecting to shim e5becd222bc31b97a7744efa26e4e4724a886da886ad66e7044a370c17c2a039" address="unix:///run/containerd/s/669e33e5383caaa1d6ce0dc5ac63e7aa79b9adac8c3b8ae28ab64a0d2f42d545" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:26:02.310916 systemd[1]: Started cri-containerd-e5becd222bc31b97a7744efa26e4e4724a886da886ad66e7044a370c17c2a039.scope - libcontainer container e5becd222bc31b97a7744efa26e4e4724a886da886ad66e7044a370c17c2a039. 
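The FlexVolume failures flooding this part of the log are one root cause reported three ways: the Calico flexvol driver binary is not yet installed at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds ("executable file not found in $PATH"), so the driver's "init" call returns empty output, and unmarshalling an empty byte slice with Go's encoding/json is what produces "unexpected end of JSON input", which the plugin prober then surfaces as the probing error. A minimal reproduction of that last step; the driverStatus struct here is a stand-in, not the kubelet's actual type.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus is a stand-in for whatever structure the kubelet expects back
// from a FlexVolume driver's "init" call; its exact shape is irrelevant to
// reproducing the error below.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	// The driver binary is missing, so the "init" call produced no output.
	output := []byte("")

	var st driverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		// Prints: unexpected end of JSON input, the same error string that
		// driver-call.go logs repeatedly above.
		fmt.Println(err)
	}
}
```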
Jul 11 05:26:02.340657 containerd[1591]: time="2025-07-11T05:26:02.340610996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wsjmd,Uid:e74db243-d4bf-4acd-aa80-7bdcc2999025,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5becd222bc31b97a7744efa26e4e4724a886da886ad66e7044a370c17c2a039\"" Jul 11 05:26:02.350099 kubelet[2738]: E0711 05:26:02.350070 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.350099 kubelet[2738]: W0711 05:26:02.350093 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.350253 kubelet[2738]: E0711 05:26:02.350117 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.350396 kubelet[2738]: E0711 05:26:02.350371 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.350396 kubelet[2738]: W0711 05:26:02.350385 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.350448 kubelet[2738]: E0711 05:26:02.350402 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.350686 kubelet[2738]: E0711 05:26:02.350658 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.350716 kubelet[2738]: W0711 05:26:02.350683 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.350765 kubelet[2738]: E0711 05:26:02.350716 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.350945 kubelet[2738]: E0711 05:26:02.350929 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.350945 kubelet[2738]: W0711 05:26:02.350940 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.350998 kubelet[2738]: E0711 05:26:02.350954 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:02.351186 kubelet[2738]: E0711 05:26:02.351169 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.351186 kubelet[2738]: W0711 05:26:02.351179 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.351267 kubelet[2738]: E0711 05:26:02.351191 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.351411 kubelet[2738]: E0711 05:26:02.351396 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.351411 kubelet[2738]: W0711 05:26:02.351406 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.351466 kubelet[2738]: E0711 05:26:02.351418 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.351618 kubelet[2738]: E0711 05:26:02.351604 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.351618 kubelet[2738]: W0711 05:26:02.351614 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.351686 kubelet[2738]: E0711 05:26:02.351628 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.352030 kubelet[2738]: E0711 05:26:02.351979 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.352030 kubelet[2738]: W0711 05:26:02.352000 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.352202 kubelet[2738]: E0711 05:26:02.352043 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.352226 kubelet[2738]: E0711 05:26:02.352219 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.352248 kubelet[2738]: W0711 05:26:02.352227 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.352248 kubelet[2738]: E0711 05:26:02.352243 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:02.352449 kubelet[2738]: E0711 05:26:02.352387 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.352449 kubelet[2738]: W0711 05:26:02.352394 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.352449 kubelet[2738]: E0711 05:26:02.352408 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.352674 kubelet[2738]: E0711 05:26:02.352656 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.352674 kubelet[2738]: W0711 05:26:02.352670 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.352815 kubelet[2738]: E0711 05:26:02.352685 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.352905 kubelet[2738]: E0711 05:26:02.352890 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.352905 kubelet[2738]: W0711 05:26:02.352900 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.352959 kubelet[2738]: E0711 05:26:02.352914 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.353147 kubelet[2738]: E0711 05:26:02.353132 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.353147 kubelet[2738]: W0711 05:26:02.353142 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.353215 kubelet[2738]: E0711 05:26:02.353175 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.353317 kubelet[2738]: E0711 05:26:02.353299 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.353317 kubelet[2738]: W0711 05:26:02.353310 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.353385 kubelet[2738]: E0711 05:26:02.353364 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:02.353528 kubelet[2738]: E0711 05:26:02.353514 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.353528 kubelet[2738]: W0711 05:26:02.353526 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.353591 kubelet[2738]: E0711 05:26:02.353538 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.353753 kubelet[2738]: E0711 05:26:02.353722 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.353753 kubelet[2738]: W0711 05:26:02.353749 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.353839 kubelet[2738]: E0711 05:26:02.353763 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.353958 kubelet[2738]: E0711 05:26:02.353941 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.353958 kubelet[2738]: W0711 05:26:02.353952 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.354047 kubelet[2738]: E0711 05:26:02.353964 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.354230 kubelet[2738]: E0711 05:26:02.354206 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.354230 kubelet[2738]: W0711 05:26:02.354219 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.354297 kubelet[2738]: E0711 05:26:02.354278 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.354538 kubelet[2738]: E0711 05:26:02.354522 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.354538 kubelet[2738]: W0711 05:26:02.354534 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.354629 kubelet[2738]: E0711 05:26:02.354549 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:02.354759 kubelet[2738]: E0711 05:26:02.354739 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.354759 kubelet[2738]: W0711 05:26:02.354752 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.354845 kubelet[2738]: E0711 05:26:02.354766 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.354978 kubelet[2738]: E0711 05:26:02.354940 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.354978 kubelet[2738]: W0711 05:26:02.354952 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.355060 kubelet[2738]: E0711 05:26:02.354983 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.355360 kubelet[2738]: E0711 05:26:02.355130 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.355360 kubelet[2738]: W0711 05:26:02.355143 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.355360 kubelet[2738]: E0711 05:26:02.355157 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.355360 kubelet[2738]: E0711 05:26:02.355314 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.355360 kubelet[2738]: W0711 05:26:02.355321 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.355360 kubelet[2738]: E0711 05:26:02.355332 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.355549 kubelet[2738]: E0711 05:26:02.355522 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.355549 kubelet[2738]: W0711 05:26:02.355538 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.355597 kubelet[2738]: E0711 05:26:02.355552 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:02.356762 kubelet[2738]: E0711 05:26:02.355969 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.356762 kubelet[2738]: W0711 05:26:02.355983 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.356762 kubelet[2738]: E0711 05:26:02.355994 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:02.364065 kubelet[2738]: E0711 05:26:02.364032 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:02.364065 kubelet[2738]: W0711 05:26:02.364056 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:02.364065 kubelet[2738]: E0711 05:26:02.364076 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:03.571639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211505604.mount: Deactivated successfully. Jul 11 05:26:04.342087 kubelet[2738]: E0711 05:26:04.342044 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2m4jb" podUID="a236dc34-4c45-4ad9-84b1-58388e163ac6" Jul 11 05:26:04.699813 containerd[1591]: time="2025-07-11T05:26:04.699658395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:04.700442 containerd[1591]: time="2025-07-11T05:26:04.700400858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 11 05:26:04.701655 containerd[1591]: time="2025-07-11T05:26:04.701610986Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:04.703807 containerd[1591]: time="2025-07-11T05:26:04.703767943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:04.704223 containerd[1591]: time="2025-07-11T05:26:04.704191594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.665659049s" Jul 11 05:26:04.704223 containerd[1591]: time="2025-07-11T05:26:04.704218575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 11 
05:26:04.705214 containerd[1591]: time="2025-07-11T05:26:04.705170364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 11 05:26:04.713758 containerd[1591]: time="2025-07-11T05:26:04.713636187Z" level=info msg="CreateContainer within sandbox \"9d86264ebf6397893d9b9aef92e8d302194a412261a9cbbb463d76aacef9622d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 11 05:26:04.722068 containerd[1591]: time="2025-07-11T05:26:04.722014043Z" level=info msg="Container 50dd0194753bb4a571ea33dc1d63ea8032435f064c83017379724e6522e23293: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:26:04.729747 containerd[1591]: time="2025-07-11T05:26:04.729684292Z" level=info msg="CreateContainer within sandbox \"9d86264ebf6397893d9b9aef92e8d302194a412261a9cbbb463d76aacef9622d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"50dd0194753bb4a571ea33dc1d63ea8032435f064c83017379724e6522e23293\"" Jul 11 05:26:04.730276 containerd[1591]: time="2025-07-11T05:26:04.730244161Z" level=info msg="StartContainer for \"50dd0194753bb4a571ea33dc1d63ea8032435f064c83017379724e6522e23293\"" Jul 11 05:26:04.731356 containerd[1591]: time="2025-07-11T05:26:04.731321366Z" level=info msg="connecting to shim 50dd0194753bb4a571ea33dc1d63ea8032435f064c83017379724e6522e23293" address="unix:///run/containerd/s/3883876b551b54d2bbcb2076608511ac4a501f0a9b6e6538d3dd771f0f3a4ada" protocol=ttrpc version=3 Jul 11 05:26:04.754869 systemd[1]: Started cri-containerd-50dd0194753bb4a571ea33dc1d63ea8032435f064c83017379724e6522e23293.scope - libcontainer container 50dd0194753bb4a571ea33dc1d63ea8032435f064c83017379724e6522e23293. Jul 11 05:26:04.806881 containerd[1591]: time="2025-07-11T05:26:04.806838384Z" level=info msg="StartContainer for \"50dd0194753bb4a571ea33dc1d63ea8032435f064c83017379724e6522e23293\" returns successfully" Jul 11 05:26:05.457377 kubelet[2738]: E0711 05:26:05.457323 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.457377 kubelet[2738]: W0711 05:26:05.457346 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.457377 kubelet[2738]: E0711 05:26:05.457369 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.457885 kubelet[2738]: E0711 05:26:05.457597 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.457885 kubelet[2738]: W0711 05:26:05.457606 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.457885 kubelet[2738]: E0711 05:26:05.457615 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:05.457885 kubelet[2738]: E0711 05:26:05.457853 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.457885 kubelet[2738]: W0711 05:26:05.457861 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.457885 kubelet[2738]: E0711 05:26:05.457870 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.458124 kubelet[2738]: E0711 05:26:05.458097 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.458124 kubelet[2738]: W0711 05:26:05.458110 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.458124 kubelet[2738]: E0711 05:26:05.458121 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.458316 kubelet[2738]: E0711 05:26:05.458300 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.458316 kubelet[2738]: W0711 05:26:05.458312 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.458371 kubelet[2738]: E0711 05:26:05.458324 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.458521 kubelet[2738]: E0711 05:26:05.458505 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.458521 kubelet[2738]: W0711 05:26:05.458517 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.458568 kubelet[2738]: E0711 05:26:05.458526 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.458683 kubelet[2738]: E0711 05:26:05.458665 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.458683 kubelet[2738]: W0711 05:26:05.458676 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.458683 kubelet[2738]: E0711 05:26:05.458684 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:05.458867 kubelet[2738]: E0711 05:26:05.458852 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.458867 kubelet[2738]: W0711 05:26:05.458862 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.458918 kubelet[2738]: E0711 05:26:05.458870 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.459076 kubelet[2738]: E0711 05:26:05.459052 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.459076 kubelet[2738]: W0711 05:26:05.459064 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.459076 kubelet[2738]: E0711 05:26:05.459071 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.459209 kubelet[2738]: E0711 05:26:05.459195 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.459209 kubelet[2738]: W0711 05:26:05.459204 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.459259 kubelet[2738]: E0711 05:26:05.459212 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.459347 kubelet[2738]: E0711 05:26:05.459334 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.459347 kubelet[2738]: W0711 05:26:05.459343 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.459401 kubelet[2738]: E0711 05:26:05.459350 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.459484 kubelet[2738]: E0711 05:26:05.459470 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.459484 kubelet[2738]: W0711 05:26:05.459481 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.459531 kubelet[2738]: E0711 05:26:05.459489 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:05.459632 kubelet[2738]: E0711 05:26:05.459618 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.459632 kubelet[2738]: W0711 05:26:05.459628 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.459683 kubelet[2738]: E0711 05:26:05.459635 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.459808 kubelet[2738]: E0711 05:26:05.459794 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.459808 kubelet[2738]: W0711 05:26:05.459803 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.459868 kubelet[2738]: E0711 05:26:05.459811 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.459973 kubelet[2738]: E0711 05:26:05.459959 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.459973 kubelet[2738]: W0711 05:26:05.459968 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.460032 kubelet[2738]: E0711 05:26:05.459975 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.476405 kubelet[2738]: E0711 05:26:05.476370 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.476405 kubelet[2738]: W0711 05:26:05.476393 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.476485 kubelet[2738]: E0711 05:26:05.476414 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.476687 kubelet[2738]: E0711 05:26:05.476665 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.476687 kubelet[2738]: W0711 05:26:05.476677 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.476767 kubelet[2738]: E0711 05:26:05.476693 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:05.476913 kubelet[2738]: E0711 05:26:05.476890 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.476913 kubelet[2738]: W0711 05:26:05.476902 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.476963 kubelet[2738]: E0711 05:26:05.476916 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.477137 kubelet[2738]: E0711 05:26:05.477121 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.477137 kubelet[2738]: W0711 05:26:05.477131 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.477200 kubelet[2738]: E0711 05:26:05.477144 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.477348 kubelet[2738]: E0711 05:26:05.477333 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.477348 kubelet[2738]: W0711 05:26:05.477343 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.477394 kubelet[2738]: E0711 05:26:05.477358 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.477538 kubelet[2738]: E0711 05:26:05.477523 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.477538 kubelet[2738]: W0711 05:26:05.477532 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.477597 kubelet[2738]: E0711 05:26:05.477546 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.477769 kubelet[2738]: E0711 05:26:05.477752 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.477769 kubelet[2738]: W0711 05:26:05.477763 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.477852 kubelet[2738]: E0711 05:26:05.477804 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:05.478057 kubelet[2738]: E0711 05:26:05.478024 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.478081 kubelet[2738]: W0711 05:26:05.478049 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.478111 kubelet[2738]: E0711 05:26:05.478087 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.478255 kubelet[2738]: E0711 05:26:05.478242 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.478255 kubelet[2738]: W0711 05:26:05.478251 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.478306 kubelet[2738]: E0711 05:26:05.478279 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.478417 kubelet[2738]: E0711 05:26:05.478406 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.478417 kubelet[2738]: W0711 05:26:05.478414 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.478465 kubelet[2738]: E0711 05:26:05.478434 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.478626 kubelet[2738]: E0711 05:26:05.478615 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.478626 kubelet[2738]: W0711 05:26:05.478624 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.478671 kubelet[2738]: E0711 05:26:05.478635 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.478827 kubelet[2738]: E0711 05:26:05.478816 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.478827 kubelet[2738]: W0711 05:26:05.478825 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.478878 kubelet[2738]: E0711 05:26:05.478836 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:05.479038 kubelet[2738]: E0711 05:26:05.479026 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.479038 kubelet[2738]: W0711 05:26:05.479034 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.479090 kubelet[2738]: E0711 05:26:05.479047 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.479319 kubelet[2738]: E0711 05:26:05.479301 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.479319 kubelet[2738]: W0711 05:26:05.479314 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.479376 kubelet[2738]: E0711 05:26:05.479325 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.479498 kubelet[2738]: E0711 05:26:05.479486 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.479522 kubelet[2738]: W0711 05:26:05.479498 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.479522 kubelet[2738]: E0711 05:26:05.479511 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.479688 kubelet[2738]: E0711 05:26:05.479677 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.479688 kubelet[2738]: W0711 05:26:05.479685 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.479742 kubelet[2738]: E0711 05:26:05.479693 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.479887 kubelet[2738]: E0711 05:26:05.479876 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.479887 kubelet[2738]: W0711 05:26:05.479885 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.479927 kubelet[2738]: E0711 05:26:05.479892 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:26:05.480235 kubelet[2738]: E0711 05:26:05.480216 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:26:05.480235 kubelet[2738]: W0711 05:26:05.480226 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:26:05.480235 kubelet[2738]: E0711 05:26:05.480234 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:26:05.957324 containerd[1591]: time="2025-07-11T05:26:05.957272670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:05.958284 containerd[1591]: time="2025-07-11T05:26:05.958207427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 11 05:26:05.959560 containerd[1591]: time="2025-07-11T05:26:05.959525006Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:05.961622 containerd[1591]: time="2025-07-11T05:26:05.961587913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:05.962484 containerd[1591]: time="2025-07-11T05:26:05.962430144Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.257222169s" Jul 11 05:26:05.962484 containerd[1591]: time="2025-07-11T05:26:05.962466924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 11 05:26:05.964375 containerd[1591]: time="2025-07-11T05:26:05.964348018Z" level=info msg="CreateContainer within sandbox \"e5becd222bc31b97a7744efa26e4e4724a886da886ad66e7044a370c17c2a039\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 05:26:05.973890 containerd[1591]: time="2025-07-11T05:26:05.973845792Z" level=info msg="Container d7ea8f7591d5a1ac7cd7610695c72b6859607cc53df1d630774221efd38ed3a1: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:26:05.983668 containerd[1591]: time="2025-07-11T05:26:05.983619538Z" level=info msg="CreateContainer within sandbox \"e5becd222bc31b97a7744efa26e4e4724a886da886ad66e7044a370c17c2a039\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d7ea8f7591d5a1ac7cd7610695c72b6859607cc53df1d630774221efd38ed3a1\"" Jul 11 05:26:05.984107 containerd[1591]: time="2025-07-11T05:26:05.984084006Z" level=info msg="StartContainer for \"d7ea8f7591d5a1ac7cd7610695c72b6859607cc53df1d630774221efd38ed3a1\"" Jul 11 05:26:05.985620 containerd[1591]: time="2025-07-11T05:26:05.985588247Z" level=info msg="connecting to 
shim d7ea8f7591d5a1ac7cd7610695c72b6859607cc53df1d630774221efd38ed3a1" address="unix:///run/containerd/s/669e33e5383caaa1d6ce0dc5ac63e7aa79b9adac8c3b8ae28ab64a0d2f42d545" protocol=ttrpc version=3 Jul 11 05:26:06.009952 systemd[1]: Started cri-containerd-d7ea8f7591d5a1ac7cd7610695c72b6859607cc53df1d630774221efd38ed3a1.scope - libcontainer container d7ea8f7591d5a1ac7cd7610695c72b6859607cc53df1d630774221efd38ed3a1. Jul 11 05:26:06.057398 containerd[1591]: time="2025-07-11T05:26:06.057236462Z" level=info msg="StartContainer for \"d7ea8f7591d5a1ac7cd7610695c72b6859607cc53df1d630774221efd38ed3a1\" returns successfully" Jul 11 05:26:06.066745 systemd[1]: cri-containerd-d7ea8f7591d5a1ac7cd7610695c72b6859607cc53df1d630774221efd38ed3a1.scope: Deactivated successfully. Jul 11 05:26:06.068403 containerd[1591]: time="2025-07-11T05:26:06.068363394Z" level=info msg="received exit event container_id:\"d7ea8f7591d5a1ac7cd7610695c72b6859607cc53df1d630774221efd38ed3a1\" id:\"d7ea8f7591d5a1ac7cd7610695c72b6859607cc53df1d630774221efd38ed3a1\" pid:3440 exited_at:{seconds:1752211566 nanos:67882347}" Jul 11 05:26:06.068497 containerd[1591]: time="2025-07-11T05:26:06.068454116Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7ea8f7591d5a1ac7cd7610695c72b6859607cc53df1d630774221efd38ed3a1\" id:\"d7ea8f7591d5a1ac7cd7610695c72b6859607cc53df1d630774221efd38ed3a1\" pid:3440 exited_at:{seconds:1752211566 nanos:67882347}" Jul 11 05:26:06.090503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7ea8f7591d5a1ac7cd7610695c72b6859607cc53df1d630774221efd38ed3a1-rootfs.mount: Deactivated successfully. Jul 11 05:26:06.342836 kubelet[2738]: E0711 05:26:06.342763 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2m4jb" podUID="a236dc34-4c45-4ad9-84b1-58388e163ac6" Jul 11 05:26:06.402140 kubelet[2738]: I0711 05:26:06.402098 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 05:26:06.416020 kubelet[2738]: I0711 05:26:06.415628 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57c4df8565-8vvfr" podStartSLOduration=2.748769235 podStartE2EDuration="5.415603938s" podCreationTimestamp="2025-07-11 05:26:01 +0000 UTC" firstStartedPulling="2025-07-11 05:26:02.038163606 +0000 UTC m=+16.798352664" lastFinishedPulling="2025-07-11 05:26:04.704998319 +0000 UTC m=+19.465187367" observedRunningTime="2025-07-11 05:26:05.407894928 +0000 UTC m=+20.168083976" watchObservedRunningTime="2025-07-11 05:26:06.415603938 +0000 UTC m=+21.175792986" Jul 11 05:26:07.406534 containerd[1591]: time="2025-07-11T05:26:07.406449973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 05:26:08.342525 kubelet[2738]: E0711 05:26:08.342460 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2m4jb" podUID="a236dc34-4c45-4ad9-84b1-58388e163ac6" Jul 11 05:26:10.342615 kubelet[2738]: E0711 05:26:10.342548 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="calico-system/csi-node-driver-2m4jb" podUID="a236dc34-4c45-4ad9-84b1-58388e163ac6" Jul 11 05:26:12.342597 kubelet[2738]: E0711 05:26:12.342540 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2m4jb" podUID="a236dc34-4c45-4ad9-84b1-58388e163ac6" Jul 11 05:26:13.384379 containerd[1591]: time="2025-07-11T05:26:13.384321999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:13.385093 containerd[1591]: time="2025-07-11T05:26:13.385036835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 11 05:26:13.386241 containerd[1591]: time="2025-07-11T05:26:13.386198363Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:13.388576 containerd[1591]: time="2025-07-11T05:26:13.388541106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:13.389356 containerd[1591]: time="2025-07-11T05:26:13.389327788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 5.982795449s" Jul 11 05:26:13.389397 containerd[1591]: time="2025-07-11T05:26:13.389353646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 11 05:26:13.391376 containerd[1591]: time="2025-07-11T05:26:13.391335439Z" level=info msg="CreateContainer within sandbox \"e5becd222bc31b97a7744efa26e4e4724a886da886ad66e7044a370c17c2a039\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 05:26:13.401064 containerd[1591]: time="2025-07-11T05:26:13.401014520Z" level=info msg="Container f35de983dcedbda11ded8bdcccba8233b2942ea690dc43924411c53e4683dc84: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:26:13.410697 containerd[1591]: time="2025-07-11T05:26:13.410662302Z" level=info msg="CreateContainer within sandbox \"e5becd222bc31b97a7744efa26e4e4724a886da886ad66e7044a370c17c2a039\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f35de983dcedbda11ded8bdcccba8233b2942ea690dc43924411c53e4683dc84\"" Jul 11 05:26:13.411235 containerd[1591]: time="2025-07-11T05:26:13.411202229Z" level=info msg="StartContainer for \"f35de983dcedbda11ded8bdcccba8233b2942ea690dc43924411c53e4683dc84\"" Jul 11 05:26:13.412474 containerd[1591]: time="2025-07-11T05:26:13.412437225Z" level=info msg="connecting to shim f35de983dcedbda11ded8bdcccba8233b2942ea690dc43924411c53e4683dc84" address="unix:///run/containerd/s/669e33e5383caaa1d6ce0dc5ac63e7aa79b9adac8c3b8ae28ab64a0d2f42d545" protocol=ttrpc version=3 Jul 11 05:26:13.438913 systemd[1]: Started 
cri-containerd-f35de983dcedbda11ded8bdcccba8233b2942ea690dc43924411c53e4683dc84.scope - libcontainer container f35de983dcedbda11ded8bdcccba8233b2942ea690dc43924411c53e4683dc84. Jul 11 05:26:13.480815 containerd[1591]: time="2025-07-11T05:26:13.480773729Z" level=info msg="StartContainer for \"f35de983dcedbda11ded8bdcccba8233b2942ea690dc43924411c53e4683dc84\" returns successfully" Jul 11 05:26:14.342393 kubelet[2738]: E0711 05:26:14.342318 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2m4jb" podUID="a236dc34-4c45-4ad9-84b1-58388e163ac6" Jul 11 05:26:14.842772 containerd[1591]: time="2025-07-11T05:26:14.842666986Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 05:26:14.845357 systemd[1]: cri-containerd-f35de983dcedbda11ded8bdcccba8233b2942ea690dc43924411c53e4683dc84.scope: Deactivated successfully. Jul 11 05:26:14.845885 systemd[1]: cri-containerd-f35de983dcedbda11ded8bdcccba8233b2942ea690dc43924411c53e4683dc84.scope: Consumed 611ms CPU time, 180.2M memory peak, 3.4M read from disk, 171.2M written to disk. Jul 11 05:26:14.847380 containerd[1591]: time="2025-07-11T05:26:14.847344884Z" level=info msg="received exit event container_id:\"f35de983dcedbda11ded8bdcccba8233b2942ea690dc43924411c53e4683dc84\" id:\"f35de983dcedbda11ded8bdcccba8233b2942ea690dc43924411c53e4683dc84\" pid:3502 exited_at:{seconds:1752211574 nanos:847114470}" Jul 11 05:26:14.847525 containerd[1591]: time="2025-07-11T05:26:14.847486410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f35de983dcedbda11ded8bdcccba8233b2942ea690dc43924411c53e4683dc84\" id:\"f35de983dcedbda11ded8bdcccba8233b2942ea690dc43924411c53e4683dc84\" pid:3502 exited_at:{seconds:1752211574 nanos:847114470}" Jul 11 05:26:14.872535 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f35de983dcedbda11ded8bdcccba8233b2942ea690dc43924411c53e4683dc84-rootfs.mount: Deactivated successfully. Jul 11 05:26:14.907853 kubelet[2738]: I0711 05:26:14.907801 2738 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 11 05:26:14.950774 systemd[1]: Created slice kubepods-burstable-pode6ef4965_81f9_473f_8191_161880640e2e.slice - libcontainer container kubepods-burstable-pode6ef4965_81f9_473f_8191_161880640e2e.slice. Jul 11 05:26:14.967513 systemd[1]: Created slice kubepods-besteffort-pod31b0dd7f_d0d7_4755_b27e_2f781cc6e274.slice - libcontainer container kubepods-besteffort-pod31b0dd7f_d0d7_4755_b27e_2f781cc6e274.slice. Jul 11 05:26:14.974280 systemd[1]: Created slice kubepods-burstable-pod6d43cbae_800f_4501_b753_018adbf45af6.slice - libcontainer container kubepods-burstable-pod6d43cbae_800f_4501_b753_018adbf45af6.slice. Jul 11 05:26:14.979656 systemd[1]: Created slice kubepods-besteffort-pod91c547ba_0068_4a1b_b094_6d5384369a11.slice - libcontainer container kubepods-besteffort-pod91c547ba_0068_4a1b_b094_6d5384369a11.slice. Jul 11 05:26:14.985841 systemd[1]: Created slice kubepods-besteffort-podd4f8ae59_9b75_4bfb_9233_81e4fbbf2d3c.slice - libcontainer container kubepods-besteffort-podd4f8ae59_9b75_4bfb_9233_81e4fbbf2d3c.slice. 
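One way to read the "Observed pod startup duration" entry above: podStartE2EDuration is the time from pod creation to the pod being observed running, and podStartSLOduration appears to be that same interval with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted; the monotonic offsets (m=+...) printed in the entry are consistent with exactly that arithmetic. A short Go sketch, using the offsets from the log entry for calico-typha-57c4df8565-8vvfr, is below (the interpretation is an assumption drawn from the numbers, not a statement of the tracker's implementation):

// Sketch: reproducing the arithmetic behind the "Observed pod startup
// duration" log entry. Values are the monotonic offsets from the entry.
package main

import (
	"fmt"
	"time"
)

func main() {
	firstStartedPulling := 16798352664 * time.Nanosecond // m=+16.798352664
	lastFinishedPulling := 19465187367 * time.Nanosecond // m=+19.465187367
	e2e := 5415603938 * time.Nanosecond                  // podStartE2EDuration="5.415603938s"

	imagePull := lastFinishedPulling - firstStartedPulling
	slo := e2e - imagePull

	fmt.Println(slo.Seconds()) // 2.748769235, matching podStartSLOduration
}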
Jul 11 05:26:14.992017 systemd[1]: Created slice kubepods-besteffort-pod1220c157_2296_4582_921e_541213f521b3.slice - libcontainer container kubepods-besteffort-pod1220c157_2296_4582_921e_541213f521b3.slice. Jul 11 05:26:14.996355 systemd[1]: Created slice kubepods-besteffort-pod205cd791_926f_4205_af66_55debe66fc3e.slice - libcontainer container kubepods-besteffort-pod205cd791_926f_4205_af66_55debe66fc3e.slice. Jul 11 05:26:15.046544 kubelet[2738]: I0711 05:26:15.046485 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj9gb\" (UniqueName: \"kubernetes.io/projected/31b0dd7f-d0d7-4755-b27e-2f781cc6e274-kube-api-access-nj9gb\") pod \"calico-apiserver-f96bbcdc6-68nnj\" (UID: \"31b0dd7f-d0d7-4755-b27e-2f781cc6e274\") " pod="calico-apiserver/calico-apiserver-f96bbcdc6-68nnj" Jul 11 05:26:15.046544 kubelet[2738]: I0711 05:26:15.046527 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1220c157-2296-4582-921e-541213f521b3-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-pjrfk\" (UID: \"1220c157-2296-4582-921e-541213f521b3\") " pod="calico-system/goldmane-768f4c5c69-pjrfk" Jul 11 05:26:15.046544 kubelet[2738]: I0711 05:26:15.046544 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d43cbae-800f-4501-b753-018adbf45af6-config-volume\") pod \"coredns-668d6bf9bc-dq2kh\" (UID: \"6d43cbae-800f-4501-b753-018adbf45af6\") " pod="kube-system/coredns-668d6bf9bc-dq2kh" Jul 11 05:26:15.046544 kubelet[2738]: I0711 05:26:15.046566 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf8n9\" (UniqueName: \"kubernetes.io/projected/91c547ba-0068-4a1b-b094-6d5384369a11-kube-api-access-wf8n9\") pod \"calico-apiserver-f96bbcdc6-p2npb\" (UID: \"91c547ba-0068-4a1b-b094-6d5384369a11\") " pod="calico-apiserver/calico-apiserver-f96bbcdc6-p2npb" Jul 11 05:26:15.046827 kubelet[2738]: I0711 05:26:15.046583 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4pz7\" (UniqueName: \"kubernetes.io/projected/1220c157-2296-4582-921e-541213f521b3-kube-api-access-q4pz7\") pod \"goldmane-768f4c5c69-pjrfk\" (UID: \"1220c157-2296-4582-921e-541213f521b3\") " pod="calico-system/goldmane-768f4c5c69-pjrfk" Jul 11 05:26:15.046827 kubelet[2738]: I0711 05:26:15.046599 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4f8ae59-9b75-4bfb-9233-81e4fbbf2d3c-tigera-ca-bundle\") pod \"calico-kube-controllers-848489d7b5-pdfzk\" (UID: \"d4f8ae59-9b75-4bfb-9233-81e4fbbf2d3c\") " pod="calico-system/calico-kube-controllers-848489d7b5-pdfzk" Jul 11 05:26:15.046827 kubelet[2738]: I0711 05:26:15.046616 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/91c547ba-0068-4a1b-b094-6d5384369a11-calico-apiserver-certs\") pod \"calico-apiserver-f96bbcdc6-p2npb\" (UID: \"91c547ba-0068-4a1b-b094-6d5384369a11\") " pod="calico-apiserver/calico-apiserver-f96bbcdc6-p2npb" Jul 11 05:26:15.046827 kubelet[2738]: I0711 05:26:15.046712 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/1220c157-2296-4582-921e-541213f521b3-config\") pod \"goldmane-768f4c5c69-pjrfk\" (UID: \"1220c157-2296-4582-921e-541213f521b3\") " pod="calico-system/goldmane-768f4c5c69-pjrfk" Jul 11 05:26:15.046938 kubelet[2738]: I0711 05:26:15.046829 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x46dv\" (UniqueName: \"kubernetes.io/projected/6d43cbae-800f-4501-b753-018adbf45af6-kube-api-access-x46dv\") pod \"coredns-668d6bf9bc-dq2kh\" (UID: \"6d43cbae-800f-4501-b753-018adbf45af6\") " pod="kube-system/coredns-668d6bf9bc-dq2kh" Jul 11 05:26:15.046938 kubelet[2738]: I0711 05:26:15.046873 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6ef4965-81f9-473f-8191-161880640e2e-config-volume\") pod \"coredns-668d6bf9bc-tlzld\" (UID: \"e6ef4965-81f9-473f-8191-161880640e2e\") " pod="kube-system/coredns-668d6bf9bc-tlzld" Jul 11 05:26:15.046938 kubelet[2738]: I0711 05:26:15.046895 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcn56\" (UniqueName: \"kubernetes.io/projected/e6ef4965-81f9-473f-8191-161880640e2e-kube-api-access-dcn56\") pod \"coredns-668d6bf9bc-tlzld\" (UID: \"e6ef4965-81f9-473f-8191-161880640e2e\") " pod="kube-system/coredns-668d6bf9bc-tlzld" Jul 11 05:26:15.047011 kubelet[2738]: I0711 05:26:15.046956 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31b0dd7f-d0d7-4755-b27e-2f781cc6e274-calico-apiserver-certs\") pod \"calico-apiserver-f96bbcdc6-68nnj\" (UID: \"31b0dd7f-d0d7-4755-b27e-2f781cc6e274\") " pod="calico-apiserver/calico-apiserver-f96bbcdc6-68nnj" Jul 11 05:26:15.047011 kubelet[2738]: I0711 05:26:15.046974 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1220c157-2296-4582-921e-541213f521b3-goldmane-key-pair\") pod \"goldmane-768f4c5c69-pjrfk\" (UID: \"1220c157-2296-4582-921e-541213f521b3\") " pod="calico-system/goldmane-768f4c5c69-pjrfk" Jul 11 05:26:15.047011 kubelet[2738]: I0711 05:26:15.046988 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/205cd791-926f-4205-af66-55debe66fc3e-whisker-ca-bundle\") pod \"whisker-76b888b94d-2tp2p\" (UID: \"205cd791-926f-4205-af66-55debe66fc3e\") " pod="calico-system/whisker-76b888b94d-2tp2p" Jul 11 05:26:15.047011 kubelet[2738]: I0711 05:26:15.047007 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fsfw\" (UniqueName: \"kubernetes.io/projected/205cd791-926f-4205-af66-55debe66fc3e-kube-api-access-8fsfw\") pod \"whisker-76b888b94d-2tp2p\" (UID: \"205cd791-926f-4205-af66-55debe66fc3e\") " pod="calico-system/whisker-76b888b94d-2tp2p" Jul 11 05:26:15.047105 kubelet[2738]: I0711 05:26:15.047022 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bbnx\" (UniqueName: \"kubernetes.io/projected/d4f8ae59-9b75-4bfb-9233-81e4fbbf2d3c-kube-api-access-2bbnx\") pod \"calico-kube-controllers-848489d7b5-pdfzk\" (UID: \"d4f8ae59-9b75-4bfb-9233-81e4fbbf2d3c\") " pod="calico-system/calico-kube-controllers-848489d7b5-pdfzk" Jul 11 
05:26:15.047105 kubelet[2738]: I0711 05:26:15.047083 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/205cd791-926f-4205-af66-55debe66fc3e-whisker-backend-key-pair\") pod \"whisker-76b888b94d-2tp2p\" (UID: \"205cd791-926f-4205-af66-55debe66fc3e\") " pod="calico-system/whisker-76b888b94d-2tp2p" Jul 11 05:26:15.257592 containerd[1591]: time="2025-07-11T05:26:15.257539102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tlzld,Uid:e6ef4965-81f9-473f-8191-161880640e2e,Namespace:kube-system,Attempt:0,}" Jul 11 05:26:15.272802 containerd[1591]: time="2025-07-11T05:26:15.272720250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f96bbcdc6-68nnj,Uid:31b0dd7f-d0d7-4755-b27e-2f781cc6e274,Namespace:calico-apiserver,Attempt:0,}" Jul 11 05:26:15.278268 containerd[1591]: time="2025-07-11T05:26:15.278233630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq2kh,Uid:6d43cbae-800f-4501-b753-018adbf45af6,Namespace:kube-system,Attempt:0,}" Jul 11 05:26:15.283298 containerd[1591]: time="2025-07-11T05:26:15.282989513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f96bbcdc6-p2npb,Uid:91c547ba-0068-4a1b-b094-6d5384369a11,Namespace:calico-apiserver,Attempt:0,}" Jul 11 05:26:15.292583 containerd[1591]: time="2025-07-11T05:26:15.291744764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-848489d7b5-pdfzk,Uid:d4f8ae59-9b75-4bfb-9233-81e4fbbf2d3c,Namespace:calico-system,Attempt:0,}" Jul 11 05:26:15.296062 containerd[1591]: time="2025-07-11T05:26:15.296023850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-pjrfk,Uid:1220c157-2296-4582-921e-541213f521b3,Namespace:calico-system,Attempt:0,}" Jul 11 05:26:15.299463 containerd[1591]: time="2025-07-11T05:26:15.299242129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76b888b94d-2tp2p,Uid:205cd791-926f-4205-af66-55debe66fc3e,Namespace:calico-system,Attempt:0,}" Jul 11 05:26:15.374196 containerd[1591]: time="2025-07-11T05:26:15.374129394Z" level=error msg="Failed to destroy network for sandbox \"3f0da620df9e4041ebc7895ab2ac9fdcb7a3592614aaff23ec3aecb44ca3bb69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.404414 containerd[1591]: time="2025-07-11T05:26:15.402892981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tlzld,Uid:e6ef4965-81f9-473f-8191-161880640e2e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f0da620df9e4041ebc7895ab2ac9fdcb7a3592614aaff23ec3aecb44ca3bb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.404414 containerd[1591]: time="2025-07-11T05:26:15.403944341Z" level=error msg="Failed to destroy network for sandbox \"a5f5d3535a512b8d3ee978bff6a26229715211e9894d87f5fbb54e25b9bcf0cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.407031 containerd[1591]: 
time="2025-07-11T05:26:15.406931403Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f96bbcdc6-p2npb,Uid:91c547ba-0068-4a1b-b094-6d5384369a11,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f5d3535a512b8d3ee978bff6a26229715211e9894d87f5fbb54e25b9bcf0cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.416279 containerd[1591]: time="2025-07-11T05:26:15.416092870Z" level=error msg="Failed to destroy network for sandbox \"deea43f949e0c30e2473f2184c4075b2dc6b04ac222b3347a2c6ca500fe9d25f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.416839 containerd[1591]: time="2025-07-11T05:26:15.416622878Z" level=error msg="Failed to destroy network for sandbox \"934b2686e3eaa9c96f263e91480ad1cd3aef36acba9ef06f2210d7f8e1de9e2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.417042 kubelet[2738]: E0711 05:26:15.416986 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f0da620df9e4041ebc7895ab2ac9fdcb7a3592614aaff23ec3aecb44ca3bb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.417365 kubelet[2738]: E0711 05:26:15.417083 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f0da620df9e4041ebc7895ab2ac9fdcb7a3592614aaff23ec3aecb44ca3bb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tlzld" Jul 11 05:26:15.417365 kubelet[2738]: E0711 05:26:15.417107 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f0da620df9e4041ebc7895ab2ac9fdcb7a3592614aaff23ec3aecb44ca3bb69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tlzld" Jul 11 05:26:15.417365 kubelet[2738]: E0711 05:26:15.417088 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f5d3535a512b8d3ee978bff6a26229715211e9894d87f5fbb54e25b9bcf0cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.417448 kubelet[2738]: E0711 05:26:15.417148 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tlzld_kube-system(e6ef4965-81f9-473f-8191-161880640e2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-tlzld_kube-system(e6ef4965-81f9-473f-8191-161880640e2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f0da620df9e4041ebc7895ab2ac9fdcb7a3592614aaff23ec3aecb44ca3bb69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tlzld" podUID="e6ef4965-81f9-473f-8191-161880640e2e" Jul 11 05:26:15.417448 kubelet[2738]: E0711 05:26:15.417175 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f5d3535a512b8d3ee978bff6a26229715211e9894d87f5fbb54e25b9bcf0cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f96bbcdc6-p2npb" Jul 11 05:26:15.417448 kubelet[2738]: E0711 05:26:15.417199 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f5d3535a512b8d3ee978bff6a26229715211e9894d87f5fbb54e25b9bcf0cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f96bbcdc6-p2npb" Jul 11 05:26:15.417557 kubelet[2738]: E0711 05:26:15.417242 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f96bbcdc6-p2npb_calico-apiserver(91c547ba-0068-4a1b-b094-6d5384369a11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f96bbcdc6-p2npb_calico-apiserver(91c547ba-0068-4a1b-b094-6d5384369a11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5f5d3535a512b8d3ee978bff6a26229715211e9894d87f5fbb54e25b9bcf0cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f96bbcdc6-p2npb" podUID="91c547ba-0068-4a1b-b094-6d5384369a11" Jul 11 05:26:15.423862 containerd[1591]: time="2025-07-11T05:26:15.423352387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76b888b94d-2tp2p,Uid:205cd791-926f-4205-af66-55debe66fc3e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"deea43f949e0c30e2473f2184c4075b2dc6b04ac222b3347a2c6ca500fe9d25f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.424449 kubelet[2738]: E0711 05:26:15.424348 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deea43f949e0c30e2473f2184c4075b2dc6b04ac222b3347a2c6ca500fe9d25f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.424449 kubelet[2738]: E0711 05:26:15.424433 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"deea43f949e0c30e2473f2184c4075b2dc6b04ac222b3347a2c6ca500fe9d25f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76b888b94d-2tp2p" Jul 11 05:26:15.424537 kubelet[2738]: E0711 05:26:15.424460 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deea43f949e0c30e2473f2184c4075b2dc6b04ac222b3347a2c6ca500fe9d25f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76b888b94d-2tp2p" Jul 11 05:26:15.424537 kubelet[2738]: E0711 05:26:15.424518 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-76b888b94d-2tp2p_calico-system(205cd791-926f-4205-af66-55debe66fc3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-76b888b94d-2tp2p_calico-system(205cd791-926f-4205-af66-55debe66fc3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"deea43f949e0c30e2473f2184c4075b2dc6b04ac222b3347a2c6ca500fe9d25f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76b888b94d-2tp2p" podUID="205cd791-926f-4205-af66-55debe66fc3e" Jul 11 05:26:15.424678 containerd[1591]: time="2025-07-11T05:26:15.424616536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-848489d7b5-pdfzk,Uid:d4f8ae59-9b75-4bfb-9233-81e4fbbf2d3c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"934b2686e3eaa9c96f263e91480ad1cd3aef36acba9ef06f2210d7f8e1de9e2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.425036 kubelet[2738]: E0711 05:26:15.424872 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"934b2686e3eaa9c96f263e91480ad1cd3aef36acba9ef06f2210d7f8e1de9e2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.425036 kubelet[2738]: E0711 05:26:15.424966 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"934b2686e3eaa9c96f263e91480ad1cd3aef36acba9ef06f2210d7f8e1de9e2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-848489d7b5-pdfzk" Jul 11 05:26:15.425036 kubelet[2738]: E0711 05:26:15.424992 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"934b2686e3eaa9c96f263e91480ad1cd3aef36acba9ef06f2210d7f8e1de9e2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-848489d7b5-pdfzk" Jul 11 05:26:15.425262 kubelet[2738]: E0711 05:26:15.425072 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-848489d7b5-pdfzk_calico-system(d4f8ae59-9b75-4bfb-9233-81e4fbbf2d3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-848489d7b5-pdfzk_calico-system(d4f8ae59-9b75-4bfb-9233-81e4fbbf2d3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"934b2686e3eaa9c96f263e91480ad1cd3aef36acba9ef06f2210d7f8e1de9e2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-848489d7b5-pdfzk" podUID="d4f8ae59-9b75-4bfb-9233-81e4fbbf2d3c" Jul 11 05:26:15.426925 containerd[1591]: time="2025-07-11T05:26:15.426781301Z" level=error msg="Failed to destroy network for sandbox \"6884704d0231e06a33560e07f08a19ffe9ae9d4d1c43a5f7955e2109be0d2491\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.428880 containerd[1591]: time="2025-07-11T05:26:15.428852029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f96bbcdc6-68nnj,Uid:31b0dd7f-d0d7-4755-b27e-2f781cc6e274,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6884704d0231e06a33560e07f08a19ffe9ae9d4d1c43a5f7955e2109be0d2491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.429221 kubelet[2738]: E0711 05:26:15.429188 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6884704d0231e06a33560e07f08a19ffe9ae9d4d1c43a5f7955e2109be0d2491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.429400 kubelet[2738]: E0711 05:26:15.429238 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6884704d0231e06a33560e07f08a19ffe9ae9d4d1c43a5f7955e2109be0d2491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f96bbcdc6-68nnj" Jul 11 05:26:15.429400 kubelet[2738]: E0711 05:26:15.429260 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6884704d0231e06a33560e07f08a19ffe9ae9d4d1c43a5f7955e2109be0d2491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f96bbcdc6-68nnj" Jul 11 05:26:15.429400 kubelet[2738]: E0711 05:26:15.429316 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-f96bbcdc6-68nnj_calico-apiserver(31b0dd7f-d0d7-4755-b27e-2f781cc6e274)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f96bbcdc6-68nnj_calico-apiserver(31b0dd7f-d0d7-4755-b27e-2f781cc6e274)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6884704d0231e06a33560e07f08a19ffe9ae9d4d1c43a5f7955e2109be0d2491\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f96bbcdc6-68nnj" podUID="31b0dd7f-d0d7-4755-b27e-2f781cc6e274" Jul 11 05:26:15.429776 containerd[1591]: time="2025-07-11T05:26:15.429683304Z" level=error msg="Failed to destroy network for sandbox \"812dd5dedbcd2da0a187dd7c887a46cd61d8ff34b2e051fe9c8bb05005c2ce79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.431168 containerd[1591]: time="2025-07-11T05:26:15.431142170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq2kh,Uid:6d43cbae-800f-4501-b753-018adbf45af6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"812dd5dedbcd2da0a187dd7c887a46cd61d8ff34b2e051fe9c8bb05005c2ce79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.431473 kubelet[2738]: E0711 05:26:15.431419 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"812dd5dedbcd2da0a187dd7c887a46cd61d8ff34b2e051fe9c8bb05005c2ce79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.431592 kubelet[2738]: E0711 05:26:15.431575 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"812dd5dedbcd2da0a187dd7c887a46cd61d8ff34b2e051fe9c8bb05005c2ce79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dq2kh" Jul 11 05:26:15.431821 kubelet[2738]: E0711 05:26:15.431791 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"812dd5dedbcd2da0a187dd7c887a46cd61d8ff34b2e051fe9c8bb05005c2ce79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dq2kh" Jul 11 05:26:15.431976 kubelet[2738]: E0711 05:26:15.431941 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dq2kh_kube-system(6d43cbae-800f-4501-b753-018adbf45af6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dq2kh_kube-system(6d43cbae-800f-4501-b753-018adbf45af6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"812dd5dedbcd2da0a187dd7c887a46cd61d8ff34b2e051fe9c8bb05005c2ce79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dq2kh" podUID="6d43cbae-800f-4501-b753-018adbf45af6" Jul 11 05:26:15.436148 containerd[1591]: time="2025-07-11T05:26:15.436102298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 05:26:15.445837 containerd[1591]: time="2025-07-11T05:26:15.445769606Z" level=error msg="Failed to destroy network for sandbox \"cce28be7b40c2ff550fbf649bfb12b209d5b82f71022d49428bbfa71e03086d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.447168 containerd[1591]: time="2025-07-11T05:26:15.447133624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-pjrfk,Uid:1220c157-2296-4582-921e-541213f521b3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce28be7b40c2ff550fbf649bfb12b209d5b82f71022d49428bbfa71e03086d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.447386 kubelet[2738]: E0711 05:26:15.447353 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce28be7b40c2ff550fbf649bfb12b209d5b82f71022d49428bbfa71e03086d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:15.447453 kubelet[2738]: E0711 05:26:15.447408 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce28be7b40c2ff550fbf649bfb12b209d5b82f71022d49428bbfa71e03086d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-pjrfk" Jul 11 05:26:15.447453 kubelet[2738]: E0711 05:26:15.447430 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce28be7b40c2ff550fbf649bfb12b209d5b82f71022d49428bbfa71e03086d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-pjrfk" Jul 11 05:26:15.447553 kubelet[2738]: E0711 05:26:15.447479 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-pjrfk_calico-system(1220c157-2296-4582-921e-541213f521b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-pjrfk_calico-system(1220c157-2296-4582-921e-541213f521b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cce28be7b40c2ff550fbf649bfb12b209d5b82f71022d49428bbfa71e03086d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-pjrfk" podUID="1220c157-2296-4582-921e-541213f521b3" Jul 11 05:26:15.872414 systemd[1]: run-netns-cni\x2d324c94aa\x2de67c\x2dad77\x2df92a\x2df9e6b085cb3f.mount: Deactivated successfully. Jul 11 05:26:16.347586 systemd[1]: Created slice kubepods-besteffort-poda236dc34_4c45_4ad9_84b1_58388e163ac6.slice - libcontainer container kubepods-besteffort-poda236dc34_4c45_4ad9_84b1_58388e163ac6.slice. Jul 11 05:26:16.350207 containerd[1591]: time="2025-07-11T05:26:16.350141316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2m4jb,Uid:a236dc34-4c45-4ad9-84b1-58388e163ac6,Namespace:calico-system,Attempt:0,}" Jul 11 05:26:16.649157 containerd[1591]: time="2025-07-11T05:26:16.649037578Z" level=error msg="Failed to destroy network for sandbox \"045b85bf1d219e7f9c5d349fceb5af6bfa401b746039e56c85338d46e368169d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:16.650717 containerd[1591]: time="2025-07-11T05:26:16.650644672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2m4jb,Uid:a236dc34-4c45-4ad9-84b1-58388e163ac6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"045b85bf1d219e7f9c5d349fceb5af6bfa401b746039e56c85338d46e368169d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:16.651038 kubelet[2738]: E0711 05:26:16.650982 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"045b85bf1d219e7f9c5d349fceb5af6bfa401b746039e56c85338d46e368169d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:26:16.651390 kubelet[2738]: E0711 05:26:16.651065 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"045b85bf1d219e7f9c5d349fceb5af6bfa401b746039e56c85338d46e368169d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2m4jb" Jul 11 05:26:16.651390 kubelet[2738]: E0711 05:26:16.651088 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"045b85bf1d219e7f9c5d349fceb5af6bfa401b746039e56c85338d46e368169d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2m4jb" Jul 11 05:26:16.651390 kubelet[2738]: E0711 05:26:16.651151 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2m4jb_calico-system(a236dc34-4c45-4ad9-84b1-58388e163ac6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2m4jb_calico-system(a236dc34-4c45-4ad9-84b1-58388e163ac6)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"045b85bf1d219e7f9c5d349fceb5af6bfa401b746039e56c85338d46e368169d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2m4jb" podUID="a236dc34-4c45-4ad9-84b1-58388e163ac6" Jul 11 05:26:16.652064 systemd[1]: run-netns-cni\x2d043ae222\x2dbc9f\x2dd4c9\x2ddd39\x2dca89f85851ea.mount: Deactivated successfully. Jul 11 05:26:16.673981 kubelet[2738]: I0711 05:26:16.673937 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 05:26:20.213596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount190871462.mount: Deactivated successfully. Jul 11 05:26:20.931681 containerd[1591]: time="2025-07-11T05:26:20.931607831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:20.932406 containerd[1591]: time="2025-07-11T05:26:20.932366758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 11 05:26:20.944456 containerd[1591]: time="2025-07-11T05:26:20.944400036Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:20.946339 containerd[1591]: time="2025-07-11T05:26:20.946301842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:20.946838 containerd[1591]: time="2025-07-11T05:26:20.946805699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 5.510252753s" Jul 11 05:26:20.946838 containerd[1591]: time="2025-07-11T05:26:20.946835546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 11 05:26:20.954641 containerd[1591]: time="2025-07-11T05:26:20.954586852Z" level=info msg="CreateContainer within sandbox \"e5becd222bc31b97a7744efa26e4e4724a886da886ad66e7044a370c17c2a039\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 05:26:20.983814 containerd[1591]: time="2025-07-11T05:26:20.983759720Z" level=info msg="Container bd79ce0464254676bd79bf9f896d4838e1b4306d9750d8fab8533eafdb62acb7: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:26:20.999237 containerd[1591]: time="2025-07-11T05:26:20.999177662Z" level=info msg="CreateContainer within sandbox \"e5becd222bc31b97a7744efa26e4e4724a886da886ad66e7044a370c17c2a039\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bd79ce0464254676bd79bf9f896d4838e1b4306d9750d8fab8533eafdb62acb7\"" Jul 11 05:26:20.999965 containerd[1591]: time="2025-07-11T05:26:20.999794872Z" level=info msg="StartContainer for \"bd79ce0464254676bd79bf9f896d4838e1b4306d9750d8fab8533eafdb62acb7\"" Jul 11 05:26:21.001238 containerd[1591]: time="2025-07-11T05:26:21.001211255Z" level=info msg="connecting to shim 
bd79ce0464254676bd79bf9f896d4838e1b4306d9750d8fab8533eafdb62acb7" address="unix:///run/containerd/s/669e33e5383caaa1d6ce0dc5ac63e7aa79b9adac8c3b8ae28ab64a0d2f42d545" protocol=ttrpc version=3 Jul 11 05:26:21.035898 systemd[1]: Started cri-containerd-bd79ce0464254676bd79bf9f896d4838e1b4306d9750d8fab8533eafdb62acb7.scope - libcontainer container bd79ce0464254676bd79bf9f896d4838e1b4306d9750d8fab8533eafdb62acb7. Jul 11 05:26:21.084448 containerd[1591]: time="2025-07-11T05:26:21.084387695Z" level=info msg="StartContainer for \"bd79ce0464254676bd79bf9f896d4838e1b4306d9750d8fab8533eafdb62acb7\" returns successfully" Jul 11 05:26:21.167105 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 11 05:26:21.167242 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 11 05:26:21.389606 kubelet[2738]: I0711 05:26:21.389551 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/205cd791-926f-4205-af66-55debe66fc3e-whisker-backend-key-pair\") pod \"205cd791-926f-4205-af66-55debe66fc3e\" (UID: \"205cd791-926f-4205-af66-55debe66fc3e\") " Jul 11 05:26:21.389606 kubelet[2738]: I0711 05:26:21.389615 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/205cd791-926f-4205-af66-55debe66fc3e-whisker-ca-bundle\") pod \"205cd791-926f-4205-af66-55debe66fc3e\" (UID: \"205cd791-926f-4205-af66-55debe66fc3e\") " Jul 11 05:26:21.390083 kubelet[2738]: I0711 05:26:21.389631 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fsfw\" (UniqueName: \"kubernetes.io/projected/205cd791-926f-4205-af66-55debe66fc3e-kube-api-access-8fsfw\") pod \"205cd791-926f-4205-af66-55debe66fc3e\" (UID: \"205cd791-926f-4205-af66-55debe66fc3e\") " Jul 11 05:26:21.390594 kubelet[2738]: I0711 05:26:21.390535 2738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/205cd791-926f-4205-af66-55debe66fc3e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "205cd791-926f-4205-af66-55debe66fc3e" (UID: "205cd791-926f-4205-af66-55debe66fc3e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 05:26:21.395300 systemd[1]: var-lib-kubelet-pods-205cd791\x2d926f\x2d4205\x2daf66\x2d55debe66fc3e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8fsfw.mount: Deactivated successfully. Jul 11 05:26:21.395418 systemd[1]: var-lib-kubelet-pods-205cd791\x2d926f\x2d4205\x2daf66\x2d55debe66fc3e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 11 05:26:21.395929 kubelet[2738]: I0711 05:26:21.395897 2738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205cd791-926f-4205-af66-55debe66fc3e-kube-api-access-8fsfw" (OuterVolumeSpecName: "kube-api-access-8fsfw") pod "205cd791-926f-4205-af66-55debe66fc3e" (UID: "205cd791-926f-4205-af66-55debe66fc3e"). InnerVolumeSpecName "kube-api-access-8fsfw".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 05:26:21.396534 kubelet[2738]: I0711 05:26:21.396512 2738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205cd791-926f-4205-af66-55debe66fc3e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "205cd791-926f-4205-af66-55debe66fc3e" (UID: "205cd791-926f-4205-af66-55debe66fc3e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 05:26:21.459305 systemd[1]: Removed slice kubepods-besteffort-pod205cd791_926f_4205_af66_55debe66fc3e.slice - libcontainer container kubepods-besteffort-pod205cd791_926f_4205_af66_55debe66fc3e.slice. Jul 11 05:26:21.474338 kubelet[2738]: I0711 05:26:21.474254 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wsjmd" podStartSLOduration=1.868739323 podStartE2EDuration="20.474217942s" podCreationTimestamp="2025-07-11 05:26:01 +0000 UTC" firstStartedPulling="2025-07-11 05:26:02.342002921 +0000 UTC m=+17.102191969" lastFinishedPulling="2025-07-11 05:26:20.94748154 +0000 UTC m=+35.707670588" observedRunningTime="2025-07-11 05:26:21.473798493 +0000 UTC m=+36.233987551" watchObservedRunningTime="2025-07-11 05:26:21.474217942 +0000 UTC m=+36.234406990" Jul 11 05:26:21.490103 kubelet[2738]: I0711 05:26:21.490035 2738 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/205cd791-926f-4205-af66-55debe66fc3e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 11 05:26:21.490103 kubelet[2738]: I0711 05:26:21.490075 2738 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8fsfw\" (UniqueName: \"kubernetes.io/projected/205cd791-926f-4205-af66-55debe66fc3e-kube-api-access-8fsfw\") on node \"localhost\" DevicePath \"\"" Jul 11 05:26:21.490103 kubelet[2738]: I0711 05:26:21.490088 2738 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/205cd791-926f-4205-af66-55debe66fc3e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 11 05:26:21.523848 systemd[1]: Created slice kubepods-besteffort-pod1edb241c_bd7d_42c8_a49c_75982c64d718.slice - libcontainer container kubepods-besteffort-pod1edb241c_bd7d_42c8_a49c_75982c64d718.slice. 
Jul 11 05:26:21.591359 kubelet[2738]: I0711 05:26:21.591288 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1edb241c-bd7d-42c8-a49c-75982c64d718-whisker-ca-bundle\") pod \"whisker-746b78d9bd-tckjs\" (UID: \"1edb241c-bd7d-42c8-a49c-75982c64d718\") " pod="calico-system/whisker-746b78d9bd-tckjs" Jul 11 05:26:21.591359 kubelet[2738]: I0711 05:26:21.591334 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtt8t\" (UniqueName: \"kubernetes.io/projected/1edb241c-bd7d-42c8-a49c-75982c64d718-kube-api-access-rtt8t\") pod \"whisker-746b78d9bd-tckjs\" (UID: \"1edb241c-bd7d-42c8-a49c-75982c64d718\") " pod="calico-system/whisker-746b78d9bd-tckjs" Jul 11 05:26:21.591359 kubelet[2738]: I0711 05:26:21.591359 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1edb241c-bd7d-42c8-a49c-75982c64d718-whisker-backend-key-pair\") pod \"whisker-746b78d9bd-tckjs\" (UID: \"1edb241c-bd7d-42c8-a49c-75982c64d718\") " pod="calico-system/whisker-746b78d9bd-tckjs" Jul 11 05:26:21.829396 containerd[1591]: time="2025-07-11T05:26:21.829347048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-746b78d9bd-tckjs,Uid:1edb241c-bd7d-42c8-a49c-75982c64d718,Namespace:calico-system,Attempt:0,}" Jul 11 05:26:22.265840 systemd-networkd[1492]: calie40a59f1b47: Link UP Jul 11 05:26:22.266162 systemd-networkd[1492]: calie40a59f1b47: Gained carrier Jul 11 05:26:22.281414 containerd[1591]: 2025-07-11 05:26:22.119 [INFO][3881] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 05:26:22.281414 containerd[1591]: 2025-07-11 05:26:22.137 [INFO][3881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--746b78d9bd--tckjs-eth0 whisker-746b78d9bd- calico-system 1edb241c-bd7d-42c8-a49c-75982c64d718 864 0 2025-07-11 05:26:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:746b78d9bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-746b78d9bd-tckjs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie40a59f1b47 [] [] }} ContainerID="47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" Namespace="calico-system" Pod="whisker-746b78d9bd-tckjs" WorkloadEndpoint="localhost-k8s-whisker--746b78d9bd--tckjs-" Jul 11 05:26:22.281414 containerd[1591]: 2025-07-11 05:26:22.137 [INFO][3881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" Namespace="calico-system" Pod="whisker-746b78d9bd-tckjs" WorkloadEndpoint="localhost-k8s-whisker--746b78d9bd--tckjs-eth0" Jul 11 05:26:22.281414 containerd[1591]: 2025-07-11 05:26:22.214 [INFO][3895] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" HandleID="k8s-pod-network.47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" Workload="localhost-k8s-whisker--746b78d9bd--tckjs-eth0" Jul 11 05:26:22.281944 containerd[1591]: 2025-07-11 05:26:22.214 [INFO][3895] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" HandleID="k8s-pod-network.47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" Workload="localhost-k8s-whisker--746b78d9bd--tckjs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038c230), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-746b78d9bd-tckjs", "timestamp":"2025-07-11 05:26:22.214085605 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 05:26:22.281944 containerd[1591]: 2025-07-11 05:26:22.214 [INFO][3895] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 05:26:22.281944 containerd[1591]: 2025-07-11 05:26:22.215 [INFO][3895] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 05:26:22.281944 containerd[1591]: 2025-07-11 05:26:22.215 [INFO][3895] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:26:22.281944 containerd[1591]: 2025-07-11 05:26:22.222 [INFO][3895] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" host="localhost" Jul 11 05:26:22.281944 containerd[1591]: 2025-07-11 05:26:22.227 [INFO][3895] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:26:22.281944 containerd[1591]: 2025-07-11 05:26:22.232 [INFO][3895] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:26:22.281944 containerd[1591]: 2025-07-11 05:26:22.233 [INFO][3895] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:22.281944 containerd[1591]: 2025-07-11 05:26:22.235 [INFO][3895] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:22.281944 containerd[1591]: 2025-07-11 05:26:22.235 [INFO][3895] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" host="localhost" Jul 11 05:26:22.282163 containerd[1591]: 2025-07-11 05:26:22.237 [INFO][3895] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418 Jul 11 05:26:22.282163 containerd[1591]: 2025-07-11 05:26:22.247 [INFO][3895] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" host="localhost" Jul 11 05:26:22.282163 containerd[1591]: 2025-07-11 05:26:22.254 [INFO][3895] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" host="localhost" Jul 11 05:26:22.282163 containerd[1591]: 2025-07-11 05:26:22.254 [INFO][3895] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" host="localhost" Jul 11 05:26:22.282163 containerd[1591]: 2025-07-11 05:26:22.254 [INFO][3895] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 05:26:22.282163 containerd[1591]: 2025-07-11 05:26:22.254 [INFO][3895] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" HandleID="k8s-pod-network.47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" Workload="localhost-k8s-whisker--746b78d9bd--tckjs-eth0" Jul 11 05:26:22.282283 containerd[1591]: 2025-07-11 05:26:22.258 [INFO][3881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" Namespace="calico-system" Pod="whisker-746b78d9bd-tckjs" WorkloadEndpoint="localhost-k8s-whisker--746b78d9bd--tckjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--746b78d9bd--tckjs-eth0", GenerateName:"whisker-746b78d9bd-", Namespace:"calico-system", SelfLink:"", UID:"1edb241c-bd7d-42c8-a49c-75982c64d718", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 26, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"746b78d9bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-746b78d9bd-tckjs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie40a59f1b47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:22.282283 containerd[1591]: 2025-07-11 05:26:22.258 [INFO][3881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" Namespace="calico-system" Pod="whisker-746b78d9bd-tckjs" WorkloadEndpoint="localhost-k8s-whisker--746b78d9bd--tckjs-eth0" Jul 11 05:26:22.282358 containerd[1591]: 2025-07-11 05:26:22.258 [INFO][3881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie40a59f1b47 ContainerID="47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" Namespace="calico-system" Pod="whisker-746b78d9bd-tckjs" WorkloadEndpoint="localhost-k8s-whisker--746b78d9bd--tckjs-eth0" Jul 11 05:26:22.282358 containerd[1591]: 2025-07-11 05:26:22.266 [INFO][3881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" Namespace="calico-system" Pod="whisker-746b78d9bd-tckjs" WorkloadEndpoint="localhost-k8s-whisker--746b78d9bd--tckjs-eth0" Jul 11 05:26:22.282399 containerd[1591]: 2025-07-11 05:26:22.268 [INFO][3881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" Namespace="calico-system" Pod="whisker-746b78d9bd-tckjs" WorkloadEndpoint="localhost-k8s-whisker--746b78d9bd--tckjs-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--746b78d9bd--tckjs-eth0", GenerateName:"whisker-746b78d9bd-", Namespace:"calico-system", SelfLink:"", UID:"1edb241c-bd7d-42c8-a49c-75982c64d718", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 26, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"746b78d9bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418", Pod:"whisker-746b78d9bd-tckjs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie40a59f1b47", MAC:"5a:3a:f2:d6:93:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:22.282447 containerd[1591]: 2025-07-11 05:26:22.277 [INFO][3881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" Namespace="calico-system" Pod="whisker-746b78d9bd-tckjs" WorkloadEndpoint="localhost-k8s-whisker--746b78d9bd--tckjs-eth0" Jul 11 05:26:22.814267 containerd[1591]: time="2025-07-11T05:26:22.814187993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd79ce0464254676bd79bf9f896d4838e1b4306d9750d8fab8533eafdb62acb7\" id:\"65bc215489074cbc95c3c0cbc608deca1ad66e25abbd3ea6d52a9aa4ab85f68e\" pid:4050 exit_status:1 exited_at:{seconds:1752211582 nanos:813835420}" Jul 11 05:26:22.927108 systemd-networkd[1492]: vxlan.calico: Link UP Jul 11 05:26:22.927117 systemd-networkd[1492]: vxlan.calico: Gained carrier Jul 11 05:26:23.345234 kubelet[2738]: I0711 05:26:23.345189 2738 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="205cd791-926f-4205-af66-55debe66fc3e" path="/var/lib/kubelet/pods/205cd791-926f-4205-af66-55debe66fc3e/volumes" Jul 11 05:26:23.533076 containerd[1591]: time="2025-07-11T05:26:23.533030246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd79ce0464254676bd79bf9f896d4838e1b4306d9750d8fab8533eafdb62acb7\" id:\"8e9c33f154e0636d0ebf66c9eda472f414eb51f8adf95cd4dea4ea590f1a88bc\" pid:4147 exit_status:1 exited_at:{seconds:1752211583 nanos:532686671}" Jul 11 05:26:23.677493 containerd[1591]: time="2025-07-11T05:26:23.677347757Z" level=info msg="connecting to shim 47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418" address="unix:///run/containerd/s/5b406afa3538f88755ff60564e85899362f88b19228505b9ffb395b4c89fe0ca" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:26:23.706883 systemd[1]: Started cri-containerd-47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418.scope - libcontainer container 47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418. 
Jul 11 05:26:23.718452 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:26:23.755262 containerd[1591]: time="2025-07-11T05:26:23.755211611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-746b78d9bd-tckjs,Uid:1edb241c-bd7d-42c8-a49c-75982c64d718,Namespace:calico-system,Attempt:0,} returns sandbox id \"47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418\"" Jul 11 05:26:23.756558 containerd[1591]: time="2025-07-11T05:26:23.756533145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 05:26:24.140951 systemd-networkd[1492]: calie40a59f1b47: Gained IPv6LL Jul 11 05:26:24.268898 systemd-networkd[1492]: vxlan.calico: Gained IPv6LL Jul 11 05:26:26.344079 containerd[1591]: time="2025-07-11T05:26:26.343947117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f96bbcdc6-p2npb,Uid:91c547ba-0068-4a1b-b094-6d5384369a11,Namespace:calico-apiserver,Attempt:0,}" Jul 11 05:26:26.376208 containerd[1591]: time="2025-07-11T05:26:26.376129918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:26.377120 containerd[1591]: time="2025-07-11T05:26:26.377072328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 11 05:26:26.378558 containerd[1591]: time="2025-07-11T05:26:26.378488490Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:26.381653 containerd[1591]: time="2025-07-11T05:26:26.381600155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:26.382358 containerd[1591]: time="2025-07-11T05:26:26.382299078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.625733663s" Jul 11 05:26:26.382358 containerd[1591]: time="2025-07-11T05:26:26.382352869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 11 05:26:26.385914 containerd[1591]: time="2025-07-11T05:26:26.385874726Z" level=info msg="CreateContainer within sandbox \"47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 05:26:26.397150 containerd[1591]: time="2025-07-11T05:26:26.396930827Z" level=info msg="Container 36d1c91dd614e1db82d3c817eed039585003b34b92b51aa38bff7176fd8b22e4: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:26:26.408119 containerd[1591]: time="2025-07-11T05:26:26.407990034Z" level=info msg="CreateContainer within sandbox \"47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"36d1c91dd614e1db82d3c817eed039585003b34b92b51aa38bff7176fd8b22e4\"" Jul 11 05:26:26.408976 containerd[1591]: 
time="2025-07-11T05:26:26.408899171Z" level=info msg="StartContainer for \"36d1c91dd614e1db82d3c817eed039585003b34b92b51aa38bff7176fd8b22e4\"" Jul 11 05:26:26.410546 containerd[1591]: time="2025-07-11T05:26:26.410518253Z" level=info msg="connecting to shim 36d1c91dd614e1db82d3c817eed039585003b34b92b51aa38bff7176fd8b22e4" address="unix:///run/containerd/s/5b406afa3538f88755ff60564e85899362f88b19228505b9ffb395b4c89fe0ca" protocol=ttrpc version=3 Jul 11 05:26:26.435021 systemd[1]: Started cri-containerd-36d1c91dd614e1db82d3c817eed039585003b34b92b51aa38bff7176fd8b22e4.scope - libcontainer container 36d1c91dd614e1db82d3c817eed039585003b34b92b51aa38bff7176fd8b22e4. Jul 11 05:26:26.498644 containerd[1591]: time="2025-07-11T05:26:26.498601344Z" level=info msg="StartContainer for \"36d1c91dd614e1db82d3c817eed039585003b34b92b51aa38bff7176fd8b22e4\" returns successfully" Jul 11 05:26:26.501442 containerd[1591]: time="2025-07-11T05:26:26.501387548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 05:26:26.504956 systemd-networkd[1492]: cali4db813627ed: Link UP Jul 11 05:26:26.505281 systemd-networkd[1492]: cali4db813627ed: Gained carrier Jul 11 05:26:26.525323 systemd[1]: Started sshd@7-10.0.0.97:22-10.0.0.1:45678.service - OpenSSH per-connection server daemon (10.0.0.1:45678). Jul 11 05:26:26.541328 containerd[1591]: 2025-07-11 05:26:26.390 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f96bbcdc6--p2npb-eth0 calico-apiserver-f96bbcdc6- calico-apiserver 91c547ba-0068-4a1b-b094-6d5384369a11 792 0 2025-07-11 05:25:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f96bbcdc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f96bbcdc6-p2npb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4db813627ed [] [] }} ContainerID="2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-p2npb" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--p2npb-" Jul 11 05:26:26.541328 containerd[1591]: 2025-07-11 05:26:26.390 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-p2npb" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--p2npb-eth0" Jul 11 05:26:26.541328 containerd[1591]: 2025-07-11 05:26:26.428 [INFO][4230] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" HandleID="k8s-pod-network.2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" Workload="localhost-k8s-calico--apiserver--f96bbcdc6--p2npb-eth0" Jul 11 05:26:26.541511 containerd[1591]: 2025-07-11 05:26:26.428 [INFO][4230] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" HandleID="k8s-pod-network.2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" Workload="localhost-k8s-calico--apiserver--f96bbcdc6--p2npb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6f0), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"localhost", "pod":"calico-apiserver-f96bbcdc6-p2npb", "timestamp":"2025-07-11 05:26:26.428154857 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 05:26:26.541511 containerd[1591]: 2025-07-11 05:26:26.428 [INFO][4230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 05:26:26.541511 containerd[1591]: 2025-07-11 05:26:26.428 [INFO][4230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 05:26:26.541511 containerd[1591]: 2025-07-11 05:26:26.428 [INFO][4230] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:26:26.541511 containerd[1591]: 2025-07-11 05:26:26.436 [INFO][4230] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" host="localhost" Jul 11 05:26:26.541511 containerd[1591]: 2025-07-11 05:26:26.444 [INFO][4230] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:26:26.541511 containerd[1591]: 2025-07-11 05:26:26.454 [INFO][4230] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:26:26.541511 containerd[1591]: 2025-07-11 05:26:26.457 [INFO][4230] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:26.541511 containerd[1591]: 2025-07-11 05:26:26.459 [INFO][4230] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:26.541511 containerd[1591]: 2025-07-11 05:26:26.460 [INFO][4230] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" host="localhost" Jul 11 05:26:26.541774 containerd[1591]: 2025-07-11 05:26:26.461 [INFO][4230] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e Jul 11 05:26:26.541774 containerd[1591]: 2025-07-11 05:26:26.471 [INFO][4230] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" host="localhost" Jul 11 05:26:26.541774 containerd[1591]: 2025-07-11 05:26:26.494 [INFO][4230] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" host="localhost" Jul 11 05:26:26.541774 containerd[1591]: 2025-07-11 05:26:26.494 [INFO][4230] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" host="localhost" Jul 11 05:26:26.541774 containerd[1591]: 2025-07-11 05:26:26.494 [INFO][4230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 05:26:26.541774 containerd[1591]: 2025-07-11 05:26:26.494 [INFO][4230] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" HandleID="k8s-pod-network.2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" Workload="localhost-k8s-calico--apiserver--f96bbcdc6--p2npb-eth0" Jul 11 05:26:26.543319 containerd[1591]: 2025-07-11 05:26:26.499 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-p2npb" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--p2npb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f96bbcdc6--p2npb-eth0", GenerateName:"calico-apiserver-f96bbcdc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"91c547ba-0068-4a1b-b094-6d5384369a11", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 25, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f96bbcdc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f96bbcdc6-p2npb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4db813627ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:26.543376 containerd[1591]: 2025-07-11 05:26:26.499 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-p2npb" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--p2npb-eth0" Jul 11 05:26:26.543376 containerd[1591]: 2025-07-11 05:26:26.500 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4db813627ed ContainerID="2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-p2npb" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--p2npb-eth0" Jul 11 05:26:26.543376 containerd[1591]: 2025-07-11 05:26:26.505 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-p2npb" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--p2npb-eth0" Jul 11 05:26:26.543446 containerd[1591]: 2025-07-11 05:26:26.505 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-p2npb" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--p2npb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f96bbcdc6--p2npb-eth0", GenerateName:"calico-apiserver-f96bbcdc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"91c547ba-0068-4a1b-b094-6d5384369a11", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 25, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f96bbcdc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e", Pod:"calico-apiserver-f96bbcdc6-p2npb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4db813627ed", MAC:"b6:10:3f:f6:be:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:26.543494 containerd[1591]: 2025-07-11 05:26:26.537 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-p2npb" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--p2npb-eth0" Jul 11 05:26:26.579695 containerd[1591]: time="2025-07-11T05:26:26.579641423Z" level=info msg="connecting to shim 2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e" address="unix:///run/containerd/s/8efc840c91c2d7e26d5affba3e75ae73dcea9484613dfda4a8abc69bd98c8927" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:26:26.603275 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 45678 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc Jul 11 05:26:26.605701 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:26:26.606135 systemd[1]: Started cri-containerd-2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e.scope - libcontainer container 2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e. Jul 11 05:26:26.611657 systemd-logind[1578]: New session 8 of user core. Jul 11 05:26:26.613916 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 11 05:26:26.619498 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:26:26.648592 containerd[1591]: time="2025-07-11T05:26:26.648532698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f96bbcdc6-p2npb,Uid:91c547ba-0068-4a1b-b094-6d5384369a11,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e\"" Jul 11 05:26:26.766918 sshd[4324]: Connection closed by 10.0.0.1 port 45678 Jul 11 05:26:26.767332 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Jul 11 05:26:26.771926 systemd[1]: sshd@7-10.0.0.97:22-10.0.0.1:45678.service: Deactivated successfully. Jul 11 05:26:26.773999 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 05:26:26.774989 systemd-logind[1578]: Session 8 logged out. Waiting for processes to exit. Jul 11 05:26:26.776202 systemd-logind[1578]: Removed session 8. Jul 11 05:26:28.190348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount156772607.mount: Deactivated successfully. Jul 11 05:26:28.210001 containerd[1591]: time="2025-07-11T05:26:28.209943299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:28.210835 containerd[1591]: time="2025-07-11T05:26:28.210799838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 11 05:26:28.212257 containerd[1591]: time="2025-07-11T05:26:28.212192173Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:28.214504 containerd[1591]: time="2025-07-11T05:26:28.214466865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:28.215040 containerd[1591]: time="2025-07-11T05:26:28.215003494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.713584696s" Jul 11 05:26:28.215040 containerd[1591]: time="2025-07-11T05:26:28.215037788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 11 05:26:28.216483 containerd[1591]: time="2025-07-11T05:26:28.216334353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 05:26:28.217539 containerd[1591]: time="2025-07-11T05:26:28.217496476Z" level=info msg="CreateContainer within sandbox \"47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 05:26:28.239188 containerd[1591]: time="2025-07-11T05:26:28.239143794Z" level=info msg="Container ecbaf369cc29cb24b2ff952351bce8600bde7940e1b2d5a9b96feee116bf2bd5: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:26:28.246633 containerd[1591]: time="2025-07-11T05:26:28.246576895Z" level=info 
msg="CreateContainer within sandbox \"47b8c2489cedcbdf15efd516816b42f007ec5becf78a723f44fdf34cd63be418\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ecbaf369cc29cb24b2ff952351bce8600bde7940e1b2d5a9b96feee116bf2bd5\"" Jul 11 05:26:28.247290 containerd[1591]: time="2025-07-11T05:26:28.247247575Z" level=info msg="StartContainer for \"ecbaf369cc29cb24b2ff952351bce8600bde7940e1b2d5a9b96feee116bf2bd5\"" Jul 11 05:26:28.248442 containerd[1591]: time="2025-07-11T05:26:28.248411201Z" level=info msg="connecting to shim ecbaf369cc29cb24b2ff952351bce8600bde7940e1b2d5a9b96feee116bf2bd5" address="unix:///run/containerd/s/5b406afa3538f88755ff60564e85899362f88b19228505b9ffb395b4c89fe0ca" protocol=ttrpc version=3 Jul 11 05:26:28.271873 systemd[1]: Started cri-containerd-ecbaf369cc29cb24b2ff952351bce8600bde7940e1b2d5a9b96feee116bf2bd5.scope - libcontainer container ecbaf369cc29cb24b2ff952351bce8600bde7940e1b2d5a9b96feee116bf2bd5. Jul 11 05:26:28.320834 containerd[1591]: time="2025-07-11T05:26:28.320706319Z" level=info msg="StartContainer for \"ecbaf369cc29cb24b2ff952351bce8600bde7940e1b2d5a9b96feee116bf2bd5\" returns successfully" Jul 11 05:26:28.342941 containerd[1591]: time="2025-07-11T05:26:28.342879495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tlzld,Uid:e6ef4965-81f9-473f-8191-161880640e2e,Namespace:kube-system,Attempt:0,}" Jul 11 05:26:28.365039 systemd-networkd[1492]: cali4db813627ed: Gained IPv6LL Jul 11 05:26:28.445271 systemd-networkd[1492]: calif0203c25d80: Link UP Jul 11 05:26:28.445999 systemd-networkd[1492]: calif0203c25d80: Gained carrier Jul 11 05:26:28.460893 containerd[1591]: 2025-07-11 05:26:28.381 [INFO][4395] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--tlzld-eth0 coredns-668d6bf9bc- kube-system e6ef4965-81f9-473f-8191-161880640e2e 786 0 2025-07-11 05:25:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-tlzld eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif0203c25d80 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlzld" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlzld-" Jul 11 05:26:28.460893 containerd[1591]: 2025-07-11 05:26:28.381 [INFO][4395] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlzld" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlzld-eth0" Jul 11 05:26:28.460893 containerd[1591]: 2025-07-11 05:26:28.407 [INFO][4410] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" HandleID="k8s-pod-network.01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" Workload="localhost-k8s-coredns--668d6bf9bc--tlzld-eth0" Jul 11 05:26:28.461114 containerd[1591]: 2025-07-11 05:26:28.407 [INFO][4410] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" HandleID="k8s-pod-network.01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" 
Workload="localhost-k8s-coredns--668d6bf9bc--tlzld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-tlzld", "timestamp":"2025-07-11 05:26:28.407223098 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 05:26:28.461114 containerd[1591]: 2025-07-11 05:26:28.407 [INFO][4410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 05:26:28.461114 containerd[1591]: 2025-07-11 05:26:28.407 [INFO][4410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 05:26:28.461114 containerd[1591]: 2025-07-11 05:26:28.407 [INFO][4410] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:26:28.461114 containerd[1591]: 2025-07-11 05:26:28.414 [INFO][4410] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" host="localhost" Jul 11 05:26:28.461114 containerd[1591]: 2025-07-11 05:26:28.419 [INFO][4410] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:26:28.461114 containerd[1591]: 2025-07-11 05:26:28.423 [INFO][4410] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:26:28.461114 containerd[1591]: 2025-07-11 05:26:28.426 [INFO][4410] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:28.461114 containerd[1591]: 2025-07-11 05:26:28.428 [INFO][4410] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:28.461114 containerd[1591]: 2025-07-11 05:26:28.428 [INFO][4410] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" host="localhost" Jul 11 05:26:28.461325 containerd[1591]: 2025-07-11 05:26:28.429 [INFO][4410] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5 Jul 11 05:26:28.461325 containerd[1591]: 2025-07-11 05:26:28.433 [INFO][4410] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" host="localhost" Jul 11 05:26:28.461325 containerd[1591]: 2025-07-11 05:26:28.439 [INFO][4410] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" host="localhost" Jul 11 05:26:28.461325 containerd[1591]: 2025-07-11 05:26:28.439 [INFO][4410] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" host="localhost" Jul 11 05:26:28.461325 containerd[1591]: 2025-07-11 05:26:28.439 [INFO][4410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 05:26:28.461325 containerd[1591]: 2025-07-11 05:26:28.439 [INFO][4410] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" HandleID="k8s-pod-network.01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" Workload="localhost-k8s-coredns--668d6bf9bc--tlzld-eth0" Jul 11 05:26:28.461442 containerd[1591]: 2025-07-11 05:26:28.442 [INFO][4395] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlzld" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlzld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tlzld-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e6ef4965-81f9-473f-8191-161880640e2e", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-tlzld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0203c25d80", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:28.461530 containerd[1591]: 2025-07-11 05:26:28.443 [INFO][4395] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlzld" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlzld-eth0" Jul 11 05:26:28.461530 containerd[1591]: 2025-07-11 05:26:28.443 [INFO][4395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0203c25d80 ContainerID="01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlzld" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlzld-eth0" Jul 11 05:26:28.461530 containerd[1591]: 2025-07-11 05:26:28.445 [INFO][4395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlzld" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlzld-eth0" Jul 11 05:26:28.461598 
containerd[1591]: 2025-07-11 05:26:28.446 [INFO][4395] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlzld" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlzld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tlzld-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e6ef4965-81f9-473f-8191-161880640e2e", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5", Pod:"coredns-668d6bf9bc-tlzld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0203c25d80", MAC:"8a:a9:fb:8a:12:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:28.461598 containerd[1591]: 2025-07-11 05:26:28.457 [INFO][4395] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlzld" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlzld-eth0" Jul 11 05:26:28.484998 containerd[1591]: time="2025-07-11T05:26:28.484924160Z" level=info msg="connecting to shim 01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5" address="unix:///run/containerd/s/7b24df26631a3fb69c61e5f08c8f803c942d8ec1daa2161b53125c332f56ce37" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:26:28.494588 kubelet[2738]: I0711 05:26:28.494499 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-746b78d9bd-tckjs" podStartSLOduration=3.034959037 podStartE2EDuration="7.494479317s" podCreationTimestamp="2025-07-11 05:26:21 +0000 UTC" firstStartedPulling="2025-07-11 05:26:23.756304445 +0000 UTC m=+38.516493493" lastFinishedPulling="2025-07-11 05:26:28.215824725 +0000 UTC m=+42.976013773" observedRunningTime="2025-07-11 05:26:28.494120061 +0000 UTC m=+43.254309109" watchObservedRunningTime="2025-07-11 05:26:28.494479317 +0000 UTC m=+43.254668355" Jul 11 05:26:28.520957 systemd[1]: Started 
cri-containerd-01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5.scope - libcontainer container 01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5. Jul 11 05:26:28.533677 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:26:28.566108 containerd[1591]: time="2025-07-11T05:26:28.566069710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tlzld,Uid:e6ef4965-81f9-473f-8191-161880640e2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5\"" Jul 11 05:26:28.569111 containerd[1591]: time="2025-07-11T05:26:28.569061510Z" level=info msg="CreateContainer within sandbox \"01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 05:26:28.585449 containerd[1591]: time="2025-07-11T05:26:28.585390830Z" level=info msg="Container 0551e7bb8786831b42868848c370de022dd9301733e1d485ef828ec97610b22e: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:26:28.586232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383912548.mount: Deactivated successfully. Jul 11 05:26:28.596750 containerd[1591]: time="2025-07-11T05:26:28.596688200Z" level=info msg="CreateContainer within sandbox \"01f135e73c98caff853a0e924824616a6a7ae5411abba70d46d951bd6d6cb5b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0551e7bb8786831b42868848c370de022dd9301733e1d485ef828ec97610b22e\"" Jul 11 05:26:28.597359 containerd[1591]: time="2025-07-11T05:26:28.597287606Z" level=info msg="StartContainer for \"0551e7bb8786831b42868848c370de022dd9301733e1d485ef828ec97610b22e\"" Jul 11 05:26:28.598561 containerd[1591]: time="2025-07-11T05:26:28.598530089Z" level=info msg="connecting to shim 0551e7bb8786831b42868848c370de022dd9301733e1d485ef828ec97610b22e" address="unix:///run/containerd/s/7b24df26631a3fb69c61e5f08c8f803c942d8ec1daa2161b53125c332f56ce37" protocol=ttrpc version=3 Jul 11 05:26:28.620924 systemd[1]: Started cri-containerd-0551e7bb8786831b42868848c370de022dd9301733e1d485ef828ec97610b22e.scope - libcontainer container 0551e7bb8786831b42868848c370de022dd9301733e1d485ef828ec97610b22e. 
Jul 11 05:26:28.663605 containerd[1591]: time="2025-07-11T05:26:28.663557587Z" level=info msg="StartContainer for \"0551e7bb8786831b42868848c370de022dd9301733e1d485ef828ec97610b22e\" returns successfully" Jul 11 05:26:29.343519 containerd[1591]: time="2025-07-11T05:26:29.343441153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-848489d7b5-pdfzk,Uid:d4f8ae59-9b75-4bfb-9233-81e4fbbf2d3c,Namespace:calico-system,Attempt:0,}" Jul 11 05:26:29.343519 containerd[1591]: time="2025-07-11T05:26:29.343483402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq2kh,Uid:6d43cbae-800f-4501-b753-018adbf45af6,Namespace:kube-system,Attempt:0,}" Jul 11 05:26:29.450701 systemd-networkd[1492]: calie176d4650a7: Link UP Jul 11 05:26:29.450923 systemd-networkd[1492]: calie176d4650a7: Gained carrier Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.382 [INFO][4513] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--dq2kh-eth0 coredns-668d6bf9bc- kube-system 6d43cbae-800f-4501-b753-018adbf45af6 795 0 2025-07-11 05:25:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-dq2kh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie176d4650a7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq2kh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq2kh-" Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.382 [INFO][4513] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq2kh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq2kh-eth0" Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.415 [INFO][4543] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" HandleID="k8s-pod-network.172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" Workload="localhost-k8s-coredns--668d6bf9bc--dq2kh-eth0" Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.416 [INFO][4543] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" HandleID="k8s-pod-network.172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" Workload="localhost-k8s-coredns--668d6bf9bc--dq2kh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5bc0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-dq2kh", "timestamp":"2025-07-11 05:26:29.415220147 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.416 [INFO][4543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.416 [INFO][4543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.416 [INFO][4543] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.422 [INFO][4543] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" host="localhost" Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.426 [INFO][4543] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.430 [INFO][4543] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.432 [INFO][4543] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.433 [INFO][4543] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.433 [INFO][4543] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" host="localhost" Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.435 [INFO][4543] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746 Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.438 [INFO][4543] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" host="localhost" Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.443 [INFO][4543] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" host="localhost" Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.443 [INFO][4543] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" host="localhost" Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.443 [INFO][4543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 05:26:29.464140 containerd[1591]: 2025-07-11 05:26:29.443 [INFO][4543] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" HandleID="k8s-pod-network.172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" Workload="localhost-k8s-coredns--668d6bf9bc--dq2kh-eth0" Jul 11 05:26:29.464766 containerd[1591]: 2025-07-11 05:26:29.448 [INFO][4513] cni-plugin/k8s.go 418: Populated endpoint ContainerID="172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq2kh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq2kh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dq2kh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6d43cbae-800f-4501-b753-018adbf45af6", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-dq2kh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie176d4650a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:29.464766 containerd[1591]: 2025-07-11 05:26:29.448 [INFO][4513] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq2kh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq2kh-eth0" Jul 11 05:26:29.464766 containerd[1591]: 2025-07-11 05:26:29.448 [INFO][4513] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie176d4650a7 ContainerID="172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq2kh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq2kh-eth0" Jul 11 05:26:29.464766 containerd[1591]: 2025-07-11 05:26:29.451 [INFO][4513] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq2kh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq2kh-eth0" Jul 11 05:26:29.464766 
containerd[1591]: 2025-07-11 05:26:29.451 [INFO][4513] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq2kh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq2kh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dq2kh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6d43cbae-800f-4501-b753-018adbf45af6", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746", Pod:"coredns-668d6bf9bc-dq2kh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie176d4650a7", MAC:"4a:28:ff:ce:55:fc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:29.464766 containerd[1591]: 2025-07-11 05:26:29.461 [INFO][4513] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq2kh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq2kh-eth0" Jul 11 05:26:29.493628 kubelet[2738]: I0711 05:26:29.493552 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tlzld" podStartSLOduration=39.493528683 podStartE2EDuration="39.493528683s" podCreationTimestamp="2025-07-11 05:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:26:29.493020069 +0000 UTC m=+44.253209117" watchObservedRunningTime="2025-07-11 05:26:29.493528683 +0000 UTC m=+44.253717721" Jul 11 05:26:29.493871 containerd[1591]: time="2025-07-11T05:26:29.493843775Z" level=info msg="connecting to shim 172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746" address="unix:///run/containerd/s/fdc40f1b2bf1da09295d4f49fa30eff7f1b41297cc81183df287d572f19a188f" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:26:29.526577 systemd[1]: Started cri-containerd-172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746.scope - libcontainer 
container 172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746. Jul 11 05:26:29.543095 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:26:29.559850 systemd-networkd[1492]: cali1b7eceb7dc7: Link UP Jul 11 05:26:29.560334 systemd-networkd[1492]: cali1b7eceb7dc7: Gained carrier Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.391 [INFO][4526] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--848489d7b5--pdfzk-eth0 calico-kube-controllers-848489d7b5- calico-system d4f8ae59-9b75-4bfb-9233-81e4fbbf2d3c 794 0 2025-07-11 05:26:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:848489d7b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-848489d7b5-pdfzk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1b7eceb7dc7 [] [] }} ContainerID="0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" Namespace="calico-system" Pod="calico-kube-controllers-848489d7b5-pdfzk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--848489d7b5--pdfzk-" Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.392 [INFO][4526] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" Namespace="calico-system" Pod="calico-kube-controllers-848489d7b5-pdfzk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--848489d7b5--pdfzk-eth0" Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.421 [INFO][4551] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" HandleID="k8s-pod-network.0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" Workload="localhost-k8s-calico--kube--controllers--848489d7b5--pdfzk-eth0" Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.421 [INFO][4551] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" HandleID="k8s-pod-network.0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" Workload="localhost-k8s-calico--kube--controllers--848489d7b5--pdfzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7050), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-848489d7b5-pdfzk", "timestamp":"2025-07-11 05:26:29.421218993 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.421 [INFO][4551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.443 [INFO][4551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.443 [INFO][4551] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.523 [INFO][4551] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" host="localhost" Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.530 [INFO][4551] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.535 [INFO][4551] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.537 [INFO][4551] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.539 [INFO][4551] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.539 [INFO][4551] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" host="localhost" Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.540 [INFO][4551] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.545 [INFO][4551] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" host="localhost" Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.552 [INFO][4551] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" host="localhost" Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.552 [INFO][4551] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" host="localhost" Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.552 [INFO][4551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 05:26:29.582333 containerd[1591]: 2025-07-11 05:26:29.552 [INFO][4551] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" HandleID="k8s-pod-network.0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" Workload="localhost-k8s-calico--kube--controllers--848489d7b5--pdfzk-eth0" Jul 11 05:26:29.583550 containerd[1591]: 2025-07-11 05:26:29.557 [INFO][4526] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" Namespace="calico-system" Pod="calico-kube-controllers-848489d7b5-pdfzk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--848489d7b5--pdfzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--848489d7b5--pdfzk-eth0", GenerateName:"calico-kube-controllers-848489d7b5-", Namespace:"calico-system", SelfLink:"", UID:"d4f8ae59-9b75-4bfb-9233-81e4fbbf2d3c", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"848489d7b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-848489d7b5-pdfzk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1b7eceb7dc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:29.583550 containerd[1591]: 2025-07-11 05:26:29.557 [INFO][4526] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" Namespace="calico-system" Pod="calico-kube-controllers-848489d7b5-pdfzk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--848489d7b5--pdfzk-eth0" Jul 11 05:26:29.583550 containerd[1591]: 2025-07-11 05:26:29.557 [INFO][4526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b7eceb7dc7 ContainerID="0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" Namespace="calico-system" Pod="calico-kube-controllers-848489d7b5-pdfzk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--848489d7b5--pdfzk-eth0" Jul 11 05:26:29.583550 containerd[1591]: 2025-07-11 05:26:29.560 [INFO][4526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" Namespace="calico-system" Pod="calico-kube-controllers-848489d7b5-pdfzk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--848489d7b5--pdfzk-eth0" Jul 11 05:26:29.583550 containerd[1591]: 2025-07-11 05:26:29.561 [INFO][4526] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" Namespace="calico-system" Pod="calico-kube-controllers-848489d7b5-pdfzk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--848489d7b5--pdfzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--848489d7b5--pdfzk-eth0", GenerateName:"calico-kube-controllers-848489d7b5-", Namespace:"calico-system", SelfLink:"", UID:"d4f8ae59-9b75-4bfb-9233-81e4fbbf2d3c", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"848489d7b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d", Pod:"calico-kube-controllers-848489d7b5-pdfzk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1b7eceb7dc7", MAC:"a6:f9:f0:1b:be:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:29.583550 containerd[1591]: 2025-07-11 05:26:29.577 [INFO][4526] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" Namespace="calico-system" Pod="calico-kube-controllers-848489d7b5-pdfzk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--848489d7b5--pdfzk-eth0" Jul 11 05:26:29.584214 containerd[1591]: time="2025-07-11T05:26:29.584175932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq2kh,Uid:6d43cbae-800f-4501-b753-018adbf45af6,Namespace:kube-system,Attempt:0,} returns sandbox id \"172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746\"" Jul 11 05:26:29.596844 containerd[1591]: time="2025-07-11T05:26:29.596628829Z" level=info msg="CreateContainer within sandbox \"172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 05:26:29.614682 containerd[1591]: time="2025-07-11T05:26:29.614636208Z" level=info msg="connecting to shim 0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d" address="unix:///run/containerd/s/2b01e549f973ddabbb586891f2d7322524b4bb9b6ddd67808a2b67f64483a37b" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:26:29.616519 containerd[1591]: time="2025-07-11T05:26:29.616024485Z" level=info msg="Container a3e42509adb1ac26d5046b22fb7dbc9606deb8932e598753c0e2110d065688ac: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:26:29.634906 containerd[1591]: time="2025-07-11T05:26:29.634861012Z" level=info msg="CreateContainer within sandbox 
\"172dc6dc587aa11e351fc00ef58fe0ff87eb78935e0fd40cfac6f16d24e4f746\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a3e42509adb1ac26d5046b22fb7dbc9606deb8932e598753c0e2110d065688ac\"" Jul 11 05:26:29.635601 containerd[1591]: time="2025-07-11T05:26:29.635577958Z" level=info msg="StartContainer for \"a3e42509adb1ac26d5046b22fb7dbc9606deb8932e598753c0e2110d065688ac\"" Jul 11 05:26:29.636840 containerd[1591]: time="2025-07-11T05:26:29.636819129Z" level=info msg="connecting to shim a3e42509adb1ac26d5046b22fb7dbc9606deb8932e598753c0e2110d065688ac" address="unix:///run/containerd/s/fdc40f1b2bf1da09295d4f49fa30eff7f1b41297cc81183df287d572f19a188f" protocol=ttrpc version=3 Jul 11 05:26:29.644094 systemd[1]: Started cri-containerd-0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d.scope - libcontainer container 0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d. Jul 11 05:26:29.652430 systemd[1]: Started cri-containerd-a3e42509adb1ac26d5046b22fb7dbc9606deb8932e598753c0e2110d065688ac.scope - libcontainer container a3e42509adb1ac26d5046b22fb7dbc9606deb8932e598753c0e2110d065688ac. Jul 11 05:26:29.667373 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:26:29.711034 containerd[1591]: time="2025-07-11T05:26:29.710990365Z" level=info msg="StartContainer for \"a3e42509adb1ac26d5046b22fb7dbc9606deb8932e598753c0e2110d065688ac\" returns successfully" Jul 11 05:26:29.715118 containerd[1591]: time="2025-07-11T05:26:29.715060860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-848489d7b5-pdfzk,Uid:d4f8ae59-9b75-4bfb-9233-81e4fbbf2d3c,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d\"" Jul 11 05:26:30.220892 systemd-networkd[1492]: calif0203c25d80: Gained IPv6LL Jul 11 05:26:30.342756 containerd[1591]: time="2025-07-11T05:26:30.342696219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f96bbcdc6-68nnj,Uid:31b0dd7f-d0d7-4755-b27e-2f781cc6e274,Namespace:calico-apiserver,Attempt:0,}" Jul 11 05:26:30.343099 containerd[1591]: time="2025-07-11T05:26:30.343046667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2m4jb,Uid:a236dc34-4c45-4ad9-84b1-58388e163ac6,Namespace:calico-system,Attempt:0,}" Jul 11 05:26:30.457707 systemd-networkd[1492]: calia63296e92de: Link UP Jul 11 05:26:30.459017 systemd-networkd[1492]: calia63296e92de: Gained carrier Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.393 [INFO][4729] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2m4jb-eth0 csi-node-driver- calico-system a236dc34-4c45-4ad9-84b1-58388e163ac6 684 0 2025-07-11 05:26:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2m4jb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia63296e92de [] [] }} ContainerID="c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" Namespace="calico-system" Pod="csi-node-driver-2m4jb" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m4jb-" Jul 11 05:26:30.476664 containerd[1591]: 
2025-07-11 05:26:30.393 [INFO][4729] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" Namespace="calico-system" Pod="csi-node-driver-2m4jb" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m4jb-eth0" Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.417 [INFO][4753] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" HandleID="k8s-pod-network.c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" Workload="localhost-k8s-csi--node--driver--2m4jb-eth0" Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.418 [INFO][4753] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" HandleID="k8s-pod-network.c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" Workload="localhost-k8s-csi--node--driver--2m4jb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001395f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2m4jb", "timestamp":"2025-07-11 05:26:30.417876316 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.418 [INFO][4753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.418 [INFO][4753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.418 [INFO][4753] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.425 [INFO][4753] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" host="localhost" Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.429 [INFO][4753] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.435 [INFO][4753] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.436 [INFO][4753] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.438 [INFO][4753] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.438 [INFO][4753] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" host="localhost" Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.440 [INFO][4753] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131 Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.443 [INFO][4753] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" host="localhost" Jul 11 05:26:30.476664 
containerd[1591]: 2025-07-11 05:26:30.450 [INFO][4753] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" host="localhost" Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.450 [INFO][4753] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" host="localhost" Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.450 [INFO][4753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 05:26:30.476664 containerd[1591]: 2025-07-11 05:26:30.450 [INFO][4753] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" HandleID="k8s-pod-network.c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" Workload="localhost-k8s-csi--node--driver--2m4jb-eth0" Jul 11 05:26:30.478066 containerd[1591]: 2025-07-11 05:26:30.453 [INFO][4729] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" Namespace="calico-system" Pod="csi-node-driver-2m4jb" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m4jb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2m4jb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a236dc34-4c45-4ad9-84b1-58388e163ac6", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2m4jb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia63296e92de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:30.478066 containerd[1591]: 2025-07-11 05:26:30.454 [INFO][4729] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" Namespace="calico-system" Pod="csi-node-driver-2m4jb" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m4jb-eth0" Jul 11 05:26:30.478066 containerd[1591]: 2025-07-11 05:26:30.454 [INFO][4729] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia63296e92de ContainerID="c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" Namespace="calico-system" Pod="csi-node-driver-2m4jb" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m4jb-eth0" Jul 11 05:26:30.478066 
containerd[1591]: 2025-07-11 05:26:30.459 [INFO][4729] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" Namespace="calico-system" Pod="csi-node-driver-2m4jb" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m4jb-eth0" Jul 11 05:26:30.478066 containerd[1591]: 2025-07-11 05:26:30.459 [INFO][4729] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" Namespace="calico-system" Pod="csi-node-driver-2m4jb" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m4jb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2m4jb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a236dc34-4c45-4ad9-84b1-58388e163ac6", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131", Pod:"csi-node-driver-2m4jb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia63296e92de", MAC:"8e:4c:e8:10:d9:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:30.478066 containerd[1591]: 2025-07-11 05:26:30.470 [INFO][4729] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" Namespace="calico-system" Pod="csi-node-driver-2m4jb" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m4jb-eth0" Jul 11 05:26:30.593375 kubelet[2738]: I0711 05:26:30.593278 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dq2kh" podStartSLOduration=40.593257805 podStartE2EDuration="40.593257805s" podCreationTimestamp="2025-07-11 05:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:26:30.592684449 +0000 UTC m=+45.352873487" watchObservedRunningTime="2025-07-11 05:26:30.593257805 +0000 UTC m=+45.353446853" Jul 11 05:26:30.660199 systemd-networkd[1492]: cali5212373353c: Link UP Jul 11 05:26:30.660443 systemd-networkd[1492]: cali5212373353c: Gained carrier Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.389 [INFO][4715] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f96bbcdc6--68nnj-eth0 calico-apiserver-f96bbcdc6- 
calico-apiserver 31b0dd7f-d0d7-4755-b27e-2f781cc6e274 797 0 2025-07-11 05:25:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f96bbcdc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f96bbcdc6-68nnj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5212373353c [] [] }} ContainerID="08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-68nnj" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--68nnj-" Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.389 [INFO][4715] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-68nnj" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--68nnj-eth0" Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.427 [INFO][4746] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" HandleID="k8s-pod-network.08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" Workload="localhost-k8s-calico--apiserver--f96bbcdc6--68nnj-eth0" Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.427 [INFO][4746] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" HandleID="k8s-pod-network.08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" Workload="localhost-k8s-calico--apiserver--f96bbcdc6--68nnj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f96bbcdc6-68nnj", "timestamp":"2025-07-11 05:26:30.427105997 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.427 [INFO][4746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.450 [INFO][4746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.450 [INFO][4746] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.526 [INFO][4746] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" host="localhost" Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.598 [INFO][4746] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.609 [INFO][4746] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.619 [INFO][4746] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.626 [INFO][4746] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.627 [INFO][4746] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" host="localhost" Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.633 [INFO][4746] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64 Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.641 [INFO][4746] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" host="localhost" Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.651 [INFO][4746] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" host="localhost" Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.651 [INFO][4746] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" host="localhost" Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.651 [INFO][4746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
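The ipam/ipam.go entries above trace the same allocation path for every pod on this node: take the host-wide IPAM lock, confirm the host's affinity for the 192.168.88.128/26 block, claim one free address from it, write the block back, and release the lock; the resulting WorkloadEndpoint then records that address as a /32. The Go sketch below is purely illustrative (it is not Calico's IPAM code): it only checks, with the standard library, that the addresses claimed in this log belong to that affine block, and shows the /26-to-/32 relationship.

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block the host "localhost" holds an affinity for, per the ipam/ipam.go messages.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// Addresses reported as claimed for the pods in this log.
	assigned := []string{"192.168.88.133", "192.168.88.134", "192.168.88.135", "192.168.88.136"}

	for _, s := range assigned {
		addr := netip.MustParseAddr(s)
		// Each claim must come from the affine block...
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
		// ...but the WorkloadEndpoint records it as a host route, i.e. a /32.
		fmt.Printf("  endpoint IPNetworks entry: %s\n", netip.PrefixFrom(addr, 32))
	}
}
```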
Jul 11 05:26:30.780528 containerd[1591]: 2025-07-11 05:26:30.651 [INFO][4746] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" HandleID="k8s-pod-network.08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" Workload="localhost-k8s-calico--apiserver--f96bbcdc6--68nnj-eth0" Jul 11 05:26:30.781366 containerd[1591]: 2025-07-11 05:26:30.656 [INFO][4715] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-68nnj" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--68nnj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f96bbcdc6--68nnj-eth0", GenerateName:"calico-apiserver-f96bbcdc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"31b0dd7f-d0d7-4755-b27e-2f781cc6e274", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 25, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f96bbcdc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f96bbcdc6-68nnj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5212373353c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:30.781366 containerd[1591]: 2025-07-11 05:26:30.656 [INFO][4715] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-68nnj" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--68nnj-eth0" Jul 11 05:26:30.781366 containerd[1591]: 2025-07-11 05:26:30.656 [INFO][4715] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5212373353c ContainerID="08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-68nnj" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--68nnj-eth0" Jul 11 05:26:30.781366 containerd[1591]: 2025-07-11 05:26:30.658 [INFO][4715] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-68nnj" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--68nnj-eth0" Jul 11 05:26:30.781366 containerd[1591]: 2025-07-11 05:26:30.659 [INFO][4715] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-68nnj" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--68nnj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f96bbcdc6--68nnj-eth0", GenerateName:"calico-apiserver-f96bbcdc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"31b0dd7f-d0d7-4755-b27e-2f781cc6e274", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 25, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f96bbcdc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64", Pod:"calico-apiserver-f96bbcdc6-68nnj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5212373353c", MAC:"ee:45:44:f4:4e:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:30.781366 containerd[1591]: 2025-07-11 05:26:30.776 [INFO][4715] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" Namespace="calico-apiserver" Pod="calico-apiserver-f96bbcdc6-68nnj" WorkloadEndpoint="localhost-k8s-calico--apiserver--f96bbcdc6--68nnj-eth0" Jul 11 05:26:30.789142 containerd[1591]: time="2025-07-11T05:26:30.787589761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:30.790076 containerd[1591]: time="2025-07-11T05:26:30.790038891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 11 05:26:30.807756 containerd[1591]: time="2025-07-11T05:26:30.807432503Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:30.817193 containerd[1591]: time="2025-07-11T05:26:30.817134623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:30.818061 containerd[1591]: time="2025-07-11T05:26:30.817962758Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 
2.601599431s" Jul 11 05:26:30.818061 containerd[1591]: time="2025-07-11T05:26:30.818005257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 05:26:30.819869 containerd[1591]: time="2025-07-11T05:26:30.819838900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 11 05:26:30.822482 containerd[1591]: time="2025-07-11T05:26:30.822267261Z" level=info msg="CreateContainer within sandbox \"2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 05:26:30.823888 containerd[1591]: time="2025-07-11T05:26:30.823640549Z" level=info msg="connecting to shim c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131" address="unix:///run/containerd/s/fa34454e7ad41388227bc7dccf1649bca3390a328ab1e3bb944e80a1466ffa45" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:26:30.838399 containerd[1591]: time="2025-07-11T05:26:30.838298926Z" level=info msg="Container dc5d056554f2a964fa7115dcc15b99f7b351bd0c65dea8e57d8dc42422e9d4e9: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:26:30.846003 containerd[1591]: time="2025-07-11T05:26:30.845947819Z" level=info msg="connecting to shim 08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64" address="unix:///run/containerd/s/3dacf109375e18696319e4b0fa76a5bf451c54e9f70f92438313c1bea4ba0be7" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:26:30.860881 systemd-networkd[1492]: cali1b7eceb7dc7: Gained IPv6LL Jul 11 05:26:30.873098 containerd[1591]: time="2025-07-11T05:26:30.873049933Z" level=info msg="CreateContainer within sandbox \"2365a77262aff1efab9da105f1eeb0c5bf8cbc0da33482c9ea28e9677125fd3e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dc5d056554f2a964fa7115dcc15b99f7b351bd0c65dea8e57d8dc42422e9d4e9\"" Jul 11 05:26:30.873791 containerd[1591]: time="2025-07-11T05:26:30.873642816Z" level=info msg="StartContainer for \"dc5d056554f2a964fa7115dcc15b99f7b351bd0c65dea8e57d8dc42422e9d4e9\"" Jul 11 05:26:30.875879 containerd[1591]: time="2025-07-11T05:26:30.875845963Z" level=info msg="connecting to shim dc5d056554f2a964fa7115dcc15b99f7b351bd0c65dea8e57d8dc42422e9d4e9" address="unix:///run/containerd/s/8efc840c91c2d7e26d5affba3e75ae73dcea9484613dfda4a8abc69bd98c8927" protocol=ttrpc version=3 Jul 11 05:26:30.914958 systemd[1]: Started cri-containerd-c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131.scope - libcontainer container c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131. Jul 11 05:26:30.919950 systemd[1]: Started cri-containerd-08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64.scope - libcontainer container 08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64. Jul 11 05:26:30.921326 systemd[1]: Started cri-containerd-dc5d056554f2a964fa7115dcc15b99f7b351bd0c65dea8e57d8dc42422e9d4e9.scope - libcontainer container dc5d056554f2a964fa7115dcc15b99f7b351bd0c65dea8e57d8dc42422e9d4e9. 
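Each "connecting to shim" entry above gives a sandbox's shim endpoint as a unix:// URL under /run/containerd/s/ and notes the wire protocol (ttrpc, version 3). The sketch below only illustrates that addressing scheme using the standard library: it strips the scheme and opens the socket. The real containerd client then speaks ttrpc over the connection, which is omitted here, and the path (copied from the log) exists only on the node that produced it.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func dialShim(address string) (net.Conn, error) {
	// containerd logs shim endpoints as unix:///run/containerd/s/<id>;
	// the scheme has to be stripped before dialing the socket directly.
	path := strings.TrimPrefix(address, "unix://")
	return net.Dial("unix", path)
}

func main() {
	addr := "unix:///run/containerd/s/fa34454e7ad41388227bc7dccf1649bca3390a328ab1e3bb944e80a1466ffa45"
	conn, err := dialShim(addr)
	if err != nil {
		// Expected anywhere except the node that produced this log.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket:", conn.RemoteAddr())
	// A real client would now layer a ttrpc client on top of conn (protocol=ttrpc version=3 in the log).
}
```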
Jul 11 05:26:30.933092 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:26:30.936824 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:26:31.112845 containerd[1591]: time="2025-07-11T05:26:31.111935023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2m4jb,Uid:a236dc34-4c45-4ad9-84b1-58388e163ac6,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131\"" Jul 11 05:26:31.114580 containerd[1591]: time="2025-07-11T05:26:31.114392418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f96bbcdc6-68nnj,Uid:31b0dd7f-d0d7-4755-b27e-2f781cc6e274,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64\"" Jul 11 05:26:31.115567 containerd[1591]: time="2025-07-11T05:26:31.115526727Z" level=info msg="StartContainer for \"dc5d056554f2a964fa7115dcc15b99f7b351bd0c65dea8e57d8dc42422e9d4e9\" returns successfully" Jul 11 05:26:31.117933 containerd[1591]: time="2025-07-11T05:26:31.117867142Z" level=info msg="CreateContainer within sandbox \"08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 05:26:31.129861 containerd[1591]: time="2025-07-11T05:26:31.129807191Z" level=info msg="Container a3afc3aa6c61f82b9afa03d09d56d8bc0b469d61c16dffb6c15b4008582d4c65: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:26:31.137975 containerd[1591]: time="2025-07-11T05:26:31.137933370Z" level=info msg="CreateContainer within sandbox \"08681306cca1636a8e24bdbb0695f2aebf95154936810454023308d1a749ce64\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a3afc3aa6c61f82b9afa03d09d56d8bc0b469d61c16dffb6c15b4008582d4c65\"" Jul 11 05:26:31.138532 containerd[1591]: time="2025-07-11T05:26:31.138460509Z" level=info msg="StartContainer for \"a3afc3aa6c61f82b9afa03d09d56d8bc0b469d61c16dffb6c15b4008582d4c65\"" Jul 11 05:26:31.139831 containerd[1591]: time="2025-07-11T05:26:31.139782241Z" level=info msg="connecting to shim a3afc3aa6c61f82b9afa03d09d56d8bc0b469d61c16dffb6c15b4008582d4c65" address="unix:///run/containerd/s/3dacf109375e18696319e4b0fa76a5bf451c54e9f70f92438313c1bea4ba0be7" protocol=ttrpc version=3 Jul 11 05:26:31.163885 systemd[1]: Started cri-containerd-a3afc3aa6c61f82b9afa03d09d56d8bc0b469d61c16dffb6c15b4008582d4c65.scope - libcontainer container a3afc3aa6c61f82b9afa03d09d56d8bc0b469d61c16dffb6c15b4008582d4c65. 
Jul 11 05:26:31.244944 systemd-networkd[1492]: calie176d4650a7: Gained IPv6LL Jul 11 05:26:31.299843 containerd[1591]: time="2025-07-11T05:26:31.299789592Z" level=info msg="StartContainer for \"a3afc3aa6c61f82b9afa03d09d56d8bc0b469d61c16dffb6c15b4008582d4c65\" returns successfully" Jul 11 05:26:31.343285 containerd[1591]: time="2025-07-11T05:26:31.343238686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-pjrfk,Uid:1220c157-2296-4582-921e-541213f521b3,Namespace:calico-system,Attempt:0,}" Jul 11 05:26:31.532834 kubelet[2738]: I0711 05:26:31.532725 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f96bbcdc6-68nnj" podStartSLOduration=32.532685856 podStartE2EDuration="32.532685856s" podCreationTimestamp="2025-07-11 05:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:26:31.527697368 +0000 UTC m=+46.287886436" watchObservedRunningTime="2025-07-11 05:26:31.532685856 +0000 UTC m=+46.292874894" Jul 11 05:26:31.548532 kubelet[2738]: I0711 05:26:31.548407 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f96bbcdc6-p2npb" podStartSLOduration=28.379320492 podStartE2EDuration="32.548380565s" podCreationTimestamp="2025-07-11 05:25:59 +0000 UTC" firstStartedPulling="2025-07-11 05:26:26.650148374 +0000 UTC m=+41.410337422" lastFinishedPulling="2025-07-11 05:26:30.819208447 +0000 UTC m=+45.579397495" observedRunningTime="2025-07-11 05:26:31.5462745 +0000 UTC m=+46.306463558" watchObservedRunningTime="2025-07-11 05:26:31.548380565 +0000 UTC m=+46.308569613" Jul 11 05:26:31.564965 systemd-networkd[1492]: calia63296e92de: Gained IPv6LL Jul 11 05:26:31.662863 systemd-networkd[1492]: cali7a2b83bb0dd: Link UP Jul 11 05:26:31.664406 systemd-networkd[1492]: cali7a2b83bb0dd: Gained carrier Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.573 [INFO][4957] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--pjrfk-eth0 goldmane-768f4c5c69- calico-system 1220c157-2296-4582-921e-541213f521b3 798 0 2025-07-11 05:26:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-pjrfk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7a2b83bb0dd [] [] }} ContainerID="effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" Namespace="calico-system" Pod="goldmane-768f4c5c69-pjrfk" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--pjrfk-" Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.574 [INFO][4957] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" Namespace="calico-system" Pod="goldmane-768f4c5c69-pjrfk" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--pjrfk-eth0" Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.611 [INFO][4972] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" HandleID="k8s-pod-network.effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" 
Workload="localhost-k8s-goldmane--768f4c5c69--pjrfk-eth0" Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.611 [INFO][4972] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" HandleID="k8s-pod-network.effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" Workload="localhost-k8s-goldmane--768f4c5c69--pjrfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000131f20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-pjrfk", "timestamp":"2025-07-11 05:26:31.611329031 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.611 [INFO][4972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.611 [INFO][4972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.611 [INFO][4972] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.620 [INFO][4972] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" host="localhost" Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.625 [INFO][4972] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.629 [INFO][4972] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.632 [INFO][4972] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.636 [INFO][4972] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.636 [INFO][4972] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" host="localhost" Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.638 [INFO][4972] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7 Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.642 [INFO][4972] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" host="localhost" Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.650 [INFO][4972] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" host="localhost" Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.650 [INFO][4972] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" host="localhost" Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.650 [INFO][4972] ipam/ipam_plugin.go 
374: Released host-wide IPAM lock. Jul 11 05:26:31.683463 containerd[1591]: 2025-07-11 05:26:31.650 [INFO][4972] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" HandleID="k8s-pod-network.effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" Workload="localhost-k8s-goldmane--768f4c5c69--pjrfk-eth0" Jul 11 05:26:31.684317 containerd[1591]: 2025-07-11 05:26:31.656 [INFO][4957] cni-plugin/k8s.go 418: Populated endpoint ContainerID="effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" Namespace="calico-system" Pod="goldmane-768f4c5c69-pjrfk" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--pjrfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--pjrfk-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1220c157-2296-4582-921e-541213f521b3", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-pjrfk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a2b83bb0dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:31.684317 containerd[1591]: 2025-07-11 05:26:31.656 [INFO][4957] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" Namespace="calico-system" Pod="goldmane-768f4c5c69-pjrfk" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--pjrfk-eth0" Jul 11 05:26:31.684317 containerd[1591]: 2025-07-11 05:26:31.656 [INFO][4957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a2b83bb0dd ContainerID="effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" Namespace="calico-system" Pod="goldmane-768f4c5c69-pjrfk" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--pjrfk-eth0" Jul 11 05:26:31.684317 containerd[1591]: 2025-07-11 05:26:31.665 [INFO][4957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" Namespace="calico-system" Pod="goldmane-768f4c5c69-pjrfk" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--pjrfk-eth0" Jul 11 05:26:31.684317 containerd[1591]: 2025-07-11 05:26:31.666 [INFO][4957] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" Namespace="calico-system" Pod="goldmane-768f4c5c69-pjrfk" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--pjrfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--pjrfk-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1220c157-2296-4582-921e-541213f521b3", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 26, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7", Pod:"goldmane-768f4c5c69-pjrfk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a2b83bb0dd", MAC:"a2:a8:a2:81:57:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:26:31.684317 containerd[1591]: 2025-07-11 05:26:31.679 [INFO][4957] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" Namespace="calico-system" Pod="goldmane-768f4c5c69-pjrfk" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--pjrfk-eth0" Jul 11 05:26:31.740890 containerd[1591]: time="2025-07-11T05:26:31.740791970Z" level=info msg="connecting to shim effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7" address="unix:///run/containerd/s/d9e778f457e8d357b18afc54f764336860219488561f760ce56d8d2c3ebbd798" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:26:31.795997 systemd[1]: Started cri-containerd-effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7.scope - libcontainer container effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7. Jul 11 05:26:31.798464 systemd[1]: Started sshd@8-10.0.0.97:22-10.0.0.1:51320.service - OpenSSH per-connection server daemon (10.0.0.1:51320). Jul 11 05:26:31.821978 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:26:31.866439 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 51320 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc Jul 11 05:26:31.871646 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:26:31.875278 containerd[1591]: time="2025-07-11T05:26:31.875068788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-pjrfk,Uid:1220c157-2296-4582-921e-541213f521b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7\"" Jul 11 05:26:31.881956 systemd-logind[1578]: New session 9 of user core. Jul 11 05:26:31.890923 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 11 05:26:31.949278 systemd-networkd[1492]: cali5212373353c: Gained IPv6LL Jul 11 05:26:32.038870 sshd[5042]: Connection closed by 10.0.0.1 port 51320 Jul 11 05:26:32.040956 sshd-session[5024]: pam_unix(sshd:session): session closed for user core Jul 11 05:26:32.046388 systemd[1]: sshd@8-10.0.0.97:22-10.0.0.1:51320.service: Deactivated successfully. Jul 11 05:26:32.048643 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 05:26:32.049906 systemd-logind[1578]: Session 9 logged out. Waiting for processes to exit. Jul 11 05:26:32.051339 systemd-logind[1578]: Removed session 9. Jul 11 05:26:32.502810 kubelet[2738]: I0711 05:26:32.502723 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 05:26:33.164966 systemd-networkd[1492]: cali7a2b83bb0dd: Gained IPv6LL Jul 11 05:26:34.667603 containerd[1591]: time="2025-07-11T05:26:34.667537962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:34.668304 containerd[1591]: time="2025-07-11T05:26:34.668281328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 11 05:26:34.669603 containerd[1591]: time="2025-07-11T05:26:34.669561832Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:34.671464 containerd[1591]: time="2025-07-11T05:26:34.671434758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:34.672012 containerd[1591]: time="2025-07-11T05:26:34.671976494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.852112838s" Jul 11 05:26:34.672070 containerd[1591]: time="2025-07-11T05:26:34.672012672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 11 05:26:34.684614 containerd[1591]: time="2025-07-11T05:26:34.683905949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 05:26:34.700946 containerd[1591]: time="2025-07-11T05:26:34.700893668Z" level=info msg="CreateContainer within sandbox \"0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 05:26:34.715636 containerd[1591]: time="2025-07-11T05:26:34.715587461Z" level=info msg="Container 9c1e07fdb1b0c46d5769e8db70e92e043630afd4e29bea8a94b1c858f9707cd3: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:26:34.727025 containerd[1591]: time="2025-07-11T05:26:34.726711403Z" level=info msg="CreateContainer within sandbox \"0ae68c3979b892893f921152294432ee37a56d734d4dceb925633105ecc42f1d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9c1e07fdb1b0c46d5769e8db70e92e043630afd4e29bea8a94b1c858f9707cd3\"" Jul 11 05:26:34.727611 containerd[1591]: 
time="2025-07-11T05:26:34.727581236Z" level=info msg="StartContainer for \"9c1e07fdb1b0c46d5769e8db70e92e043630afd4e29bea8a94b1c858f9707cd3\"" Jul 11 05:26:34.728883 containerd[1591]: time="2025-07-11T05:26:34.728862471Z" level=info msg="connecting to shim 9c1e07fdb1b0c46d5769e8db70e92e043630afd4e29bea8a94b1c858f9707cd3" address="unix:///run/containerd/s/2b01e549f973ddabbb586891f2d7322524b4bb9b6ddd67808a2b67f64483a37b" protocol=ttrpc version=3 Jul 11 05:26:34.758031 systemd[1]: Started cri-containerd-9c1e07fdb1b0c46d5769e8db70e92e043630afd4e29bea8a94b1c858f9707cd3.scope - libcontainer container 9c1e07fdb1b0c46d5769e8db70e92e043630afd4e29bea8a94b1c858f9707cd3. Jul 11 05:26:34.926822 containerd[1591]: time="2025-07-11T05:26:34.926676628Z" level=info msg="StartContainer for \"9c1e07fdb1b0c46d5769e8db70e92e043630afd4e29bea8a94b1c858f9707cd3\" returns successfully" Jul 11 05:26:35.531875 kubelet[2738]: I0711 05:26:35.531778 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-848489d7b5-pdfzk" podStartSLOduration=28.565149697 podStartE2EDuration="33.531745622s" podCreationTimestamp="2025-07-11 05:26:02 +0000 UTC" firstStartedPulling="2025-07-11 05:26:29.717023776 +0000 UTC m=+44.477212824" lastFinishedPulling="2025-07-11 05:26:34.683619701 +0000 UTC m=+49.443808749" observedRunningTime="2025-07-11 05:26:35.528612671 +0000 UTC m=+50.288801719" watchObservedRunningTime="2025-07-11 05:26:35.531745622 +0000 UTC m=+50.291934680" Jul 11 05:26:35.578403 containerd[1591]: time="2025-07-11T05:26:35.578328040Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c1e07fdb1b0c46d5769e8db70e92e043630afd4e29bea8a94b1c858f9707cd3\" id:\"d90997cd7e021abfe50f219d0a628c272c7b9ae194698d9e476bdcf5780889df\" pid:5132 exited_at:{seconds:1752211595 nanos:565690559}" Jul 11 05:26:36.030813 containerd[1591]: time="2025-07-11T05:26:36.030750462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:36.031560 containerd[1591]: time="2025-07-11T05:26:36.031512994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 11 05:26:36.032884 containerd[1591]: time="2025-07-11T05:26:36.032849251Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:36.034991 containerd[1591]: time="2025-07-11T05:26:36.034938162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:36.035309 containerd[1591]: time="2025-07-11T05:26:36.035279252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.350665415s" Jul 11 05:26:36.035382 containerd[1591]: time="2025-07-11T05:26:36.035310992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 11 05:26:36.036393 containerd[1591]: time="2025-07-11T05:26:36.036366844Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 05:26:36.038555 containerd[1591]: time="2025-07-11T05:26:36.038482174Z" level=info msg="CreateContainer within sandbox \"c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 05:26:36.050962 containerd[1591]: time="2025-07-11T05:26:36.050902997Z" level=info msg="Container b12c95c9cd69be9636feeb49f391382ffa4573fa74e6d9cfe4f637df9983451f: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:26:36.063234 containerd[1591]: time="2025-07-11T05:26:36.063171665Z" level=info msg="CreateContainer within sandbox \"c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b12c95c9cd69be9636feeb49f391382ffa4573fa74e6d9cfe4f637df9983451f\"" Jul 11 05:26:36.063942 containerd[1591]: time="2025-07-11T05:26:36.063814952Z" level=info msg="StartContainer for \"b12c95c9cd69be9636feeb49f391382ffa4573fa74e6d9cfe4f637df9983451f\"" Jul 11 05:26:36.066870 containerd[1591]: time="2025-07-11T05:26:36.066809833Z" level=info msg="connecting to shim b12c95c9cd69be9636feeb49f391382ffa4573fa74e6d9cfe4f637df9983451f" address="unix:///run/containerd/s/fa34454e7ad41388227bc7dccf1649bca3390a328ab1e3bb944e80a1466ffa45" protocol=ttrpc version=3 Jul 11 05:26:36.095946 systemd[1]: Started cri-containerd-b12c95c9cd69be9636feeb49f391382ffa4573fa74e6d9cfe4f637df9983451f.scope - libcontainer container b12c95c9cd69be9636feeb49f391382ffa4573fa74e6d9cfe4f637df9983451f. Jul 11 05:26:36.336983 containerd[1591]: time="2025-07-11T05:26:36.336849800Z" level=info msg="StartContainer for \"b12c95c9cd69be9636feeb49f391382ffa4573fa74e6d9cfe4f637df9983451f\" returns successfully" Jul 11 05:26:37.064463 systemd[1]: Started sshd@9-10.0.0.97:22-10.0.0.1:51330.service - OpenSSH per-connection server daemon (10.0.0.1:51330). Jul 11 05:26:37.138180 sshd[5179]: Accepted publickey for core from 10.0.0.1 port 51330 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc Jul 11 05:26:37.140211 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:26:37.145139 systemd-logind[1578]: New session 10 of user core. Jul 11 05:26:37.160033 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 05:26:37.457593 sshd[5183]: Connection closed by 10.0.0.1 port 51330 Jul 11 05:26:37.458726 sshd-session[5179]: pam_unix(sshd:session): session closed for user core Jul 11 05:26:37.463596 systemd[1]: sshd@9-10.0.0.97:22-10.0.0.1:51330.service: Deactivated successfully. Jul 11 05:26:37.465799 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 05:26:37.466694 systemd-logind[1578]: Session 10 logged out. Waiting for processes to exit. Jul 11 05:26:37.468655 systemd-logind[1578]: Removed session 10. Jul 11 05:26:38.064494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4042616993.mount: Deactivated successfully. 
Jul 11 05:26:39.311559 containerd[1591]: time="2025-07-11T05:26:39.311506955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:39.312329 containerd[1591]: time="2025-07-11T05:26:39.312293301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 11 05:26:39.313660 containerd[1591]: time="2025-07-11T05:26:39.313612226Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:39.316077 containerd[1591]: time="2025-07-11T05:26:39.315995048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:26:39.316700 containerd[1591]: time="2025-07-11T05:26:39.316669954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.280272272s" Jul 11 05:26:39.316791 containerd[1591]: time="2025-07-11T05:26:39.316702164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 11 05:26:39.321158 containerd[1591]: time="2025-07-11T05:26:39.321128952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 05:26:39.325300 containerd[1591]: time="2025-07-11T05:26:39.325277347Z" level=info msg="CreateContainer within sandbox \"effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 05:26:39.332453 containerd[1591]: time="2025-07-11T05:26:39.332231297Z" level=info msg="Container 1e28269351b14657dbb6c987e7396d262d6b464180a2b8210fb5dd2b733348c2: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:26:39.342092 containerd[1591]: time="2025-07-11T05:26:39.342047418Z" level=info msg="CreateContainer within sandbox \"effad9764c07b6447b94396f5db6346402280a0109dff9c5eb6ee8d8c24a92f7\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1e28269351b14657dbb6c987e7396d262d6b464180a2b8210fb5dd2b733348c2\"" Jul 11 05:26:39.345484 containerd[1591]: time="2025-07-11T05:26:39.345426068Z" level=info msg="StartContainer for \"1e28269351b14657dbb6c987e7396d262d6b464180a2b8210fb5dd2b733348c2\"" Jul 11 05:26:39.346956 containerd[1591]: time="2025-07-11T05:26:39.346923839Z" level=info msg="connecting to shim 1e28269351b14657dbb6c987e7396d262d6b464180a2b8210fb5dd2b733348c2" address="unix:///run/containerd/s/d9e778f457e8d357b18afc54f764336860219488561f760ce56d8d2c3ebbd798" protocol=ttrpc version=3 Jul 11 05:26:39.377875 systemd[1]: Started cri-containerd-1e28269351b14657dbb6c987e7396d262d6b464180a2b8210fb5dd2b733348c2.scope - libcontainer container 1e28269351b14657dbb6c987e7396d262d6b464180a2b8210fb5dd2b733348c2. 
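The goldmane pull above records both identities of the fetched image (a repo tag and a repo digest) together with its size and the wall-clock pull time. As a rough, hedged calculation from those logged values only (containerd does not log a rate, and some layers may already have been cached), 66,352,154 bytes in 3.280272272s works out to about 20 MB/s:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

func main() {
	// Values copied from the "Pulled image ghcr.io/flatcar/calico/goldmane:v3.30.2" entry above.
	repoDigest := "ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305"
	sizeBytes := 66352154.0
	elapsed, _ := time.ParseDuration("3.280272272s")

	// A digest reference is "<repository>@<algorithm>:<hex>"; a tag reference uses ":<tag>" instead.
	repo, digest, _ := strings.Cut(repoDigest, "@")
	fmt.Println("repository:", repo)
	fmt.Println("digest:    ", digest)

	// Rough average pull rate implied by the logged size and duration.
	fmt.Printf("~%.1f MB/s\n", sizeBytes/elapsed.Seconds()/1e6)
}
```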
Jul 11 05:26:39.428228 containerd[1591]: time="2025-07-11T05:26:39.428166728Z" level=info msg="StartContainer for \"1e28269351b14657dbb6c987e7396d262d6b464180a2b8210fb5dd2b733348c2\" returns successfully"
Jul 11 05:26:39.609359 containerd[1591]: time="2025-07-11T05:26:39.609222527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e28269351b14657dbb6c987e7396d262d6b464180a2b8210fb5dd2b733348c2\" id:\"c4795352b429fcc3c1a6507c55238a572344178f4ac89d93698fba58401cab33\" pid:5252 exit_status:1 exited_at:{seconds:1752211599 nanos:608851120}"
Jul 11 05:26:40.616577 containerd[1591]: time="2025-07-11T05:26:40.616433090Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e28269351b14657dbb6c987e7396d262d6b464180a2b8210fb5dd2b733348c2\" id:\"3fa950ecfd124f4face80d14bababe092c14a562f2c8a9871f2a363f6f37a6dd\" pid:5279 exit_status:1 exited_at:{seconds:1752211600 nanos:616083865}"
Jul 11 05:26:40.913098 containerd[1591]: time="2025-07-11T05:26:40.912955863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 05:26:40.914056 containerd[1591]: time="2025-07-11T05:26:40.914023687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 11 05:26:40.915434 containerd[1591]: time="2025-07-11T05:26:40.915391804Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 05:26:40.917646 containerd[1591]: time="2025-07-11T05:26:40.917591592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 05:26:40.918094 containerd[1591]: time="2025-07-11T05:26:40.918061174Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.596908437s"
Jul 11 05:26:40.918094 containerd[1591]: time="2025-07-11T05:26:40.918091070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 11 05:26:40.920260 containerd[1591]: time="2025-07-11T05:26:40.920235594Z" level=info msg="CreateContainer within sandbox \"c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 11 05:26:40.928770 containerd[1591]: time="2025-07-11T05:26:40.928701450Z" level=info msg="Container 2ef81d2910642aaf2a3a45dc9fe88f99af0275305a5404ac03027e7bc04d2414: CDI devices from CRI Config.CDIDevices: []"
Jul 11 05:26:40.939338 containerd[1591]: time="2025-07-11T05:26:40.939297704Z" level=info msg="CreateContainer within sandbox \"c2e5228884ce41018b547b544d24251b578796b43dde7d504fa0a17223b56131\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2ef81d2910642aaf2a3a45dc9fe88f99af0275305a5404ac03027e7bc04d2414\""
Jul 11 05:26:40.940093 containerd[1591]: time="2025-07-11T05:26:40.939939679Z" level=info msg="StartContainer for \"2ef81d2910642aaf2a3a45dc9fe88f99af0275305a5404ac03027e7bc04d2414\""
Jul 11 05:26:40.941690 containerd[1591]: time="2025-07-11T05:26:40.941649557Z" level=info msg="connecting to shim 2ef81d2910642aaf2a3a45dc9fe88f99af0275305a5404ac03027e7bc04d2414" address="unix:///run/containerd/s/fa34454e7ad41388227bc7dccf1649bca3390a328ab1e3bb944e80a1466ffa45" protocol=ttrpc version=3
Jul 11 05:26:40.965906 systemd[1]: Started cri-containerd-2ef81d2910642aaf2a3a45dc9fe88f99af0275305a5404ac03027e7bc04d2414.scope - libcontainer container 2ef81d2910642aaf2a3a45dc9fe88f99af0275305a5404ac03027e7bc04d2414.
Jul 11 05:26:41.008850 containerd[1591]: time="2025-07-11T05:26:41.008771139Z" level=info msg="StartContainer for \"2ef81d2910642aaf2a3a45dc9fe88f99af0275305a5404ac03027e7bc04d2414\" returns successfully"
Jul 11 05:26:41.408911 kubelet[2738]: I0711 05:26:41.408863 2738 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 11 05:26:41.408911 kubelet[2738]: I0711 05:26:41.408908 2738 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 11 05:26:41.799781 kubelet[2738]: I0711 05:26:41.799689 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2m4jb" podStartSLOduration=29.9954912 podStartE2EDuration="39.799667758s" podCreationTimestamp="2025-07-11 05:26:02 +0000 UTC" firstStartedPulling="2025-07-11 05:26:31.114698052 +0000 UTC m=+45.874887100" lastFinishedPulling="2025-07-11 05:26:40.91887462 +0000 UTC m=+55.679063658" observedRunningTime="2025-07-11 05:26:41.798550372 +0000 UTC m=+56.558739420" watchObservedRunningTime="2025-07-11 05:26:41.799667758 +0000 UTC m=+56.559856806"
Jul 11 05:26:41.800013 kubelet[2738]: I0711 05:26:41.799888 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-pjrfk" podStartSLOduration=33.356096036 podStartE2EDuration="40.799881579s" podCreationTimestamp="2025-07-11 05:26:01 +0000 UTC" firstStartedPulling="2025-07-11 05:26:31.877219396 +0000 UTC m=+46.637408444" lastFinishedPulling="2025-07-11 05:26:39.321004939 +0000 UTC m=+54.081193987" observedRunningTime="2025-07-11 05:26:39.547408497 +0000 UTC m=+54.307597545" watchObservedRunningTime="2025-07-11 05:26:41.799881579 +0000 UTC m=+56.560070627"
Jul 11 05:26:42.474778 systemd[1]: Started sshd@10-10.0.0.97:22-10.0.0.1:49862.service - OpenSSH per-connection server daemon (10.0.0.1:49862).
Jul 11 05:26:42.550019 sshd[5329]: Accepted publickey for core from 10.0.0.1 port 49862 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc
Jul 11 05:26:42.551866 sshd-session[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:26:42.556488 systemd-logind[1578]: New session 11 of user core.
Jul 11 05:26:42.562843 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 11 05:26:42.713340 sshd[5332]: Connection closed by 10.0.0.1 port 49862
Jul 11 05:26:42.713760 sshd-session[5329]: pam_unix(sshd:session): session closed for user core
Jul 11 05:26:42.728109 systemd[1]: sshd@10-10.0.0.97:22-10.0.0.1:49862.service: Deactivated successfully.
Jul 11 05:26:42.730493 systemd[1]: session-11.scope: Deactivated successfully.
Jul 11 05:26:42.731363 systemd-logind[1578]: Session 11 logged out. Waiting for processes to exit.
Jul 11 05:26:42.734695 systemd[1]: Started sshd@11-10.0.0.97:22-10.0.0.1:49874.service - OpenSSH per-connection server daemon (10.0.0.1:49874).
Jul 11 05:26:42.737626 systemd-logind[1578]: Removed session 11.
Jul 11 05:26:42.788090 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 49874 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc
Jul 11 05:26:42.789593 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:26:42.794215 systemd-logind[1578]: New session 12 of user core.
Jul 11 05:26:42.805858 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 11 05:26:42.956203 sshd[5349]: Connection closed by 10.0.0.1 port 49874
Jul 11 05:26:42.956717 sshd-session[5346]: pam_unix(sshd:session): session closed for user core
Jul 11 05:26:42.969907 systemd[1]: sshd@11-10.0.0.97:22-10.0.0.1:49874.service: Deactivated successfully.
Jul 11 05:26:42.977130 systemd[1]: session-12.scope: Deactivated successfully.
Jul 11 05:26:42.979324 systemd-logind[1578]: Session 12 logged out. Waiting for processes to exit.
Jul 11 05:26:42.985212 systemd[1]: Started sshd@12-10.0.0.97:22-10.0.0.1:49884.service - OpenSSH per-connection server daemon (10.0.0.1:49884).
Jul 11 05:26:42.988019 systemd-logind[1578]: Removed session 12.
Jul 11 05:26:43.044030 sshd[5360]: Accepted publickey for core from 10.0.0.1 port 49884 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc
Jul 11 05:26:43.045320 sshd-session[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:26:43.049510 systemd-logind[1578]: New session 13 of user core.
Jul 11 05:26:43.072865 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 11 05:26:43.181835 sshd[5363]: Connection closed by 10.0.0.1 port 49884
Jul 11 05:26:43.182203 sshd-session[5360]: pam_unix(sshd:session): session closed for user core
Jul 11 05:26:43.186160 systemd[1]: sshd@12-10.0.0.97:22-10.0.0.1:49884.service: Deactivated successfully.
Jul 11 05:26:43.188807 systemd[1]: session-13.scope: Deactivated successfully.
Jul 11 05:26:43.190714 systemd-logind[1578]: Session 13 logged out. Waiting for processes to exit.
Jul 11 05:26:43.192513 systemd-logind[1578]: Removed session 13.
Jul 11 05:26:47.967193 containerd[1591]: time="2025-07-11T05:26:47.967117162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c1e07fdb1b0c46d5769e8db70e92e043630afd4e29bea8a94b1c858f9707cd3\" id:\"6963cd0b779bd7ee07ae125d5484d038d4ed92b7b2b9d9ffe5a7df8ab9a59308\" pid:5400 exited_at:{seconds:1752211607 nanos:966717343}"
Jul 11 05:26:48.206140 systemd[1]: Started sshd@13-10.0.0.97:22-10.0.0.1:51582.service - OpenSSH per-connection server daemon (10.0.0.1:51582).
Jul 11 05:26:48.291293 sshd[5411]: Accepted publickey for core from 10.0.0.1 port 51582 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc
Jul 11 05:26:48.293231 sshd-session[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:26:48.298219 systemd-logind[1578]: New session 14 of user core.
Jul 11 05:26:48.301887 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 11 05:26:48.453115 sshd[5414]: Connection closed by 10.0.0.1 port 51582
Jul 11 05:26:48.453524 sshd-session[5411]: pam_unix(sshd:session): session closed for user core
Jul 11 05:26:48.458532 systemd[1]: sshd@13-10.0.0.97:22-10.0.0.1:51582.service: Deactivated successfully.
Jul 11 05:26:48.460894 systemd[1]: session-14.scope: Deactivated successfully.
Jul 11 05:26:48.462776 systemd-logind[1578]: Session 14 logged out. Waiting for processes to exit.
Jul 11 05:26:48.464242 systemd-logind[1578]: Removed session 14.
Jul 11 05:26:53.472476 systemd[1]: Started sshd@14-10.0.0.97:22-10.0.0.1:51598.service - OpenSSH per-connection server daemon (10.0.0.1:51598).
Jul 11 05:26:53.540565 sshd[5442]: Accepted publickey for core from 10.0.0.1 port 51598 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc
Jul 11 05:26:53.541082 containerd[1591]: time="2025-07-11T05:26:53.541041212Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd79ce0464254676bd79bf9f896d4838e1b4306d9750d8fab8533eafdb62acb7\" id:\"f8493ebee8c3aa1c3ff4632f17d3466fcb9e9699cd99549085c887bced6e63dc\" pid:5446 exited_at:{seconds:1752211613 nanos:540711785}"
Jul 11 05:26:53.542592 sshd-session[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:26:53.548285 systemd-logind[1578]: New session 15 of user core.
Jul 11 05:26:53.553912 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 11 05:26:53.838249 sshd[5461]: Connection closed by 10.0.0.1 port 51598
Jul 11 05:26:53.838597 sshd-session[5442]: pam_unix(sshd:session): session closed for user core
Jul 11 05:26:53.842637 systemd[1]: sshd@14-10.0.0.97:22-10.0.0.1:51598.service: Deactivated successfully.
Jul 11 05:26:53.844651 systemd[1]: session-15.scope: Deactivated successfully.
Jul 11 05:26:53.845597 systemd-logind[1578]: Session 15 logged out. Waiting for processes to exit.
Jul 11 05:26:53.846689 systemd-logind[1578]: Removed session 15.
Jul 11 05:26:57.829652 containerd[1591]: time="2025-07-11T05:26:57.829598274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e28269351b14657dbb6c987e7396d262d6b464180a2b8210fb5dd2b733348c2\" id:\"03421ae9ffbe8cfac49e8d450c6a2476ed6ae23ff5199759c4a09bd035840c31\" pid:5486 exited_at:{seconds:1752211617 nanos:829284149}"
Jul 11 05:26:58.855098 systemd[1]: Started sshd@15-10.0.0.97:22-10.0.0.1:59966.service - OpenSSH per-connection server daemon (10.0.0.1:59966).
Jul 11 05:26:58.934171 sshd[5499]: Accepted publickey for core from 10.0.0.1 port 59966 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc
Jul 11 05:26:58.935996 sshd-session[5499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:26:58.943385 systemd-logind[1578]: New session 16 of user core.
Jul 11 05:26:58.956962 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 11 05:26:59.415115 sshd[5502]: Connection closed by 10.0.0.1 port 59966
Jul 11 05:26:59.415534 sshd-session[5499]: pam_unix(sshd:session): session closed for user core
Jul 11 05:26:59.420474 systemd[1]: sshd@15-10.0.0.97:22-10.0.0.1:59966.service: Deactivated successfully.
Jul 11 05:26:59.422718 systemd[1]: session-16.scope: Deactivated successfully.
Jul 11 05:26:59.424503 systemd-logind[1578]: Session 16 logged out. Waiting for processes to exit.
Jul 11 05:26:59.425943 systemd-logind[1578]: Removed session 16.
Jul 11 05:27:04.436562 systemd[1]: Started sshd@16-10.0.0.97:22-10.0.0.1:59980.service - OpenSSH per-connection server daemon (10.0.0.1:59980).
Jul 11 05:27:04.497898 sshd[5521]: Accepted publickey for core from 10.0.0.1 port 59980 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc
Jul 11 05:27:04.499344 sshd-session[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:27:04.503467 systemd-logind[1578]: New session 17 of user core.
Jul 11 05:27:04.512048 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 11 05:27:04.644766 sshd[5524]: Connection closed by 10.0.0.1 port 59980
Jul 11 05:27:04.642995 sshd-session[5521]: pam_unix(sshd:session): session closed for user core
Jul 11 05:27:04.655541 systemd[1]: sshd@16-10.0.0.97:22-10.0.0.1:59980.service: Deactivated successfully.
Jul 11 05:27:04.658430 systemd[1]: session-17.scope: Deactivated successfully.
Jul 11 05:27:04.663796 systemd-logind[1578]: Session 17 logged out. Waiting for processes to exit.
Jul 11 05:27:04.671040 systemd[1]: Started sshd@17-10.0.0.97:22-10.0.0.1:59990.service - OpenSSH per-connection server daemon (10.0.0.1:59990).
Jul 11 05:27:04.673121 systemd-logind[1578]: Removed session 17.
Jul 11 05:27:04.726248 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 59990 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc
Jul 11 05:27:04.727705 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:27:04.732135 systemd-logind[1578]: New session 18 of user core.
Jul 11 05:27:04.744871 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 11 05:27:05.069065 sshd[5540]: Connection closed by 10.0.0.1 port 59990
Jul 11 05:27:05.069763 sshd-session[5537]: pam_unix(sshd:session): session closed for user core
Jul 11 05:27:05.081888 systemd[1]: sshd@17-10.0.0.97:22-10.0.0.1:59990.service: Deactivated successfully.
Jul 11 05:27:05.084105 systemd[1]: session-18.scope: Deactivated successfully.
Jul 11 05:27:05.085156 systemd-logind[1578]: Session 18 logged out. Waiting for processes to exit.
Jul 11 05:27:05.088745 systemd[1]: Started sshd@18-10.0.0.97:22-10.0.0.1:60000.service - OpenSSH per-connection server daemon (10.0.0.1:60000).
Jul 11 05:27:05.090128 systemd-logind[1578]: Removed session 18.
Jul 11 05:27:05.149701 sshd[5551]: Accepted publickey for core from 10.0.0.1 port 60000 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc
Jul 11 05:27:05.151515 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:27:05.156711 systemd-logind[1578]: New session 19 of user core.
Jul 11 05:27:05.163888 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 11 05:27:05.589238 containerd[1591]: time="2025-07-11T05:27:05.588797722Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c1e07fdb1b0c46d5769e8db70e92e043630afd4e29bea8a94b1c858f9707cd3\" id:\"64a55900335542e4ae2cb70847d567d60f97bf73ef45e04873fee53911e5203b\" pid:5574 exited_at:{seconds:1752211625 nanos:588420719}"
Jul 11 05:27:05.979070 sshd[5554]: Connection closed by 10.0.0.1 port 60000
Jul 11 05:27:05.980100 sshd-session[5551]: pam_unix(sshd:session): session closed for user core
Jul 11 05:27:05.989709 systemd[1]: sshd@18-10.0.0.97:22-10.0.0.1:60000.service: Deactivated successfully.
Jul 11 05:27:05.992373 systemd[1]: session-19.scope: Deactivated successfully.
Jul 11 05:27:05.993983 systemd-logind[1578]: Session 19 logged out. Waiting for processes to exit.
Jul 11 05:27:05.998575 systemd[1]: Started sshd@19-10.0.0.97:22-10.0.0.1:60002.service - OpenSSH per-connection server daemon (10.0.0.1:60002).
Jul 11 05:27:06.002414 systemd-logind[1578]: Removed session 19.
Jul 11 05:27:06.061443 sshd[5595]: Accepted publickey for core from 10.0.0.1 port 60002 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc
Jul 11 05:27:06.063240 sshd-session[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:27:06.067974 systemd-logind[1578]: New session 20 of user core.
Jul 11 05:27:06.085864 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 11 05:27:06.375371 sshd[5598]: Connection closed by 10.0.0.1 port 60002
Jul 11 05:27:06.377312 sshd-session[5595]: pam_unix(sshd:session): session closed for user core
Jul 11 05:27:06.388967 systemd[1]: sshd@19-10.0.0.97:22-10.0.0.1:60002.service: Deactivated successfully.
Jul 11 05:27:06.391983 systemd[1]: session-20.scope: Deactivated successfully.
Jul 11 05:27:06.394023 systemd-logind[1578]: Session 20 logged out. Waiting for processes to exit.
Jul 11 05:27:06.397819 systemd-logind[1578]: Removed session 20.
Jul 11 05:27:06.399387 systemd[1]: Started sshd@20-10.0.0.97:22-10.0.0.1:60016.service - OpenSSH per-connection server daemon (10.0.0.1:60016).
Jul 11 05:27:06.454987 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 60016 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc
Jul 11 05:27:06.456235 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:27:06.460822 systemd-logind[1578]: New session 21 of user core.
Jul 11 05:27:06.474859 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 11 05:27:06.582668 sshd[5613]: Connection closed by 10.0.0.1 port 60016
Jul 11 05:27:06.583058 sshd-session[5610]: pam_unix(sshd:session): session closed for user core
Jul 11 05:27:06.588404 systemd[1]: sshd@20-10.0.0.97:22-10.0.0.1:60016.service: Deactivated successfully.
Jul 11 05:27:06.590354 systemd[1]: session-21.scope: Deactivated successfully.
Jul 11 05:27:06.591203 systemd-logind[1578]: Session 21 logged out. Waiting for processes to exit.
Jul 11 05:27:06.592580 systemd-logind[1578]: Removed session 21.
Jul 11 05:27:07.690010 kubelet[2738]: I0711 05:27:07.689955 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 11 05:27:10.617992 containerd[1591]: time="2025-07-11T05:27:10.617938526Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e28269351b14657dbb6c987e7396d262d6b464180a2b8210fb5dd2b733348c2\" id:\"f89cc5e52ce8563f43ad967ea4bc9c82f6e468b1f8182edcf00cb4c70ed101ac\" pid:5639 exited_at:{seconds:1752211630 nanos:617587875}"
Jul 11 05:27:11.599672 systemd[1]: Started sshd@21-10.0.0.97:22-10.0.0.1:37264.service - OpenSSH per-connection server daemon (10.0.0.1:37264).
Jul 11 05:27:11.669759 sshd[5655]: Accepted publickey for core from 10.0.0.1 port 37264 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc
Jul 11 05:27:11.672108 sshd-session[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:27:11.677344 systemd-logind[1578]: New session 22 of user core.
Jul 11 05:27:11.683858 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 11 05:27:11.842772 sshd[5658]: Connection closed by 10.0.0.1 port 37264
Jul 11 05:27:11.843438 sshd-session[5655]: pam_unix(sshd:session): session closed for user core
Jul 11 05:27:11.848206 systemd[1]: sshd@21-10.0.0.97:22-10.0.0.1:37264.service: Deactivated successfully.
Jul 11 05:27:11.850821 systemd[1]: session-22.scope: Deactivated successfully.
Jul 11 05:27:11.852068 systemd-logind[1578]: Session 22 logged out. Waiting for processes to exit.
Jul 11 05:27:11.854646 systemd-logind[1578]: Removed session 22.
Jul 11 05:27:16.857205 systemd[1]: Started sshd@22-10.0.0.97:22-10.0.0.1:37280.service - OpenSSH per-connection server daemon (10.0.0.1:37280).
Jul 11 05:27:16.916081 sshd[5673]: Accepted publickey for core from 10.0.0.1 port 37280 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc
Jul 11 05:27:16.917580 sshd-session[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:27:16.922481 systemd-logind[1578]: New session 23 of user core.
Jul 11 05:27:16.932008 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 11 05:27:17.078578 sshd[5676]: Connection closed by 10.0.0.1 port 37280
Jul 11 05:27:17.078951 sshd-session[5673]: pam_unix(sshd:session): session closed for user core
Jul 11 05:27:17.082526 systemd[1]: sshd@22-10.0.0.97:22-10.0.0.1:37280.service: Deactivated successfully.
Jul 11 05:27:17.084813 systemd[1]: session-23.scope: Deactivated successfully.
Jul 11 05:27:17.086925 systemd-logind[1578]: Session 23 logged out. Waiting for processes to exit.
Jul 11 05:27:17.088107 systemd-logind[1578]: Removed session 23.
Jul 11 05:27:22.091993 systemd[1]: Started sshd@23-10.0.0.97:22-10.0.0.1:36684.service - OpenSSH per-connection server daemon (10.0.0.1:36684).
Jul 11 05:27:22.165763 sshd[5692]: Accepted publickey for core from 10.0.0.1 port 36684 ssh2: RSA SHA256:AZqyHwk2ulkRrC8wnptF2KHrEnFxAIZ3geErY5ALWdc
Jul 11 05:27:22.167743 sshd-session[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 05:27:22.172356 systemd-logind[1578]: New session 24 of user core.
Jul 11 05:27:22.181867 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 11 05:27:22.471618 sshd[5695]: Connection closed by 10.0.0.1 port 36684
Jul 11 05:27:22.472877 sshd-session[5692]: pam_unix(sshd:session): session closed for user core
Jul 11 05:27:22.478947 systemd[1]: sshd@23-10.0.0.97:22-10.0.0.1:36684.service: Deactivated successfully.
Jul 11 05:27:22.481304 systemd[1]: session-24.scope: Deactivated successfully.
Jul 11 05:27:22.482342 systemd-logind[1578]: Session 24 logged out. Waiting for processes to exit.
Jul 11 05:27:22.483631 systemd-logind[1578]: Removed session 24.
Jul 11 05:27:23.539849 containerd[1591]: time="2025-07-11T05:27:23.539763090Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd79ce0464254676bd79bf9f896d4838e1b4306d9750d8fab8533eafdb62acb7\" id:\"dbb1b625f14e31c44e9541765407acf979d740075bbb4cb92a111dd0a080e740\" pid:5719 exited_at:{seconds:1752211643 nanos:539370693}"