Jul 10 05:39:18.869604 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 03:48:39 -00 2025 Jul 10 05:39:18.869625 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6f690b83334156407a81e8d4e91333490630194c4657a5a1ae6bc26eb28e6a0b Jul 10 05:39:18.869636 kernel: BIOS-provided physical RAM map: Jul 10 05:39:18.869643 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 10 05:39:18.869650 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 10 05:39:18.869656 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 10 05:39:18.869664 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 10 05:39:18.869671 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 10 05:39:18.869682 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 10 05:39:18.869689 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 10 05:39:18.869695 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jul 10 05:39:18.869702 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 10 05:39:18.869709 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 10 05:39:18.869715 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 10 05:39:18.869726 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 10 05:39:18.869733 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 10 05:39:18.869743 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 10 05:39:18.869750 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 10 05:39:18.869757 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 10 05:39:18.869764 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 10 05:39:18.869771 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 10 05:39:18.869778 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 10 05:39:18.869785 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 10 05:39:18.869792 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 10 05:39:18.869799 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 10 05:39:18.869808 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 10 05:39:18.869815 kernel: NX (Execute Disable) protection: active Jul 10 05:39:18.869822 kernel: APIC: Static calls initialized Jul 10 05:39:18.869830 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jul 10 05:39:18.869837 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jul 10 05:39:18.869844 kernel: extended physical RAM map: Jul 10 05:39:18.869851 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 10 05:39:18.869858 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 10 05:39:18.869865 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 10 05:39:18.869873 kernel: reserve 
setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jul 10 05:39:18.869880 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 10 05:39:18.869890 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 10 05:39:18.869917 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 10 05:39:18.869924 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jul 10 05:39:18.869932 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jul 10 05:39:18.869943 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jul 10 05:39:18.869951 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jul 10 05:39:18.869960 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jul 10 05:39:18.869968 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 10 05:39:18.869975 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 10 05:39:18.869982 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 10 05:39:18.869990 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 10 05:39:18.869997 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 10 05:39:18.870004 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 10 05:39:18.870012 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 10 05:39:18.870019 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 10 05:39:18.870026 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 10 05:39:18.870036 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 10 05:39:18.870043 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 10 05:39:18.870058 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 10 05:39:18.870065 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 10 05:39:18.870072 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 10 05:39:18.870080 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 10 05:39:18.870089 kernel: efi: EFI v2.7 by EDK II Jul 10 05:39:18.870096 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jul 10 05:39:18.870115 kernel: random: crng init done Jul 10 05:39:18.870127 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jul 10 05:39:18.870134 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jul 10 05:39:18.870147 kernel: secureboot: Secure boot disabled Jul 10 05:39:18.870164 kernel: SMBIOS 2.8 present. 
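The BIOS-e820 and extended setup_data maps printed above are also exposed by the kernel at runtime under /sys/firmware/memmap (one numbered directory per range, with start, end and type files) when CONFIG_FIRMWARE_MEMMAP is enabled. A minimal Python sketch that rebuilds the same table from sysfs; reading the files may require root:

    import os

    MEMMAP = "/sys/firmware/memmap"  # present when CONFIG_FIRMWARE_MEMMAP=y

    def read_entry(path):
        # Each numbered directory holds hex "start" and "end" values plus a type string
        # such as "System RAM", "Reserved" or "ACPI Non-volatile Storage".
        with open(os.path.join(path, "start")) as f:
            start = int(f.read().strip(), 16)
        with open(os.path.join(path, "end")) as f:
            end = int(f.read().strip(), 16)
        with open(os.path.join(path, "type")) as f:
            kind = f.read().strip()
        return start, end, kind

    entries = [read_entry(os.path.join(MEMMAP, d)) for d in os.listdir(MEMMAP)]
    for start, end, kind in sorted(entries):
        print(f"[mem {start:#018x}-{end:#018x}] {kind}")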
Jul 10 05:39:18.870181 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jul 10 05:39:18.870188 kernel: DMI: Memory slots populated: 1/1 Jul 10 05:39:18.870196 kernel: Hypervisor detected: KVM Jul 10 05:39:18.870203 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 10 05:39:18.870210 kernel: kvm-clock: using sched offset of 5101783343 cycles Jul 10 05:39:18.870218 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 10 05:39:18.870226 kernel: tsc: Detected 2794.748 MHz processor Jul 10 05:39:18.870234 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 10 05:39:18.870251 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 10 05:39:18.870271 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jul 10 05:39:18.870279 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 10 05:39:18.870287 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 10 05:39:18.870294 kernel: Using GB pages for direct mapping Jul 10 05:39:18.870302 kernel: ACPI: Early table checksum verification disabled Jul 10 05:39:18.870309 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 10 05:39:18.870317 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 10 05:39:18.870327 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 05:39:18.870337 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 05:39:18.870351 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 10 05:39:18.870359 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 05:39:18.870367 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 05:39:18.870374 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 05:39:18.870382 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 05:39:18.870389 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 10 05:39:18.870397 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jul 10 05:39:18.870404 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jul 10 05:39:18.870414 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 10 05:39:18.870422 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jul 10 05:39:18.870429 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jul 10 05:39:18.870436 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jul 10 05:39:18.870444 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jul 10 05:39:18.870451 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jul 10 05:39:18.870459 kernel: No NUMA configuration found Jul 10 05:39:18.870466 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jul 10 05:39:18.870474 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jul 10 05:39:18.870481 kernel: Zone ranges: Jul 10 05:39:18.870491 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 10 05:39:18.870498 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jul 10 05:39:18.870506 kernel: Normal empty Jul 10 05:39:18.870513 kernel: Device empty Jul 10 05:39:18.870521 kernel: Movable zone start for each node Jul 10 05:39:18.870528 
kernel: Early memory node ranges Jul 10 05:39:18.870536 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 10 05:39:18.870543 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 10 05:39:18.870554 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 10 05:39:18.870563 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jul 10 05:39:18.870571 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jul 10 05:39:18.870578 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jul 10 05:39:18.870585 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jul 10 05:39:18.870593 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jul 10 05:39:18.870600 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jul 10 05:39:18.870608 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 10 05:39:18.870617 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 10 05:39:18.870634 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 10 05:39:18.870642 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 10 05:39:18.870649 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jul 10 05:39:18.870657 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jul 10 05:39:18.870667 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 10 05:39:18.870675 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jul 10 05:39:18.870683 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jul 10 05:39:18.870690 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 10 05:39:18.870698 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 10 05:39:18.870708 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 10 05:39:18.870716 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 10 05:39:18.870724 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 10 05:39:18.870731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 10 05:39:18.870739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 10 05:39:18.870747 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 10 05:39:18.870754 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 10 05:39:18.870762 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 10 05:39:18.870770 kernel: TSC deadline timer available Jul 10 05:39:18.870780 kernel: CPU topo: Max. logical packages: 1 Jul 10 05:39:18.870787 kernel: CPU topo: Max. logical dies: 1 Jul 10 05:39:18.870795 kernel: CPU topo: Max. dies per package: 1 Jul 10 05:39:18.870803 kernel: CPU topo: Max. threads per core: 1 Jul 10 05:39:18.870810 kernel: CPU topo: Num. cores per package: 4 Jul 10 05:39:18.870818 kernel: CPU topo: Num. 
threads per package: 4 Jul 10 05:39:18.870825 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jul 10 05:39:18.870833 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 10 05:39:18.870841 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 10 05:39:18.870848 kernel: kvm-guest: setup PV sched yield Jul 10 05:39:18.870858 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jul 10 05:39:18.870866 kernel: Booting paravirtualized kernel on KVM Jul 10 05:39:18.870874 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 10 05:39:18.870882 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 10 05:39:18.870890 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jul 10 05:39:18.870994 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jul 10 05:39:18.871002 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 10 05:39:18.871010 kernel: kvm-guest: PV spinlocks enabled Jul 10 05:39:18.871018 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 10 05:39:18.871030 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6f690b83334156407a81e8d4e91333490630194c4657a5a1ae6bc26eb28e6a0b Jul 10 05:39:18.871041 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 05:39:18.871056 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 10 05:39:18.871064 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 05:39:18.871072 kernel: Fallback order for Node 0: 0 Jul 10 05:39:18.871080 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jul 10 05:39:18.871087 kernel: Policy zone: DMA32 Jul 10 05:39:18.871095 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 05:39:18.871105 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 10 05:39:18.871113 kernel: ftrace: allocating 40097 entries in 157 pages Jul 10 05:39:18.871121 kernel: ftrace: allocated 157 pages with 5 groups Jul 10 05:39:18.871128 kernel: Dynamic Preempt: voluntary Jul 10 05:39:18.871136 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 10 05:39:18.871145 kernel: rcu: RCU event tracing is enabled. Jul 10 05:39:18.871153 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 10 05:39:18.871160 kernel: Trampoline variant of Tasks RCU enabled. Jul 10 05:39:18.871168 kernel: Rude variant of Tasks RCU enabled. Jul 10 05:39:18.871178 kernel: Tracing variant of Tasks RCU enabled. Jul 10 05:39:18.871186 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 10 05:39:18.871196 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 10 05:39:18.871204 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 10 05:39:18.871212 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 10 05:39:18.871220 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
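The "Kernel command line:" entry above carries the parameters Flatcar's initrd uses to assemble the verity-protected /usr (mount.usr, verity.usrhash) plus the flatcar.first_boot marker. A small sketch of pulling those values back out of /proc/cmdline on a running system; the whitespace split is naive and ignores quoting, which is adequate for this particular command line:

    def parse_cmdline(path="/proc/cmdline"):
        # Split into key[=value] tokens; later duplicates (e.g. the repeated
        # rootflags=rw above) overwrite earlier ones.
        args = {}
        with open(path) as f:
            for token in f.read().split():
                key, sep, value = token.partition("=")
                args[key] = value if sep else None
        return args

    if __name__ == "__main__":
        args = parse_cmdline()
        print("usr device :", args.get("mount.usr"))       # /dev/mapper/usr
        print("usr hash   :", args.get("verity.usrhash"))  # dm-verity root hash
        print("first boot :", "flatcar.first_boot" in args)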
Jul 10 05:39:18.871228 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 10 05:39:18.871236 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 10 05:39:18.871243 kernel: Console: colour dummy device 80x25 Jul 10 05:39:18.871254 kernel: printk: legacy console [ttyS0] enabled Jul 10 05:39:18.871262 kernel: ACPI: Core revision 20240827 Jul 10 05:39:18.871270 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 10 05:39:18.871277 kernel: APIC: Switch to symmetric I/O mode setup Jul 10 05:39:18.871285 kernel: x2apic enabled Jul 10 05:39:18.871293 kernel: APIC: Switched APIC routing to: physical x2apic Jul 10 05:39:18.871301 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 10 05:39:18.871309 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 10 05:39:18.871316 kernel: kvm-guest: setup PV IPIs Jul 10 05:39:18.871330 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 10 05:39:18.871340 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Jul 10 05:39:18.871351 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Jul 10 05:39:18.871358 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 10 05:39:18.871366 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 10 05:39:18.871374 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 10 05:39:18.871382 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 10 05:39:18.871389 kernel: Spectre V2 : Mitigation: Retpolines Jul 10 05:39:18.871397 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 10 05:39:18.871408 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 10 05:39:18.871416 kernel: RETBleed: Mitigation: untrained return thunk Jul 10 05:39:18.871423 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 10 05:39:18.871434 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 10 05:39:18.871442 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 10 05:39:18.871451 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 10 05:39:18.871458 kernel: x86/bugs: return thunk changed Jul 10 05:39:18.871466 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 10 05:39:18.871476 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 10 05:39:18.871489 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 10 05:39:18.871497 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 10 05:39:18.871505 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 10 05:39:18.871513 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 10 05:39:18.871521 kernel: Freeing SMP alternatives memory: 32K Jul 10 05:39:18.871528 kernel: pid_max: default: 32768 minimum: 301 Jul 10 05:39:18.871536 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 10 05:39:18.871544 kernel: landlock: Up and running. Jul 10 05:39:18.871555 kernel: SELinux: Initializing. 
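The Spectre V1/V2, RETBleed and Speculative Return Stack Overflow lines above summarize the mitigation state the kernel chose for this vCPU model. The same information remains queryable after boot through sysfs; a short sketch:

    import os

    VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

    # One file per known CPU issue; the contents mirror the boot-time messages
    # (e.g. "Mitigation: Retpolines", "Vulnerable: Safe RET, no microcode").
    for name in sorted(os.listdir(VULN_DIR)):
        with open(os.path.join(VULN_DIR, name)) as f:
            print(f"{name:30} {f.read().strip()}")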
Jul 10 05:39:18.871565 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 05:39:18.871573 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 05:39:18.871587 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 10 05:39:18.871606 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 10 05:39:18.871621 kernel: ... version: 0 Jul 10 05:39:18.871637 kernel: ... bit width: 48 Jul 10 05:39:18.871644 kernel: ... generic registers: 6 Jul 10 05:39:18.871652 kernel: ... value mask: 0000ffffffffffff Jul 10 05:39:18.871662 kernel: ... max period: 00007fffffffffff Jul 10 05:39:18.871670 kernel: ... fixed-purpose events: 0 Jul 10 05:39:18.871678 kernel: ... event mask: 000000000000003f Jul 10 05:39:18.871686 kernel: signal: max sigframe size: 1776 Jul 10 05:39:18.871694 kernel: rcu: Hierarchical SRCU implementation. Jul 10 05:39:18.871702 kernel: rcu: Max phase no-delay instances is 400. Jul 10 05:39:18.871712 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 10 05:39:18.871720 kernel: smp: Bringing up secondary CPUs ... Jul 10 05:39:18.871728 kernel: smpboot: x86: Booting SMP configuration: Jul 10 05:39:18.871738 kernel: .... node #0, CPUs: #1 #2 #3 Jul 10 05:39:18.871746 kernel: smp: Brought up 1 node, 4 CPUs Jul 10 05:39:18.871754 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 10 05:39:18.871762 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54600K init, 2368K bss, 137196K reserved, 0K cma-reserved) Jul 10 05:39:18.871770 kernel: devtmpfs: initialized Jul 10 05:39:18.871778 kernel: x86/mm: Memory block size: 128MB Jul 10 05:39:18.871786 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 10 05:39:18.871794 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 10 05:39:18.871802 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jul 10 05:39:18.871813 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 10 05:39:18.871821 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jul 10 05:39:18.871829 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 10 05:39:18.871836 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 05:39:18.871844 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 10 05:39:18.871852 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 05:39:18.871862 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 05:39:18.871870 kernel: audit: initializing netlink subsys (disabled) Jul 10 05:39:18.871878 kernel: audit: type=2000 audit(1752125956.561:1): state=initialized audit_enabled=0 res=1 Jul 10 05:39:18.871888 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 05:39:18.871910 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 10 05:39:18.871918 kernel: cpuidle: using governor menu Jul 10 05:39:18.871926 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 05:39:18.871934 kernel: dca service started, version 1.12.1 Jul 10 05:39:18.871942 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 10 05:39:18.871949 kernel: PCI: Using 
configuration type 1 for base access Jul 10 05:39:18.871958 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 10 05:39:18.871965 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 05:39:18.871977 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 10 05:39:18.871985 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 05:39:18.871992 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 10 05:39:18.872000 kernel: ACPI: Added _OSI(Module Device) Jul 10 05:39:18.872008 kernel: ACPI: Added _OSI(Processor Device) Jul 10 05:39:18.872026 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 05:39:18.872035 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 05:39:18.872062 kernel: ACPI: Interpreter enabled Jul 10 05:39:18.872070 kernel: ACPI: PM: (supports S0 S3 S5) Jul 10 05:39:18.872082 kernel: ACPI: Using IOAPIC for interrupt routing Jul 10 05:39:18.872090 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 10 05:39:18.872097 kernel: PCI: Using E820 reservations for host bridge windows Jul 10 05:39:18.872105 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 10 05:39:18.872113 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 10 05:39:18.872329 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 05:39:18.872476 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 10 05:39:18.872603 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 10 05:39:18.872614 kernel: PCI host bridge to bus 0000:00 Jul 10 05:39:18.872760 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 10 05:39:18.872875 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 10 05:39:18.873005 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 10 05:39:18.873126 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jul 10 05:39:18.873236 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jul 10 05:39:18.873360 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jul 10 05:39:18.873528 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 10 05:39:18.873770 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jul 10 05:39:18.874024 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jul 10 05:39:18.874166 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jul 10 05:39:18.874287 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jul 10 05:39:18.874421 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 10 05:39:18.874557 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 10 05:39:18.874707 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 10 05:39:18.874831 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jul 10 05:39:18.874978 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jul 10 05:39:18.875111 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jul 10 05:39:18.875267 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 10 05:39:18.875410 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jul 
10 05:39:18.875533 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jul 10 05:39:18.875654 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jul 10 05:39:18.875785 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 10 05:39:18.876010 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jul 10 05:39:18.876207 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jul 10 05:39:18.876336 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jul 10 05:39:18.876473 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jul 10 05:39:18.876622 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jul 10 05:39:18.876747 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 10 05:39:18.876887 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jul 10 05:39:18.877065 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jul 10 05:39:18.877188 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jul 10 05:39:18.877330 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jul 10 05:39:18.877468 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jul 10 05:39:18.877479 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 10 05:39:18.877487 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 10 05:39:18.877495 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 10 05:39:18.877503 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 10 05:39:18.877511 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 10 05:39:18.877519 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 10 05:39:18.877527 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 10 05:39:18.877538 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 10 05:39:18.877546 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 10 05:39:18.877555 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 10 05:39:18.877563 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 10 05:39:18.877570 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 10 05:39:18.877578 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 10 05:39:18.877586 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 10 05:39:18.877594 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 10 05:39:18.877602 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 10 05:39:18.877612 kernel: iommu: Default domain type: Translated Jul 10 05:39:18.877620 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 10 05:39:18.877628 kernel: efivars: Registered efivars operations Jul 10 05:39:18.877636 kernel: PCI: Using ACPI for IRQ routing Jul 10 05:39:18.877644 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 10 05:39:18.877652 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 10 05:39:18.877660 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jul 10 05:39:18.877668 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jul 10 05:39:18.877675 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jul 10 05:39:18.877685 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jul 10 05:39:18.877693 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jul 10 05:39:18.877701 kernel: e820: reserve 
RAM buffer [mem 0x9ce91000-0x9fffffff] Jul 10 05:39:18.877709 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jul 10 05:39:18.877829 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 10 05:39:18.877972 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 10 05:39:18.878103 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 10 05:39:18.878114 kernel: vgaarb: loaded Jul 10 05:39:18.878126 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 10 05:39:18.878134 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 10 05:39:18.878142 kernel: clocksource: Switched to clocksource kvm-clock Jul 10 05:39:18.878150 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 05:39:18.878158 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 05:39:18.878166 kernel: pnp: PnP ACPI init Jul 10 05:39:18.878338 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jul 10 05:39:18.878370 kernel: pnp: PnP ACPI: found 6 devices Jul 10 05:39:18.878383 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 10 05:39:18.878391 kernel: NET: Registered PF_INET protocol family Jul 10 05:39:18.878399 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 10 05:39:18.878407 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 10 05:39:18.878415 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 05:39:18.878423 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 05:39:18.878432 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 10 05:39:18.878440 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 10 05:39:18.878448 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 05:39:18.878461 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 05:39:18.878471 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 05:39:18.878480 kernel: NET: Registered PF_XDP protocol family Jul 10 05:39:18.878636 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jul 10 05:39:18.878778 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jul 10 05:39:18.878923 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 10 05:39:18.879067 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 10 05:39:18.879198 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 10 05:39:18.879318 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jul 10 05:39:18.879447 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jul 10 05:39:18.879561 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jul 10 05:39:18.879572 kernel: PCI: CLS 0 bytes, default 64 Jul 10 05:39:18.879581 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Jul 10 05:39:18.879589 kernel: Initialise system trusted keyrings Jul 10 05:39:18.879597 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 10 05:39:18.879606 kernel: Key type asymmetric registered Jul 10 05:39:18.879617 kernel: Asymmetric key parser 'x509' registered Jul 10 05:39:18.879625 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 10 
05:39:18.879634 kernel: io scheduler mq-deadline registered Jul 10 05:39:18.879644 kernel: io scheduler kyber registered Jul 10 05:39:18.879653 kernel: io scheduler bfq registered Jul 10 05:39:18.879661 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 10 05:39:18.879672 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 10 05:39:18.879680 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 10 05:39:18.879689 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 10 05:39:18.879697 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 05:39:18.879706 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 10 05:39:18.879714 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 10 05:39:18.879722 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 10 05:39:18.879731 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 10 05:39:18.879872 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 10 05:39:18.879888 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 10 05:39:18.880058 kernel: rtc_cmos 00:04: registered as rtc0 Jul 10 05:39:18.880178 kernel: rtc_cmos 00:04: setting system clock to 2025-07-10T05:39:18 UTC (1752125958) Jul 10 05:39:18.880293 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jul 10 05:39:18.880304 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 10 05:39:18.880312 kernel: efifb: probing for efifb Jul 10 05:39:18.880322 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jul 10 05:39:18.880333 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jul 10 05:39:18.880350 kernel: efifb: scrolling: redraw Jul 10 05:39:18.880360 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 10 05:39:18.880369 kernel: Console: switching to colour frame buffer device 160x50 Jul 10 05:39:18.880377 kernel: fb0: EFI VGA frame buffer device Jul 10 05:39:18.880385 kernel: pstore: Using crash dump compression: deflate Jul 10 05:39:18.880394 kernel: pstore: Registered efi_pstore as persistent store backend Jul 10 05:39:18.880402 kernel: NET: Registered PF_INET6 protocol family Jul 10 05:39:18.880410 kernel: Segment Routing with IPv6 Jul 10 05:39:18.880419 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 05:39:18.880430 kernel: NET: Registered PF_PACKET protocol family Jul 10 05:39:18.880438 kernel: Key type dns_resolver registered Jul 10 05:39:18.880446 kernel: IPI shorthand broadcast: enabled Jul 10 05:39:18.880454 kernel: sched_clock: Marking stable (3526002697, 152790269)->(3721065756, -42272790) Jul 10 05:39:18.880462 kernel: registered taskstats version 1 Jul 10 05:39:18.880471 kernel: Loading compiled-in X.509 certificates Jul 10 05:39:18.880479 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 0b89e0dc22b3b76335f64d75ef999e68b43a7102' Jul 10 05:39:18.880487 kernel: Demotion targets for Node 0: null Jul 10 05:39:18.880495 kernel: Key type .fscrypt registered Jul 10 05:39:18.880506 kernel: Key type fscrypt-provisioning registered Jul 10 05:39:18.880514 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 05:39:18.880522 kernel: ima: Allocated hash algorithm: sha1 Jul 10 05:39:18.880531 kernel: ima: No architecture policies found Jul 10 05:39:18.880539 kernel: clk: Disabling unused clocks Jul 10 05:39:18.880547 kernel: Warning: unable to open an initial console. 
Jul 10 05:39:18.880555 kernel: Freeing unused kernel image (initmem) memory: 54600K Jul 10 05:39:18.880564 kernel: Write protecting the kernel read-only data: 24576k Jul 10 05:39:18.880572 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 10 05:39:18.880583 kernel: Run /init as init process Jul 10 05:39:18.880591 kernel: with arguments: Jul 10 05:39:18.880599 kernel: /init Jul 10 05:39:18.880607 kernel: with environment: Jul 10 05:39:18.880615 kernel: HOME=/ Jul 10 05:39:18.880623 kernel: TERM=linux Jul 10 05:39:18.880631 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 05:39:18.880641 systemd[1]: Successfully made /usr/ read-only. Jul 10 05:39:18.880654 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 05:39:18.880664 systemd[1]: Detected virtualization kvm. Jul 10 05:39:18.880672 systemd[1]: Detected architecture x86-64. Jul 10 05:39:18.880681 systemd[1]: Running in initrd. Jul 10 05:39:18.880690 systemd[1]: No hostname configured, using default hostname. Jul 10 05:39:18.880698 systemd[1]: Hostname set to . Jul 10 05:39:18.880707 systemd[1]: Initializing machine ID from VM UUID. Jul 10 05:39:18.880716 systemd[1]: Queued start job for default target initrd.target. Jul 10 05:39:18.880727 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 05:39:18.880738 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 05:39:18.880748 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 10 05:39:18.880757 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 05:39:18.880766 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 10 05:39:18.880775 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 10 05:39:18.880785 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 10 05:39:18.880796 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 10 05:39:18.880805 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 05:39:18.880814 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 05:39:18.880822 systemd[1]: Reached target paths.target - Path Units. Jul 10 05:39:18.880831 systemd[1]: Reached target slices.target - Slice Units. Jul 10 05:39:18.880840 systemd[1]: Reached target swap.target - Swaps. Jul 10 05:39:18.880849 systemd[1]: Reached target timers.target - Timer Units. Jul 10 05:39:18.880857 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 05:39:18.880868 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 05:39:18.880877 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 10 05:39:18.880886 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 10 05:39:18.880925 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
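The device units systemd is waiting for above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) are escaped block-device paths: '/' becomes '-', and characters outside [A-Za-z0-9:_.], including literal dashes, become \xNN. A simplified re-implementation of that escaping, for illustration only (systemd-escape --path is the real tool):

    def systemd_path_escape(path: str) -> str:
        """Simplified sketch of systemd's path escaping (illustration only)."""
        trimmed = path.strip("/")
        if not trimmed:
            return "-"                      # "/" itself escapes to "-"
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")             # path separators become dashes
            elif ch.isalnum() or (ch in ":_." and not (ch == "." and i == 0)):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))  # everything else, incl. "-"
        return "".join(out)

    assert systemd_path_escape("/dev/disk/by-label/EFI-SYSTEM") == \
        "dev-disk-by\\x2dlabel-EFI\\x2dSYSTEM"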
Jul 10 05:39:18.880934 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 05:39:18.880943 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 05:39:18.880951 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 05:39:18.880960 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 05:39:18.880969 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 05:39:18.880981 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 10 05:39:18.880990 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 10 05:39:18.880999 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 05:39:18.881008 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 05:39:18.881017 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 05:39:18.881026 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 05:39:18.881035 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 05:39:18.881053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 05:39:18.881063 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 05:39:18.881072 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 05:39:18.881108 systemd-journald[220]: Collecting audit messages is disabled. Jul 10 05:39:18.881131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 05:39:18.881140 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 05:39:18.881149 systemd-journald[220]: Journal started Jul 10 05:39:18.881170 systemd-journald[220]: Runtime Journal (/run/log/journal/fde8dc2f8f4c4f65810decb40f2f1f8e) is 6M, max 48.5M, 42.4M free. Jul 10 05:39:18.872145 systemd-modules-load[222]: Inserted module 'overlay' Jul 10 05:39:18.883390 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 05:39:18.886085 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 05:39:18.902069 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 05:39:18.904602 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 05:39:18.906097 systemd-modules-load[222]: Inserted module 'br_netfilter' Jul 10 05:39:18.906928 kernel: Bridge firewalling registered Jul 10 05:39:18.907731 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 05:39:18.909299 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 05:39:18.913642 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 05:39:18.925155 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 05:39:18.927334 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 05:39:18.930432 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 10 05:39:18.933814 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Jul 10 05:39:18.944035 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 05:39:18.944762 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 05:39:18.949514 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 05:39:18.961938 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6f690b83334156407a81e8d4e91333490630194c4657a5a1ae6bc26eb28e6a0b Jul 10 05:39:19.001709 systemd-resolved[266]: Positive Trust Anchors: Jul 10 05:39:19.001733 systemd-resolved[266]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 05:39:19.001762 systemd-resolved[266]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 05:39:19.004837 systemd-resolved[266]: Defaulting to hostname 'linux'. Jul 10 05:39:19.006216 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 05:39:19.011072 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 05:39:19.094934 kernel: SCSI subsystem initialized Jul 10 05:39:19.104928 kernel: Loading iSCSI transport class v2.0-870. Jul 10 05:39:19.117924 kernel: iscsi: registered transport (tcp) Jul 10 05:39:19.141930 kernel: iscsi: registered transport (qla4xxx) Jul 10 05:39:19.141967 kernel: QLogic iSCSI HBA Driver Jul 10 05:39:19.170986 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 05:39:19.210055 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 05:39:19.212302 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 05:39:19.291881 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 05:39:19.294694 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 05:39:19.353943 kernel: raid6: avx2x4 gen() 28642 MB/s Jul 10 05:39:19.370934 kernel: raid6: avx2x2 gen() 30237 MB/s Jul 10 05:39:19.387963 kernel: raid6: avx2x1 gen() 25785 MB/s Jul 10 05:39:19.387992 kernel: raid6: using algorithm avx2x2 gen() 30237 MB/s Jul 10 05:39:19.406249 kernel: raid6: .... xor() 15820 MB/s, rmw enabled Jul 10 05:39:19.406295 kernel: raid6: using avx2x2 recovery algorithm Jul 10 05:39:19.428940 kernel: xor: automatically using best checksumming function avx Jul 10 05:39:19.607980 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 05:39:19.619606 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 05:39:19.622094 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 05:39:19.656110 systemd-udevd[473]: Using default interface naming scheme 'v255'. 
Jul 10 05:39:19.662146 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 05:39:19.667353 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 05:39:19.701680 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation Jul 10 05:39:19.738497 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 05:39:19.741371 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 05:39:19.830882 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 05:39:19.832483 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 05:39:19.870922 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 10 05:39:19.875478 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 05:39:19.881691 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 05:39:19.881740 kernel: GPT:9289727 != 19775487 Jul 10 05:39:19.881775 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 05:39:19.881797 kernel: GPT:9289727 != 19775487 Jul 10 05:39:19.881817 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 05:39:19.881838 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 05:39:19.894915 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 10 05:39:19.897934 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 05:39:19.908921 kernel: libata version 3.00 loaded. Jul 10 05:39:19.913774 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 05:39:19.913933 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 05:39:19.918380 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 05:39:19.921838 kernel: AES CTR mode by8 optimization enabled Jul 10 05:39:19.922089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 05:39:19.924787 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 05:39:19.931541 kernel: ahci 0000:00:1f.2: version 3.0 Jul 10 05:39:19.931740 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 10 05:39:19.935087 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 10 05:39:19.935293 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 10 05:39:19.935449 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 10 05:39:19.941578 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 05:39:19.941732 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 05:39:19.945404 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 05:39:19.956453 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
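The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", 9289727 != 19775487) are the usual sign of a disk image built for a smaller disk and then attached to a larger virtual one: the backup GPT header still sits at the image's original last LBA. A sketch of checking for that condition directly, assuming 512-byte sectors and read access to the device (typically root):

    import struct

    DEV = "/dev/vda"   # the virtio disk from the log
    SECTOR = 512

    with open(DEV, "rb") as f:
        f.seek(SECTOR)                       # primary GPT header lives in LBA 1
        hdr = f.read(92)
        f.seek(0, 2)                         # seek to end to get the device size
        last_lba = f.tell() // SECTOR - 1

    signature = hdr[0:8]                                   # should be b"EFI PART"
    alternate_lba = struct.unpack_from("<Q", hdr, 32)[0]   # where the backup header claims to be

    print("signature    :", signature)
    print("alternate LBA:", alternate_lba)   # 9289727 in the log above
    print("last LBA     :", last_lba)        # 19775487 in the log above
    print("backup at end:", alternate_lba == last_lba)

Tools like sgdisk -e or parted can relocate the backup structures to the real end of the disk, which is what the kernel's "Use GNU Parted to correct GPT errors" hint refers to.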
Jul 10 05:39:19.967533 kernel: scsi host0: ahci Jul 10 05:39:19.969032 kernel: scsi host1: ahci Jul 10 05:39:19.969929 kernel: scsi host2: ahci Jul 10 05:39:19.971218 kernel: scsi host3: ahci Jul 10 05:39:19.971449 kernel: scsi host4: ahci Jul 10 05:39:19.972395 kernel: scsi host5: ahci Jul 10 05:39:19.973375 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jul 10 05:39:19.973396 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jul 10 05:39:19.974273 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jul 10 05:39:19.975986 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jul 10 05:39:19.976004 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jul 10 05:39:19.977708 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jul 10 05:39:19.986603 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 10 05:39:20.005345 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 05:39:20.016499 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 10 05:39:20.017270 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 10 05:39:20.018537 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 05:39:20.025066 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 05:39:20.045010 disk-uuid[635]: Primary Header is updated. Jul 10 05:39:20.045010 disk-uuid[635]: Secondary Entries is updated. Jul 10 05:39:20.045010 disk-uuid[635]: Secondary Header is updated. Jul 10 05:39:20.049963 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 05:39:20.056945 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 05:39:20.067696 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 05:39:20.290938 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 10 05:39:20.291053 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 10 05:39:20.291068 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 10 05:39:20.291080 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 10 05:39:20.291096 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 10 05:39:20.291931 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 10 05:39:20.292928 kernel: ata3.00: applying bridge limits Jul 10 05:39:20.292964 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 10 05:39:20.293934 kernel: ata3.00: configured for UDMA/100 Jul 10 05:39:20.294945 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 10 05:39:20.341482 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 10 05:39:20.341759 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 05:39:20.362025 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 10 05:39:20.780554 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 05:39:20.786518 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 05:39:20.787112 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 05:39:20.787423 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jul 10 05:39:20.788721 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 05:39:20.811475 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 05:39:21.057961 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 05:39:21.058035 disk-uuid[638]: The operation has completed successfully. Jul 10 05:39:21.091671 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 05:39:21.091790 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 05:39:21.125286 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 05:39:21.155338 sh[669]: Success Jul 10 05:39:21.174343 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 05:39:21.174380 kernel: device-mapper: uevent: version 1.0.3 Jul 10 05:39:21.175396 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 05:39:21.184922 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 10 05:39:21.219274 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 05:39:21.223253 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 05:39:21.245091 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 05:39:21.255101 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 05:39:21.255161 kernel: BTRFS: device fsid 511ba16f-9623-4757-a014-7759f3bcc596 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (681) Jul 10 05:39:21.256536 kernel: BTRFS info (device dm-0): first mount of filesystem 511ba16f-9623-4757-a014-7759f3bcc596 Jul 10 05:39:21.256563 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 05:39:21.258042 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 05:39:21.262647 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 05:39:21.264105 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 05:39:21.265553 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 05:39:21.266394 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 05:39:21.268154 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 05:39:21.296959 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (714) Jul 10 05:39:21.299044 kernel: BTRFS info (device vda6): first mount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 05:39:21.299069 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 05:39:21.299080 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 05:39:21.306931 kernel: BTRFS info (device vda6): last unmount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 05:39:21.307794 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 05:39:21.311520 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 05:39:21.444949 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 05:39:21.449747 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 10 05:39:21.468167 ignition[759]: Ignition 2.21.0 Jul 10 05:39:21.468184 ignition[759]: Stage: fetch-offline Jul 10 05:39:21.468224 ignition[759]: no configs at "/usr/lib/ignition/base.d" Jul 10 05:39:21.468234 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 05:39:21.468335 ignition[759]: parsed url from cmdline: "" Jul 10 05:39:21.468339 ignition[759]: no config URL provided Jul 10 05:39:21.468345 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 05:39:21.468353 ignition[759]: no config at "/usr/lib/ignition/user.ign" Jul 10 05:39:21.468386 ignition[759]: op(1): [started] loading QEMU firmware config module Jul 10 05:39:21.468392 ignition[759]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 05:39:21.482781 ignition[759]: op(1): [finished] loading QEMU firmware config module Jul 10 05:39:21.483529 ignition[759]: QEMU firmware config was not found. Ignoring... Jul 10 05:39:21.502312 systemd-networkd[855]: lo: Link UP Jul 10 05:39:21.502322 systemd-networkd[855]: lo: Gained carrier Jul 10 05:39:21.504060 systemd-networkd[855]: Enumeration completed Jul 10 05:39:21.504460 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 05:39:21.504479 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 05:39:21.504484 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 05:39:21.504979 systemd-networkd[855]: eth0: Link UP Jul 10 05:39:21.504983 systemd-networkd[855]: eth0: Gained carrier Jul 10 05:39:21.504999 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 05:39:21.508105 systemd[1]: Reached target network.target - Network. Jul 10 05:39:21.530936 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 05:39:21.537669 ignition[759]: parsing config with SHA512: f58ae5b07f1abe96653a0d76cb38c4944930339d78a444d74e5a8015dab9200f1d9879e78784fdb99acd5588dfb368b6857077ce2204f3bc5aec5f979e891860 Jul 10 05:39:21.543858 unknown[759]: fetched base config from "system" Jul 10 05:39:21.543872 unknown[759]: fetched user config from "qemu" Jul 10 05:39:21.544544 ignition[759]: fetch-offline: fetch-offline passed Jul 10 05:39:21.544611 ignition[759]: Ignition finished successfully Jul 10 05:39:21.548079 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 05:39:21.549449 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 05:39:21.550340 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 05:39:21.607632 ignition[863]: Ignition 2.21.0 Jul 10 05:39:21.607646 ignition[863]: Stage: kargs Jul 10 05:39:21.607869 ignition[863]: no configs at "/usr/lib/ignition/base.d" Jul 10 05:39:21.607883 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 05:39:21.612225 ignition[863]: kargs: kargs passed Jul 10 05:39:21.612385 ignition[863]: Ignition finished successfully Jul 10 05:39:21.618692 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 05:39:21.620803 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
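The fetch-offline stage above reports no config at /usr/lib/ignition/user.ign and falls back to the base and provider ("qemu") configs. As a hedged sketch only, a minimal user config of the kind that stage probes for could be generated as below; the field names follow the Ignition v3 config shape, while the version number and key material are placeholder assumptions, not values from this system.

import json

# Minimal Ignition-style user config: one user with an SSH key (placeholder values).
user_config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version, adjust to the target Ignition release
    "passwd": {
        "users": [
            {
                "name": "core",
                "sshAuthorizedKeys": ["ssh-ed25519 AAAA...example-key"],
            }
        ]
    },
}

# Serialize to user.ign; on an image this would be baked in at /usr/lib/ignition/user.ign.
with open("user.ign", "w") as fh:
    json.dump(user_config, fh, indent=2)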
Jul 10 05:39:21.660582 ignition[871]: Ignition 2.21.0 Jul 10 05:39:21.660596 ignition[871]: Stage: disks Jul 10 05:39:21.660717 ignition[871]: no configs at "/usr/lib/ignition/base.d" Jul 10 05:39:21.660727 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 05:39:21.663946 ignition[871]: disks: disks passed Jul 10 05:39:21.664084 ignition[871]: Ignition finished successfully Jul 10 05:39:21.667132 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 05:39:21.667651 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 05:39:21.667918 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 05:39:21.668390 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 05:39:21.668707 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 05:39:21.669189 systemd[1]: Reached target basic.target - Basic System. Jul 10 05:39:21.670470 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 05:39:21.700235 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 10 05:39:21.708009 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 05:39:21.712133 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 05:39:21.865924 kernel: EXT4-fs (vda9): mounted filesystem f2872d8e-bdd9-4186-89ae-300fdf795a28 r/w with ordered data mode. Quota mode: none. Jul 10 05:39:21.866244 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 05:39:21.868458 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 05:39:21.872230 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 05:39:21.873910 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 05:39:21.874988 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 05:39:21.875030 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 05:39:21.875053 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 05:39:21.892057 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 05:39:21.894603 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 05:39:21.898938 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (889) Jul 10 05:39:21.900971 kernel: BTRFS info (device vda6): first mount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 05:39:21.901006 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 05:39:21.901018 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 05:39:21.905005 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 05:39:21.938296 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 05:39:21.942364 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory Jul 10 05:39:21.946811 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 05:39:21.950642 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 05:39:22.046169 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 05:39:22.047597 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jul 10 05:39:22.051665 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 05:39:22.066920 kernel: BTRFS info (device vda6): last unmount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 05:39:22.081079 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 05:39:22.127177 ignition[1003]: INFO : Ignition 2.21.0 Jul 10 05:39:22.127177 ignition[1003]: INFO : Stage: mount Jul 10 05:39:22.129199 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 05:39:22.129199 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 05:39:22.131720 ignition[1003]: INFO : mount: mount passed Jul 10 05:39:22.131720 ignition[1003]: INFO : Ignition finished successfully Jul 10 05:39:22.136011 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 05:39:22.138997 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 05:39:22.254105 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 05:39:22.256665 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 05:39:22.285916 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015) Jul 10 05:39:22.285945 kernel: BTRFS info (device vda6): first mount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 05:39:22.287475 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 05:39:22.287489 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 05:39:22.291816 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 05:39:22.328926 ignition[1032]: INFO : Ignition 2.21.0 Jul 10 05:39:22.328926 ignition[1032]: INFO : Stage: files Jul 10 05:39:22.330529 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 05:39:22.330529 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 05:39:22.332576 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping Jul 10 05:39:22.333811 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 05:39:22.333811 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 05:39:22.336541 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 05:39:22.336541 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 05:39:22.339303 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 05:39:22.337134 unknown[1032]: wrote ssh authorized keys file for user: core Jul 10 05:39:22.341686 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 10 05:39:22.341686 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 10 05:39:22.377084 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 05:39:22.639625 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 10 05:39:22.641748 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 10 05:39:22.641748 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Jul 10 05:39:22.641748 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 05:39:22.641748 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 05:39:22.641748 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 05:39:22.641748 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 05:39:22.641748 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 05:39:22.641748 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 05:39:22.654988 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 05:39:22.656793 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 05:39:22.658474 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 05:39:22.662905 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 05:39:22.662905 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 05:39:22.667423 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 10 05:39:22.690077 systemd-networkd[855]: eth0: Gained IPv6LL Jul 10 05:39:23.376524 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 10 05:39:23.830301 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 05:39:23.830301 ignition[1032]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 10 05:39:23.834018 ignition[1032]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 05:39:23.837728 ignition[1032]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 05:39:23.837728 ignition[1032]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 10 05:39:23.837728 ignition[1032]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 10 05:39:23.842121 ignition[1032]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 05:39:23.842121 ignition[1032]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 05:39:23.842121 ignition[1032]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" Jul 10 05:39:23.842121 ignition[1032]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 10 05:39:23.863693 ignition[1032]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 05:39:23.869111 ignition[1032]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 05:39:23.870684 ignition[1032]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 10 05:39:23.870684 ignition[1032]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 10 05:39:23.870684 ignition[1032]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 05:39:23.870684 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 05:39:23.870684 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 05:39:23.870684 ignition[1032]: INFO : files: files passed Jul 10 05:39:23.870684 ignition[1032]: INFO : Ignition finished successfully Jul 10 05:39:23.876152 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 10 05:39:23.878630 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 10 05:39:23.883779 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 10 05:39:23.900527 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 05:39:23.900672 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 10 05:39:23.903626 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory Jul 10 05:39:23.906192 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 05:39:23.907882 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 05:39:23.907882 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 10 05:39:23.911116 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 05:39:23.913527 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 10 05:39:23.916434 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 10 05:39:23.953267 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 05:39:23.953407 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 10 05:39:23.953979 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 10 05:39:23.956670 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 10 05:39:23.958550 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 10 05:39:23.960244 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 10 05:39:23.993643 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 05:39:23.995609 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 10 05:39:24.022239 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jul 10 05:39:24.022585 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 05:39:24.024743 systemd[1]: Stopped target timers.target - Timer Units. Jul 10 05:39:24.026843 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 05:39:24.026987 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 05:39:24.028937 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 10 05:39:24.029400 systemd[1]: Stopped target basic.target - Basic System. Jul 10 05:39:24.029712 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 10 05:39:24.030200 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 05:39:24.030519 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 10 05:39:24.030839 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 10 05:39:24.031332 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 10 05:39:24.031645 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 05:39:24.032003 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 10 05:39:24.032459 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 10 05:39:24.032771 systemd[1]: Stopped target swap.target - Swaps. Jul 10 05:39:24.033239 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 05:39:24.033350 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 10 05:39:24.051146 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 10 05:39:24.051505 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 05:39:24.051791 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 10 05:39:24.057028 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 05:39:24.057610 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 05:39:24.057717 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 05:39:24.060780 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 05:39:24.060910 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 05:39:24.063312 systemd[1]: Stopped target paths.target - Path Units. Jul 10 05:39:24.065421 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 05:39:24.070027 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 05:39:24.070368 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 05:39:24.072878 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 05:39:24.073365 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 05:39:24.073457 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 05:39:24.076286 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 05:39:24.076374 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 05:39:24.077972 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 05:39:24.078092 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 05:39:24.078420 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 05:39:24.078520 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jul 10 05:39:24.082397 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 05:39:24.088128 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 10 05:39:24.088501 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 05:39:24.088669 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 05:39:24.091029 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 05:39:24.091184 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 05:39:24.099088 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 05:39:24.099207 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 10 05:39:24.112340 ignition[1087]: INFO : Ignition 2.21.0 Jul 10 05:39:24.112340 ignition[1087]: INFO : Stage: umount Jul 10 05:39:24.114244 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 05:39:24.114244 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 05:39:24.116463 ignition[1087]: INFO : umount: umount passed Jul 10 05:39:24.116463 ignition[1087]: INFO : Ignition finished successfully Jul 10 05:39:24.118969 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 05:39:24.119128 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 10 05:39:24.120728 systemd[1]: Stopped target network.target - Network. Jul 10 05:39:24.122279 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 05:39:24.122361 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 05:39:24.122614 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 05:39:24.122662 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 05:39:24.122920 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 05:39:24.122984 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 05:39:24.123402 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 05:39:24.123448 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 05:39:24.123875 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 05:39:24.131038 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 05:39:24.140270 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 05:39:24.140688 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 05:39:24.146478 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 05:39:24.146627 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 10 05:39:24.147261 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 05:39:24.147400 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 05:39:24.149430 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 10 05:39:24.150384 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 10 05:39:24.151294 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 05:39:24.151350 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 05:39:24.154273 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 05:39:24.156145 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jul 10 05:39:24.156204 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 05:39:24.156576 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 05:39:24.156620 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 05:39:24.161525 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 05:39:24.161580 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 10 05:39:24.162247 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 05:39:24.162293 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 05:39:24.166803 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 05:39:24.168404 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 05:39:24.168471 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 10 05:39:24.187680 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 05:39:24.189069 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 05:39:24.189669 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 05:39:24.189715 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 05:39:24.194352 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 05:39:24.194402 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 05:39:24.197273 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 05:39:24.197359 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 05:39:24.199968 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 05:39:24.200029 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 05:39:24.202561 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 05:39:24.202619 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 05:39:24.206556 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 05:39:24.206809 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 10 05:39:24.206866 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 05:39:24.210825 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 05:39:24.210876 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 05:39:24.214122 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 10 05:39:24.214172 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 05:39:24.217336 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 05:39:24.217388 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 05:39:24.217839 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 05:39:24.217885 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 05:39:24.224371 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. 
Jul 10 05:39:24.224431 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 10 05:39:24.224476 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 10 05:39:24.224524 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 05:39:24.224877 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 05:39:24.228095 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 10 05:39:24.239802 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 05:39:24.239964 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 05:39:24.329949 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 05:39:24.330097 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 05:39:24.331076 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 05:39:24.331363 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 05:39:24.331418 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 05:39:24.338788 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 05:39:24.367847 systemd[1]: Switching root. Jul 10 05:39:24.408072 systemd-journald[220]: Journal stopped Jul 10 05:39:25.782117 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Jul 10 05:39:25.782187 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 05:39:25.782202 kernel: SELinux: policy capability open_perms=1 Jul 10 05:39:25.782213 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 05:39:25.782225 kernel: SELinux: policy capability always_check_network=0 Jul 10 05:39:25.782236 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 05:39:25.782251 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 05:39:25.782262 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 05:39:25.782273 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 05:39:25.782285 kernel: SELinux: policy capability userspace_initial_context=0 Jul 10 05:39:25.782296 kernel: audit: type=1403 audit(1752125964.824:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 05:39:25.782309 systemd[1]: Successfully loaded SELinux policy in 59.404ms. Jul 10 05:39:25.782339 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.378ms. Jul 10 05:39:25.782352 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 05:39:25.782375 systemd[1]: Detected virtualization kvm. Jul 10 05:39:25.782390 systemd[1]: Detected architecture x86-64. Jul 10 05:39:25.782402 systemd[1]: Detected first boot. Jul 10 05:39:25.782414 systemd[1]: Initializing machine ID from VM UUID. Jul 10 05:39:25.782426 zram_generator::config[1133]: No configuration found. 
Jul 10 05:39:25.782438 kernel: Guest personality initialized and is inactive Jul 10 05:39:25.782450 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 10 05:39:25.782461 kernel: Initialized host personality Jul 10 05:39:25.782478 kernel: NET: Registered PF_VSOCK protocol family Jul 10 05:39:25.782490 systemd[1]: Populated /etc with preset unit settings. Jul 10 05:39:25.782509 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 10 05:39:25.782521 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 05:39:25.782533 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 10 05:39:25.782545 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 05:39:25.782557 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 05:39:25.782569 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 05:39:25.782581 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 05:39:25.782593 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 10 05:39:25.782608 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 05:39:25.782620 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 05:39:25.782632 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 05:39:25.782652 systemd[1]: Created slice user.slice - User and Session Slice. Jul 10 05:39:25.782664 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 05:39:25.782677 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 05:39:25.782688 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 10 05:39:25.782700 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 05:39:25.782713 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 05:39:25.782727 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 05:39:25.782739 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 10 05:39:25.782751 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 05:39:25.782763 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 05:39:25.782775 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 10 05:39:25.782787 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 10 05:39:25.782799 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 10 05:39:25.782813 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 05:39:25.782825 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 05:39:25.782841 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 05:39:25.782854 systemd[1]: Reached target slices.target - Slice Units. Jul 10 05:39:25.782866 systemd[1]: Reached target swap.target - Swaps. Jul 10 05:39:25.782877 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jul 10 05:39:25.782909 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 05:39:25.782922 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 10 05:39:25.782943 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 05:39:25.782955 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 05:39:25.782971 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 05:39:25.782983 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 05:39:25.782995 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 10 05:39:25.783007 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 10 05:39:25.783020 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 05:39:25.783032 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 05:39:25.783044 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 05:39:25.783056 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 05:39:25.783071 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 05:39:25.783084 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 05:39:25.783096 systemd[1]: Reached target machines.target - Containers. Jul 10 05:39:25.783108 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 05:39:25.783121 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 05:39:25.783133 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 05:39:25.783145 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 05:39:25.783157 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 05:39:25.783169 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 05:39:25.783183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 05:39:25.783195 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 05:39:25.783214 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 05:39:25.783227 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 05:39:25.783239 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 05:39:25.783251 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 05:39:25.783263 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 05:39:25.783275 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 05:39:25.783290 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 05:39:25.783302 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 05:39:25.783314 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jul 10 05:39:25.783326 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 05:39:25.783339 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 05:39:25.783353 kernel: loop: module loaded Jul 10 05:39:25.783366 kernel: fuse: init (API version 7.41) Jul 10 05:39:25.783378 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 10 05:39:25.783390 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 05:39:25.783402 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 05:39:25.783415 systemd[1]: Stopped verity-setup.service. Jul 10 05:39:25.783429 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 05:39:25.783442 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 05:39:25.783454 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 05:39:25.783489 systemd-journald[1197]: Collecting audit messages is disabled. Jul 10 05:39:25.783532 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 05:39:25.783547 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 10 05:39:25.783559 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 05:39:25.783572 systemd-journald[1197]: Journal started Jul 10 05:39:25.783594 systemd-journald[1197]: Runtime Journal (/run/log/journal/fde8dc2f8f4c4f65810decb40f2f1f8e) is 6M, max 48.5M, 42.4M free. Jul 10 05:39:25.786280 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 05:39:25.786345 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 05:39:25.457273 systemd[1]: Queued start job for default target multi-user.target. Jul 10 05:39:25.481165 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 10 05:39:25.787975 kernel: ACPI: bus type drm_connector registered Jul 10 05:39:25.481698 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 05:39:25.791304 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 05:39:25.792422 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 10 05:39:25.794030 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 05:39:25.794269 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 05:39:25.795769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 05:39:25.796011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 05:39:25.797430 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 05:39:25.797658 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 05:39:25.799159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 05:39:25.799437 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 05:39:25.800959 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 05:39:25.801179 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 05:39:25.802753 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 05:39:25.803035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 10 05:39:25.804525 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 05:39:25.806077 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 05:39:25.807745 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 05:39:25.809386 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 10 05:39:25.825861 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 05:39:25.828645 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 05:39:25.832114 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 05:39:25.833488 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 05:39:25.833530 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 05:39:25.835669 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 10 05:39:25.845044 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 05:39:25.846558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 05:39:25.848290 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 10 05:39:25.852101 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 05:39:25.853327 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 05:39:25.854378 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 05:39:25.855589 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 05:39:25.856693 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 05:39:25.864209 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 05:39:25.868024 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 05:39:25.871057 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 05:39:25.872375 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 05:39:25.889484 systemd-journald[1197]: Time spent on flushing to /var/log/journal/fde8dc2f8f4c4f65810decb40f2f1f8e is 15.743ms for 1072 entries. Jul 10 05:39:25.889484 systemd-journald[1197]: System Journal (/var/log/journal/fde8dc2f8f4c4f65810decb40f2f1f8e) is 8M, max 195.6M, 187.6M free. Jul 10 05:39:25.914262 kernel: loop0: detected capacity change from 0 to 146488 Jul 10 05:39:25.914293 systemd-journald[1197]: Received client request to flush runtime journal. Jul 10 05:39:25.896160 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 05:39:25.928354 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 05:39:25.930328 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 05:39:25.932686 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jul 10 05:39:25.934932 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 05:39:25.937725 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 05:39:25.942053 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 10 05:39:25.965094 kernel: loop1: detected capacity change from 0 to 224512 Jul 10 05:39:25.967317 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Jul 10 05:39:25.967336 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Jul 10 05:39:25.976792 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 05:39:25.984813 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 05:39:25.989979 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 10 05:39:26.011961 kernel: loop2: detected capacity change from 0 to 114000 Jul 10 05:39:26.023823 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 05:39:26.028077 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 05:39:26.070930 kernel: loop3: detected capacity change from 0 to 146488 Jul 10 05:39:26.077929 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Jul 10 05:39:26.078387 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Jul 10 05:39:26.085838 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 05:39:26.086938 kernel: loop4: detected capacity change from 0 to 224512 Jul 10 05:39:26.097920 kernel: loop5: detected capacity change from 0 to 114000 Jul 10 05:39:26.108518 (sd-merge)[1278]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 10 05:39:26.109189 (sd-merge)[1278]: Merged extensions into '/usr'. Jul 10 05:39:26.114048 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 05:39:26.114069 systemd[1]: Reloading... Jul 10 05:39:26.209264 zram_generator::config[1305]: No configuration found. Jul 10 05:39:26.340108 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 05:39:26.372872 ldconfig[1247]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 05:39:26.422695 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 05:39:26.423021 systemd[1]: Reloading finished in 308 ms. Jul 10 05:39:26.455621 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 05:39:26.457656 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 05:39:26.473807 systemd[1]: Starting ensure-sysext.service... Jul 10 05:39:26.476074 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 05:39:26.498702 systemd[1]: Reload requested from client PID 1342 ('systemctl') (unit ensure-sysext.service)... Jul 10 05:39:26.498719 systemd[1]: Reloading... Jul 10 05:39:26.507251 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 10 05:39:26.507313 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Jul 10 05:39:26.507720 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 05:39:26.508326 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 05:39:26.509425 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 05:39:26.509762 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Jul 10 05:39:26.509848 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Jul 10 05:39:26.517701 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 05:39:26.517719 systemd-tmpfiles[1343]: Skipping /boot Jul 10 05:39:26.539726 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 05:39:26.539746 systemd-tmpfiles[1343]: Skipping /boot Jul 10 05:39:26.576971 zram_generator::config[1373]: No configuration found. Jul 10 05:39:26.663139 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 05:39:26.748882 systemd[1]: Reloading finished in 249 ms. Jul 10 05:39:26.773641 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 05:39:26.792251 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 05:39:26.801693 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 05:39:26.803194 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 05:39:26.805721 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 05:39:26.807016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 05:39:26.808266 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 05:39:26.813782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 05:39:26.816910 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 05:39:26.818298 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 05:39:26.818429 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 05:39:26.819860 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 05:39:26.825887 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 05:39:26.830517 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 05:39:26.833760 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 05:39:26.835149 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 05:39:26.838854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 05:39:26.839186 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jul 10 05:39:26.841194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 05:39:26.841438 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 05:39:26.843261 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 05:39:26.843499 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 05:39:26.856102 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 05:39:26.861427 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 05:39:26.868363 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 05:39:26.868596 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 05:39:26.871091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 05:39:26.874021 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 05:39:26.874945 systemd-udevd[1422]: Using default interface naming scheme 'v255'. Jul 10 05:39:26.876159 augenrules[1443]: No rules Jul 10 05:39:26.881476 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 05:39:26.886250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 05:39:26.887471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 05:39:26.887673 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 05:39:26.889344 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 05:39:26.894221 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 05:39:26.896026 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 05:39:26.898132 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 05:39:26.898444 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 05:39:26.900109 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 05:39:26.901862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 05:39:26.902282 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 05:39:26.903883 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 05:39:26.904432 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 05:39:26.906063 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 05:39:26.906287 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 05:39:26.908347 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 05:39:26.910121 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 05:39:26.910354 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 05:39:26.912041 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 05:39:26.921590 systemd[1]: Finished ensure-sysext.service. 
Jul 10 05:39:26.936072 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 05:39:26.938852 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 05:39:26.939001 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 05:39:26.943039 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 10 05:39:26.944975 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 05:39:26.959035 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 05:39:27.030672 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 10 05:39:27.065789 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 05:39:27.068494 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 05:39:27.080922 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 05:39:27.093926 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 10 05:39:27.096169 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 05:39:27.098920 kernel: ACPI: button: Power Button [PWRF] Jul 10 05:39:27.114687 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 10 05:39:27.114988 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 10 05:39:27.115177 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 10 05:39:27.127241 systemd-networkd[1480]: lo: Link UP Jul 10 05:39:27.127256 systemd-networkd[1480]: lo: Gained carrier Jul 10 05:39:27.129012 systemd-networkd[1480]: Enumeration completed Jul 10 05:39:27.129121 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 05:39:27.130643 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 05:39:27.130656 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 05:39:27.132584 systemd-networkd[1480]: eth0: Link UP Jul 10 05:39:27.132775 systemd-networkd[1480]: eth0: Gained carrier Jul 10 05:39:27.132799 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 05:39:27.133158 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 05:39:27.136184 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 05:39:27.146956 systemd-networkd[1480]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 05:39:27.164849 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 05:39:27.169918 systemd-resolved[1421]: Positive Trust Anchors: Jul 10 05:39:27.169936 systemd-resolved[1421]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 05:39:27.169968 systemd-resolved[1421]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 05:39:27.175264 systemd-resolved[1421]: Defaulting to hostname 'linux'. Jul 10 05:39:27.177188 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 05:39:27.178466 systemd[1]: Reached target network.target - Network. Jul 10 05:39:27.179378 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 05:39:27.185445 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 10 05:39:27.187022 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 05:39:27.188283 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 05:39:27.189519 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 05:39:27.190981 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 10 05:39:27.192119 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 05:39:27.193501 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 05:39:27.193525 systemd[1]: Reached target paths.target - Path Units. Jul 10 05:39:27.194967 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 05:39:27.196206 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 05:39:28.554208 systemd-timesyncd[1481]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 05:39:28.554253 systemd-timesyncd[1481]: Initial clock synchronization to Thu 2025-07-10 05:39:28.554133 UTC. Jul 10 05:39:28.554288 systemd-resolved[1421]: Clock change detected. Flushing caches. Jul 10 05:39:28.554588 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 05:39:28.555823 systemd[1]: Reached target timers.target - Timer Units. Jul 10 05:39:28.558335 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 05:39:28.561253 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 05:39:28.569476 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 05:39:28.571980 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 05:39:28.573748 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 05:39:28.605354 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 05:39:28.606749 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 05:39:28.608680 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 05:39:28.610679 systemd[1]: Reached target sockets.target - Socket Units. 
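Note the apparent jump in the log clock here: entries go from 05:39:27.19 straight to 05:39:28.55 because systemd-timesyncd stepped the system clock on its first synchronization against 10.0.0.1, and systemd-resolved reacted by flushing its caches. A quick sketch of the step size, taken from the two adjacent timestamps above (this slightly overstates the step, since a little real time also passed between the two entries):

    from datetime import datetime

    # Last entry stamped before the clock step vs. first entry stamped after it (from the log above)
    before = datetime.fromisoformat("2025-07-10 05:39:27.196206")
    after = datetime.fromisoformat("2025-07-10 05:39:28.554208")
    print(after - before)  # 0:00:01.358002, roughly how far the clock was stepped forward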
Jul 10 05:39:28.612280 systemd[1]: Reached target basic.target - Basic System. Jul 10 05:39:28.613264 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 05:39:28.613290 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 05:39:28.614673 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 05:39:28.617124 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 05:39:28.620605 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 05:39:28.628001 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 05:39:28.631686 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 05:39:28.632745 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 05:39:28.639136 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 10 05:39:28.643530 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 05:39:28.648616 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 05:39:28.651521 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 05:39:28.655219 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 05:39:28.658550 jq[1535]: false Jul 10 05:39:28.669257 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 05:39:28.676234 extend-filesystems[1536]: Found /dev/vda6 Jul 10 05:39:28.671442 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 05:39:28.673584 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 05:39:28.678641 extend-filesystems[1536]: Found /dev/vda9 Jul 10 05:39:28.675683 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 05:39:28.682004 extend-filesystems[1536]: Checking size of /dev/vda9 Jul 10 05:39:28.677984 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 05:39:28.690748 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Refreshing passwd entry cache Jul 10 05:39:28.682240 oslogin_cache_refresh[1537]: Refreshing passwd entry cache Jul 10 05:39:28.694145 extend-filesystems[1536]: Resized partition /dev/vda9 Jul 10 05:39:28.695026 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 05:39:28.697926 extend-filesystems[1559]: resize2fs 1.47.2 (1-Jan-2025) Jul 10 05:39:28.705375 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 05:39:28.705399 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Failure getting users, quitting Jul 10 05:39:28.705399 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jul 10 05:39:28.705399 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Refreshing group entry cache Jul 10 05:39:28.705399 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Failure getting groups, quitting Jul 10 05:39:28.705399 google_oslogin_nss_cache[1537]: oslogin_cache_refresh[1537]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 05:39:28.696852 oslogin_cache_refresh[1537]: Failure getting users, quitting Jul 10 05:39:28.697138 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 05:39:28.696873 oslogin_cache_refresh[1537]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 05:39:28.697384 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 05:39:28.696931 oslogin_cache_refresh[1537]: Refreshing group entry cache Jul 10 05:39:28.698350 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 05:39:28.703031 oslogin_cache_refresh[1537]: Failure getting groups, quitting Jul 10 05:39:28.700642 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 05:39:28.703045 oslogin_cache_refresh[1537]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 05:39:28.704444 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 10 05:39:28.705242 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 10 05:39:28.710896 update_engine[1549]: I20250710 05:39:28.706450 1549 main.cc:92] Flatcar Update Engine starting Jul 10 05:39:28.718640 kernel: kvm_amd: TSC scaling supported Jul 10 05:39:28.718670 kernel: kvm_amd: Nested Virtualization enabled Jul 10 05:39:28.718683 kernel: kvm_amd: Nested Paging enabled Jul 10 05:39:28.720283 kernel: kvm_amd: LBR virtualization supported Jul 10 05:39:28.720311 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 10 05:39:28.720325 jq[1550]: true Jul 10 05:39:28.720976 kernel: kvm_amd: Virtual GIF supported Jul 10 05:39:28.737482 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 05:39:28.737516 jq[1570]: true Jul 10 05:39:28.743442 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 05:39:28.743925 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 05:39:28.761632 extend-filesystems[1559]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 05:39:28.761632 extend-filesystems[1559]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 05:39:28.761632 extend-filesystems[1559]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 05:39:28.771050 tar[1560]: linux-amd64/LICENSE Jul 10 05:39:28.771050 tar[1560]: linux-amd64/helm Jul 10 05:39:28.771376 extend-filesystems[1536]: Resized filesystem in /dev/vda9 Jul 10 05:39:28.768826 dbus-daemon[1533]: [system] SELinux support is enabled Jul 10 05:39:28.768384 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 05:39:28.777166 update_engine[1549]: I20250710 05:39:28.774787 1549 update_check_scheduler.cc:74] Next update check in 8m28s Jul 10 05:39:28.772569 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 05:39:28.773782 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 05:39:28.774607 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
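The extend-filesystems/resize2fs exchange above grows the root filesystem on /dev/vda9 online from 553472 to 1864699 blocks; with the 4 KiB block size the kernel reports, that takes the image from roughly 2.1 GiB to about 7.1 GiB, presumably to fill the available virtual disk. A small sketch of the arithmetic:

    BLOCK_SIZE = 4096  # "(4k) blocks" per the EXT4-fs kernel messages above
    for label, blocks in (("before", 553_472), ("after", 1_864_699)):
        print(f"{label}: {blocks * BLOCK_SIZE / 2**30:.2f} GiB")
    # before: 2.11 GiB
    # after: 7.11 GiB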
Jul 10 05:39:28.792473 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 05:39:28.792800 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 05:39:28.800720 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 05:39:28.801864 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 05:39:28.801886 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 05:39:28.804009 systemd[1]: Started update-engine.service - Update Engine. Jul 10 05:39:28.808173 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 05:39:28.840558 kernel: EDAC MC: Ver: 3.0.0 Jul 10 05:39:28.848417 bash[1601]: Updated "/home/core/.ssh/authorized_keys" Jul 10 05:39:28.849637 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 05:39:28.853917 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 05:39:28.881641 systemd-logind[1547]: Watching system buttons on /dev/input/event2 (Power Button) Jul 10 05:39:28.881878 systemd-logind[1547]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 05:39:28.884191 systemd-logind[1547]: New seat seat0. Jul 10 05:39:28.886837 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 05:39:28.916655 locksmithd[1596]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 05:39:28.934699 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 05:39:28.952146 sshd_keygen[1561]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 05:39:28.975779 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 05:39:28.980120 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 05:39:28.991827 containerd[1580]: time="2025-07-10T05:39:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 05:39:28.993094 containerd[1580]: time="2025-07-10T05:39:28.993051317Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 10 05:39:28.996133 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 05:39:28.996410 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 05:39:29.000067 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
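locksmithd, the Flatcar/CoreOS reboot coordinator, starts idle above with strategy "reboot", meaning the node is allowed to reboot itself once update_engine (which has just scheduled its next check) finishes applying an update. On Flatcar this is normally controlled through the update configuration file; an illustrative example, not read from this image, would be:

    # /etc/flatcar/update.conf (illustrative; older installs use /etc/coreos/update.conf)
    GROUP=stable
    REBOOT_STRATEGY=reboot   # other documented values include etcd-lock, best-effort and off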
Jul 10 05:39:29.004807 containerd[1580]: time="2025-07-10T05:39:29.004735038Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.656µs" Jul 10 05:39:29.004807 containerd[1580]: time="2025-07-10T05:39:29.004777498Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 05:39:29.004807 containerd[1580]: time="2025-07-10T05:39:29.004796734Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 05:39:29.005010 containerd[1580]: time="2025-07-10T05:39:29.004989966Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 05:39:29.005047 containerd[1580]: time="2025-07-10T05:39:29.005010495Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 05:39:29.005047 containerd[1580]: time="2025-07-10T05:39:29.005036503Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 05:39:29.005144 containerd[1580]: time="2025-07-10T05:39:29.005122505Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 05:39:29.005144 containerd[1580]: time="2025-07-10T05:39:29.005139757Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 05:39:29.005489 containerd[1580]: time="2025-07-10T05:39:29.005447234Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 05:39:29.005489 containerd[1580]: time="2025-07-10T05:39:29.005482981Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 05:39:29.005547 containerd[1580]: time="2025-07-10T05:39:29.005494402Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 05:39:29.005547 containerd[1580]: time="2025-07-10T05:39:29.005502748Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 05:39:29.005614 containerd[1580]: time="2025-07-10T05:39:29.005595371Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 10 05:39:29.005849 containerd[1580]: time="2025-07-10T05:39:29.005818981Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 05:39:29.005880 containerd[1580]: time="2025-07-10T05:39:29.005855299Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 05:39:29.005880 containerd[1580]: time="2025-07-10T05:39:29.005866770Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 05:39:29.005943 containerd[1580]: time="2025-07-10T05:39:29.005910072Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 05:39:29.006762 containerd[1580]: 
time="2025-07-10T05:39:29.006715572Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 05:39:29.006847 containerd[1580]: time="2025-07-10T05:39:29.006821721Z" level=info msg="metadata content store policy set" policy=shared Jul 10 05:39:29.013108 containerd[1580]: time="2025-07-10T05:39:29.013039392Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 05:39:29.013197 containerd[1580]: time="2025-07-10T05:39:29.013177290Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 05:39:29.013258 containerd[1580]: time="2025-07-10T05:39:29.013244356Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 05:39:29.013318 containerd[1580]: time="2025-07-10T05:39:29.013301363Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 05:39:29.013400 containerd[1580]: time="2025-07-10T05:39:29.013384880Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 05:39:29.013486 containerd[1580]: time="2025-07-10T05:39:29.013454440Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 05:39:29.013541 containerd[1580]: time="2025-07-10T05:39:29.013528389Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 05:39:29.013593 containerd[1580]: time="2025-07-10T05:39:29.013580657Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 05:39:29.013648 containerd[1580]: time="2025-07-10T05:39:29.013635099Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 05:39:29.013701 containerd[1580]: time="2025-07-10T05:39:29.013687277Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 10 05:39:29.013751 containerd[1580]: time="2025-07-10T05:39:29.013740246Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 05:39:29.013818 containerd[1580]: time="2025-07-10T05:39:29.013804146Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 05:39:29.013991 containerd[1580]: time="2025-07-10T05:39:29.013973694Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 05:39:29.014054 containerd[1580]: time="2025-07-10T05:39:29.014041912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 05:39:29.014108 containerd[1580]: time="2025-07-10T05:39:29.014095542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 10 05:39:29.014167 containerd[1580]: time="2025-07-10T05:39:29.014152279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 10 05:39:29.014217 containerd[1580]: time="2025-07-10T05:39:29.014205509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 05:39:29.014277 containerd[1580]: time="2025-07-10T05:39:29.014264519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 10 05:39:29.014342 containerd[1580]: 
time="2025-07-10T05:39:29.014326025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 05:39:29.014408 containerd[1580]: time="2025-07-10T05:39:29.014394212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 05:39:29.014485 containerd[1580]: time="2025-07-10T05:39:29.014457511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 05:39:29.014551 containerd[1580]: time="2025-07-10T05:39:29.014537431Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 05:39:29.014601 containerd[1580]: time="2025-07-10T05:39:29.014590040Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 05:39:29.014710 containerd[1580]: time="2025-07-10T05:39:29.014695147Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 10 05:39:29.014763 containerd[1580]: time="2025-07-10T05:39:29.014752595Z" level=info msg="Start snapshots syncer" Jul 10 05:39:29.014834 containerd[1580]: time="2025-07-10T05:39:29.014820201Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 05:39:29.015117 containerd[1580]: time="2025-07-10T05:39:29.015085759Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 05:39:29.015279 containerd[1580]: time="2025-07-10T05:39:29.015262240Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 05:39:29.015407 containerd[1580]: time="2025-07-10T05:39:29.015388838Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 
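The long cri runtime configuration dump above comes from containerd v2.0.5 after migrating an older on-disk configuration (see the earlier "Configuration migrated from version 2" warning). The values worth noticing are "defaultRuntimeName":"runc", "runtimeType":"io.containerd.runc.v2" and "SystemdCgroup":true, i.e. runc driven through the systemd cgroup driver. In the version-2 /etc/containerd/config.toml syntax that this was migrated from, those settings are typically written as follows (illustrative, not the actual file shipped in this image):

    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true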
Jul 10 05:39:29.015593 containerd[1580]: time="2025-07-10T05:39:29.015574586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 05:39:29.015666 containerd[1580]: time="2025-07-10T05:39:29.015652562Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 05:39:29.015727 containerd[1580]: time="2025-07-10T05:39:29.015712815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 05:39:29.015777 containerd[1580]: time="2025-07-10T05:39:29.015765404Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 05:39:29.015842 containerd[1580]: time="2025-07-10T05:39:29.015828242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 05:39:29.015892 containerd[1580]: time="2025-07-10T05:39:29.015880460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 05:39:29.015940 containerd[1580]: time="2025-07-10T05:39:29.015929091Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 05:39:29.016010 containerd[1580]: time="2025-07-10T05:39:29.015994724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 05:39:29.016070 containerd[1580]: time="2025-07-10T05:39:29.016056049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 05:39:29.016120 containerd[1580]: time="2025-07-10T05:39:29.016108387Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 05:39:29.016197 containerd[1580]: time="2025-07-10T05:39:29.016184620Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 05:39:29.016266 containerd[1580]: time="2025-07-10T05:39:29.016248019Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 05:39:29.016319 containerd[1580]: time="2025-07-10T05:39:29.016303373Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 05:39:29.016376 containerd[1580]: time="2025-07-10T05:39:29.016361602Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 05:39:29.016434 containerd[1580]: time="2025-07-10T05:39:29.016411466Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 05:39:29.016517 containerd[1580]: time="2025-07-10T05:39:29.016500893Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 05:39:29.016569 containerd[1580]: time="2025-07-10T05:39:29.016557539Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 05:39:29.016639 containerd[1580]: time="2025-07-10T05:39:29.016626719Z" level=info msg="runtime interface created" Jul 10 05:39:29.016682 containerd[1580]: time="2025-07-10T05:39:29.016672044Z" level=info msg="created NRI interface" Jul 10 05:39:29.016729 containerd[1580]: time="2025-07-10T05:39:29.016716718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
Jul 10 05:39:29.016777 containerd[1580]: time="2025-07-10T05:39:29.016766822Z" level=info msg="Connect containerd service" Jul 10 05:39:29.016840 containerd[1580]: time="2025-07-10T05:39:29.016828838Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 05:39:29.018127 containerd[1580]: time="2025-07-10T05:39:29.017644418Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 05:39:29.019139 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 05:39:29.023122 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 05:39:29.025592 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 05:39:29.028735 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 05:39:29.098850 tar[1560]: linux-amd64/README.md Jul 10 05:39:29.110489 containerd[1580]: time="2025-07-10T05:39:29.110411870Z" level=info msg="Start subscribing containerd event" Jul 10 05:39:29.110631 containerd[1580]: time="2025-07-10T05:39:29.110595294Z" level=info msg="Start recovering state" Jul 10 05:39:29.110677 containerd[1580]: time="2025-07-10T05:39:29.110618067Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 05:39:29.110735 containerd[1580]: time="2025-07-10T05:39:29.110719076Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 05:39:29.110765 containerd[1580]: time="2025-07-10T05:39:29.110721290Z" level=info msg="Start event monitor" Jul 10 05:39:29.110787 containerd[1580]: time="2025-07-10T05:39:29.110766425Z" level=info msg="Start cni network conf syncer for default" Jul 10 05:39:29.110787 containerd[1580]: time="2025-07-10T05:39:29.110774109Z" level=info msg="Start streaming server" Jul 10 05:39:29.110787 containerd[1580]: time="2025-07-10T05:39:29.110781804Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 05:39:29.110842 containerd[1580]: time="2025-07-10T05:39:29.110789298Z" level=info msg="runtime interface starting up..." Jul 10 05:39:29.110842 containerd[1580]: time="2025-07-10T05:39:29.110795449Z" level=info msg="starting plugins..." Jul 10 05:39:29.110842 containerd[1580]: time="2025-07-10T05:39:29.110820286Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 05:39:29.111041 containerd[1580]: time="2025-07-10T05:39:29.110983963Z" level=info msg="containerd successfully booted in 0.119754s" Jul 10 05:39:29.111053 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 05:39:29.118723 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 05:39:29.679004 systemd-networkd[1480]: eth0: Gained IPv6LL Jul 10 05:39:29.683612 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 05:39:29.685680 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 05:39:29.689127 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 10 05:39:29.692497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 05:39:29.695739 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 05:39:29.729353 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 05:39:29.731039 systemd[1]: coreos-metadata.service: Deactivated successfully. 
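The containerd error near the top of this block ("no network config found in /etc/cni/net.d ... cni plugin not initialized") is expected at this stage: the kubelet is not configured yet, and a Kubernetes network add-on normally drops its CNI configuration into /etc/cni/net.d only after the node joins a cluster. For illustration only (the file name, network name and subnet below are invented, not taken from this system), a minimal conflist of the shape containerd looks for is:

    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "ranges": [[ { "subnet": "10.88.0.0/16" } ]] }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }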
Jul 10 05:39:29.731320 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 10 05:39:29.734071 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 05:39:30.842290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 05:39:30.844130 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 05:39:30.845564 systemd[1]: Startup finished in 3.589s (kernel) + 6.180s (initrd) + 4.721s (userspace) = 14.492s. Jul 10 05:39:30.846317 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 05:39:31.504892 kubelet[1677]: E0710 05:39:31.504806 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 05:39:31.509399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 05:39:31.509631 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 05:39:31.510068 systemd[1]: kubelet.service: Consumed 1.661s CPU time, 264.9M memory peak. Jul 10 05:39:33.203114 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 05:39:33.204436 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:59420.service - OpenSSH per-connection server daemon (10.0.0.1:59420). Jul 10 05:39:33.275218 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 59420 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:39:33.276899 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:39:33.283702 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 05:39:33.284827 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 05:39:33.291121 systemd-logind[1547]: New session 1 of user core. Jul 10 05:39:33.310623 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 05:39:33.313566 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 05:39:33.332898 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 05:39:33.335540 systemd-logind[1547]: New session c1 of user core. Jul 10 05:39:33.470983 systemd[1695]: Queued start job for default target default.target. Jul 10 05:39:33.482674 systemd[1695]: Created slice app.slice - User Application Slice. Jul 10 05:39:33.482698 systemd[1695]: Reached target paths.target - Paths. Jul 10 05:39:33.482736 systemd[1695]: Reached target timers.target - Timers. Jul 10 05:39:33.484214 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 05:39:33.495592 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 05:39:33.495710 systemd[1695]: Reached target sockets.target - Sockets. Jul 10 05:39:33.495750 systemd[1695]: Reached target basic.target - Basic System. Jul 10 05:39:33.495790 systemd[1695]: Reached target default.target - Main User Target. Jul 10 05:39:33.495820 systemd[1695]: Startup finished in 153ms. Jul 10 05:39:33.496118 systemd[1]: Started user@500.service - User Manager for UID 500. 
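The kubelet start above dies immediately because /var/lib/kubelet/config.yaml does not exist. On a kubeadm-managed node that file is only written by kubeadm init or kubeadm join, so this failure, and the restart loop that follows later in the log, is the normal state of a node that has not been bootstrapped yet. The file it is looking for is a KubeletConfiguration document; the smallest illustrative skeleton (kubeadm generates a much fuller one) is:

    # /var/lib/kubelet/config.yaml (illustrative skeleton only, kubeadm writes the real one)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration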
Jul 10 05:39:33.505600 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 05:39:33.572084 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:59434.service - OpenSSH per-connection server daemon (10.0.0.1:59434). Jul 10 05:39:33.618640 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 59434 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:39:33.620165 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:39:33.625225 systemd-logind[1547]: New session 2 of user core. Jul 10 05:39:33.638603 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 05:39:33.693334 sshd[1709]: Connection closed by 10.0.0.1 port 59434 Jul 10 05:39:33.693721 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Jul 10 05:39:33.704526 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:59434.service: Deactivated successfully. Jul 10 05:39:33.706753 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 05:39:33.707614 systemd-logind[1547]: Session 2 logged out. Waiting for processes to exit. Jul 10 05:39:33.711056 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:59444.service - OpenSSH per-connection server daemon (10.0.0.1:59444). Jul 10 05:39:33.711762 systemd-logind[1547]: Removed session 2. Jul 10 05:39:33.773934 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 59444 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:39:33.775256 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:39:33.780195 systemd-logind[1547]: New session 3 of user core. Jul 10 05:39:33.789738 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 05:39:33.840414 sshd[1718]: Connection closed by 10.0.0.1 port 59444 Jul 10 05:39:33.840807 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Jul 10 05:39:33.858413 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:59444.service: Deactivated successfully. Jul 10 05:39:33.860193 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 05:39:33.861062 systemd-logind[1547]: Session 3 logged out. Waiting for processes to exit. Jul 10 05:39:33.863860 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:59458.service - OpenSSH per-connection server daemon (10.0.0.1:59458). Jul 10 05:39:33.864443 systemd-logind[1547]: Removed session 3. Jul 10 05:39:33.923608 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 59458 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:39:33.924904 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:39:33.929496 systemd-logind[1547]: New session 4 of user core. Jul 10 05:39:33.940589 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 05:39:33.994386 sshd[1727]: Connection closed by 10.0.0.1 port 59458 Jul 10 05:39:33.994738 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Jul 10 05:39:34.008203 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:59458.service: Deactivated successfully. Jul 10 05:39:34.010550 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 05:39:34.011474 systemd-logind[1547]: Session 4 logged out. Waiting for processes to exit. Jul 10 05:39:34.014184 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:59460.service - OpenSSH per-connection server daemon (10.0.0.1:59460). Jul 10 05:39:34.014762 systemd-logind[1547]: Removed session 4. 
Jul 10 05:39:34.078733 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 59460 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:39:34.080175 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:39:34.084955 systemd-logind[1547]: New session 5 of user core. Jul 10 05:39:34.094609 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 05:39:34.154414 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 05:39:34.154775 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 05:39:34.176540 sudo[1737]: pam_unix(sudo:session): session closed for user root Jul 10 05:39:34.178339 sshd[1736]: Connection closed by 10.0.0.1 port 59460 Jul 10 05:39:34.178756 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Jul 10 05:39:34.203229 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:59460.service: Deactivated successfully. Jul 10 05:39:34.205621 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 05:39:34.206483 systemd-logind[1547]: Session 5 logged out. Waiting for processes to exit. Jul 10 05:39:34.210219 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:59468.service - OpenSSH per-connection server daemon (10.0.0.1:59468). Jul 10 05:39:34.210932 systemd-logind[1547]: Removed session 5. Jul 10 05:39:34.270804 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 59468 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:39:34.272093 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:39:34.276690 systemd-logind[1547]: New session 6 of user core. Jul 10 05:39:34.290589 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 05:39:34.344307 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 05:39:34.344631 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 05:39:34.473746 sudo[1748]: pam_unix(sudo:session): session closed for user root Jul 10 05:39:34.480166 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 05:39:34.480513 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 05:39:34.490842 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 05:39:34.541033 augenrules[1770]: No rules Jul 10 05:39:34.542811 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 05:39:34.543111 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 05:39:34.544199 sudo[1747]: pam_unix(sudo:session): session closed for user root Jul 10 05:39:34.545849 sshd[1746]: Connection closed by 10.0.0.1 port 59468 Jul 10 05:39:34.546194 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Jul 10 05:39:34.557225 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:59468.service: Deactivated successfully. Jul 10 05:39:34.559145 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 05:39:34.559888 systemd-logind[1547]: Session 6 logged out. Waiting for processes to exit. Jul 10 05:39:34.562816 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:59474.service - OpenSSH per-connection server daemon (10.0.0.1:59474). Jul 10 05:39:34.563355 systemd-logind[1547]: Removed session 6. 
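Session 6 above is the provisioning step that removes the stock audit rule files (80-selinux.rules, 99-default.rules) and restarts audit-rules, after which augenrules reports "No rules": augenrules simply concatenates every *.rules file under /etc/audit/rules.d/ and loads the result into the kernel. Dropping a file back into that directory re-enables auditing; an illustrative example, with a file name and keys invented for this sketch, would be:

    # /etc/audit/rules.d/10-kube.rules (illustrative)
    # Watch Kubernetes configuration for writes and attribute changes
    -w /etc/kubernetes/ -p wa -k kube-config
    -w /var/lib/kubelet/config.yaml -p wa -k kubelet-config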
Jul 10 05:39:34.627955 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 59474 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:39:34.629138 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:39:34.634314 systemd-logind[1547]: New session 7 of user core. Jul 10 05:39:34.649644 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 05:39:34.703230 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 05:39:34.703601 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 05:39:35.177357 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 05:39:35.194841 (dockerd)[1803]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 05:39:35.549841 dockerd[1803]: time="2025-07-10T05:39:35.549684975Z" level=info msg="Starting up" Jul 10 05:39:35.550676 dockerd[1803]: time="2025-07-10T05:39:35.550621261Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 05:39:35.580646 dockerd[1803]: time="2025-07-10T05:39:35.580587529Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 10 05:39:35.841166 dockerd[1803]: time="2025-07-10T05:39:35.841040451Z" level=info msg="Loading containers: start." Jul 10 05:39:35.852510 kernel: Initializing XFRM netlink socket Jul 10 05:39:36.114596 systemd-networkd[1480]: docker0: Link UP Jul 10 05:39:36.119608 dockerd[1803]: time="2025-07-10T05:39:36.119568911Z" level=info msg="Loading containers: done." Jul 10 05:39:36.137451 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck665920399-merged.mount: Deactivated successfully. Jul 10 05:39:36.138306 dockerd[1803]: time="2025-07-10T05:39:36.138248061Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 05:39:36.138362 dockerd[1803]: time="2025-07-10T05:39:36.138344883Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 10 05:39:36.138491 dockerd[1803]: time="2025-07-10T05:39:36.138448928Z" level=info msg="Initializing buildkit" Jul 10 05:39:36.167885 dockerd[1803]: time="2025-07-10T05:39:36.167850817Z" level=info msg="Completed buildkit initialization" Jul 10 05:39:36.174016 dockerd[1803]: time="2025-07-10T05:39:36.173958010Z" level=info msg="Daemon has completed initialization" Jul 10 05:39:36.174141 dockerd[1803]: time="2025-07-10T05:39:36.174037018Z" level=info msg="API listen on /run/docker.sock" Jul 10 05:39:36.174242 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 05:39:37.009742 containerd[1580]: time="2025-07-10T05:39:37.009665109Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 10 05:39:37.643188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount152628881.mount: Deactivated successfully. 
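dockerd comes up above, settles on the overlay2 storage driver, and warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, a performance note for image builds rather than an error. When the storage driver or cgroup driver needs to be pinned explicitly, that is usually done in /etc/docker/daemon.json; an illustrative fragment, not the configuration actually present on this host, is:

    {
      "storage-driver": "overlay2",
      "exec-opts": ["native.cgroupdriver=systemd"]
    }

The exec-opts line would keep Docker-launched containers on the same systemd cgroup driver that containerd uses elsewhere in this log.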
Jul 10 05:39:38.696661 containerd[1580]: time="2025-07-10T05:39:38.696579666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:38.697278 containerd[1580]: time="2025-07-10T05:39:38.697206582Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 10 05:39:38.698382 containerd[1580]: time="2025-07-10T05:39:38.698349335Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:38.701120 containerd[1580]: time="2025-07-10T05:39:38.701081920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:38.702735 containerd[1580]: time="2025-07-10T05:39:38.702682050Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.692952751s" Jul 10 05:39:38.702808 containerd[1580]: time="2025-07-10T05:39:38.702735761Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 10 05:39:38.703570 containerd[1580]: time="2025-07-10T05:39:38.703540530Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 10 05:39:39.930631 containerd[1580]: time="2025-07-10T05:39:39.930555461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:39.931265 containerd[1580]: time="2025-07-10T05:39:39.931241298Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 10 05:39:39.932452 containerd[1580]: time="2025-07-10T05:39:39.932396714Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:39.934833 containerd[1580]: time="2025-07-10T05:39:39.934794701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:39.935872 containerd[1580]: time="2025-07-10T05:39:39.935837687Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.232267241s" Jul 10 05:39:39.935872 containerd[1580]: time="2025-07-10T05:39:39.935870629Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 10 05:39:39.936406 
containerd[1580]: time="2025-07-10T05:39:39.936364826Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 10 05:39:41.540799 containerd[1580]: time="2025-07-10T05:39:41.540723792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:41.541687 containerd[1580]: time="2025-07-10T05:39:41.541661881Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 10 05:39:41.542773 containerd[1580]: time="2025-07-10T05:39:41.542721408Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:41.545479 containerd[1580]: time="2025-07-10T05:39:41.545412054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:41.546487 containerd[1580]: time="2025-07-10T05:39:41.546411158Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.610014602s" Jul 10 05:39:41.546487 containerd[1580]: time="2025-07-10T05:39:41.546454489Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 10 05:39:41.547057 containerd[1580]: time="2025-07-10T05:39:41.547018957Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 10 05:39:41.760057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 05:39:41.762062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 05:39:42.018484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 05:39:42.035742 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 05:39:42.115277 kubelet[2091]: E0710 05:39:42.115165 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 05:39:42.121528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 05:39:42.121726 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 05:39:42.122129 systemd[1]: kubelet.service: Consumed 302ms CPU time, 111.1M memory peak. Jul 10 05:39:42.831988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4080216685.mount: Deactivated successfully. 
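Each kubelet start also logs that KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS are referenced but unset. In a typical kubeadm layout those variables are supplied through EnvironmentFile= lines in a systemd drop-in, roughly like the sketch below (paths and contents are illustrative, not read from this image); the leading "-" tells systemd to ignore a missing file, which is why the unset variables are only a notice here and not a failure:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (illustrative)
    [Service]
    # Written by kubeadm at join time; defines KUBELET_KUBEADM_ARGS
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # Optional admin overrides; defines KUBELET_EXTRA_ARGS
    EnvironmentFile=-/etc/sysconfig/kubelet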
Jul 10 05:39:43.496449 containerd[1580]: time="2025-07-10T05:39:43.496348185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:43.497144 containerd[1580]: time="2025-07-10T05:39:43.497091920Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 10 05:39:43.498412 containerd[1580]: time="2025-07-10T05:39:43.498338177Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:43.500308 containerd[1580]: time="2025-07-10T05:39:43.500261585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:43.500855 containerd[1580]: time="2025-07-10T05:39:43.500819170Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.953771429s" Jul 10 05:39:43.500855 containerd[1580]: time="2025-07-10T05:39:43.500847624Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 10 05:39:43.501501 containerd[1580]: time="2025-07-10T05:39:43.501447579Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 05:39:45.629003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount926150264.mount: Deactivated successfully. 
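The image pulls in this stretch all end with the same "Pulled image ... size ... in ..." message, which makes it easy to lift throughput figures straight out of the journal. A small sketch in Python, run against a shortened copy of the kube-proxy completion line above (the doubled backslashes in the pattern account for the escaped quotes inside the journal's msg="..." field):

    import re

    # Shortened copy of the kube-proxy completion entry from the log above
    line = r'msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" ... size \"30894382\" in 1.953771429s"'
    m = re.search(r'size \\"(\d+)\\" in ([0-9.]+)s', line)
    size_bytes, seconds = int(m.group(1)), float(m.group(2))
    print(f"{size_bytes / seconds / 2**20:.1f} MiB/s")  # about 15.1 MiB/s for this pull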
Jul 10 05:39:46.744002 containerd[1580]: time="2025-07-10T05:39:46.743915944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:46.773698 containerd[1580]: time="2025-07-10T05:39:46.773619810Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 10 05:39:46.787955 containerd[1580]: time="2025-07-10T05:39:46.787904879Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:46.804688 containerd[1580]: time="2025-07-10T05:39:46.804646155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:46.807540 containerd[1580]: time="2025-07-10T05:39:46.806039859Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.304525946s" Jul 10 05:39:46.807540 containerd[1580]: time="2025-07-10T05:39:46.806099390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 10 05:39:46.807858 containerd[1580]: time="2025-07-10T05:39:46.807826970Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 05:39:47.292917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1984104139.mount: Deactivated successfully. 
Jul 10 05:39:47.299824 containerd[1580]: time="2025-07-10T05:39:47.299783424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 05:39:47.300555 containerd[1580]: time="2025-07-10T05:39:47.300536136Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 10 05:39:47.301813 containerd[1580]: time="2025-07-10T05:39:47.301789326Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 05:39:47.304290 containerd[1580]: time="2025-07-10T05:39:47.304235143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 05:39:47.305188 containerd[1580]: time="2025-07-10T05:39:47.305139198Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 497.276852ms" Jul 10 05:39:47.305188 containerd[1580]: time="2025-07-10T05:39:47.305184653Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 10 05:39:47.305761 containerd[1580]: time="2025-07-10T05:39:47.305701743Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 10 05:39:47.848019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2242045218.mount: Deactivated successfully. 
Jul 10 05:39:49.699996 containerd[1580]: time="2025-07-10T05:39:49.699862589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:49.700907 containerd[1580]: time="2025-07-10T05:39:49.700781132Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 10 05:39:49.702187 containerd[1580]: time="2025-07-10T05:39:49.702131795Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:49.706805 containerd[1580]: time="2025-07-10T05:39:49.706671619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:39:49.708203 containerd[1580]: time="2025-07-10T05:39:49.708105668Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.402366465s" Jul 10 05:39:49.708203 containerd[1580]: time="2025-07-10T05:39:49.708197570Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 10 05:39:51.882744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 05:39:51.882923 systemd[1]: kubelet.service: Consumed 302ms CPU time, 111.1M memory peak. Jul 10 05:39:51.885069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 05:39:51.908507 systemd[1]: Reload requested from client PID 2248 ('systemctl') (unit session-7.scope)... Jul 10 05:39:51.908526 systemd[1]: Reloading... Jul 10 05:39:51.996617 zram_generator::config[2293]: No configuration found. Jul 10 05:39:52.169931 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 05:39:52.289406 systemd[1]: Reloading finished in 380 ms. Jul 10 05:39:52.360196 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 05:39:52.360303 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 05:39:52.360623 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 05:39:52.360664 systemd[1]: kubelet.service: Consumed 168ms CPU time, 98.3M memory peak. Jul 10 05:39:52.362195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 05:39:52.534068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 05:39:52.538053 (kubelet)[2338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 05:39:52.581063 kubelet[2338]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 05:39:52.581063 kubelet[2338]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 10 05:39:52.581063 kubelet[2338]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 05:39:52.581434 kubelet[2338]: I0710 05:39:52.581180 2338 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 05:39:52.858279 kubelet[2338]: I0710 05:39:52.858153 2338 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 05:39:52.858279 kubelet[2338]: I0710 05:39:52.858196 2338 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 05:39:52.858506 kubelet[2338]: I0710 05:39:52.858490 2338 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 05:39:52.886944 kubelet[2338]: E0710 05:39:52.886892 2338 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:39:52.887078 kubelet[2338]: I0710 05:39:52.887047 2338 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 05:39:52.898186 kubelet[2338]: I0710 05:39:52.898151 2338 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 05:39:52.904103 kubelet[2338]: I0710 05:39:52.904066 2338 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 05:39:52.906731 kubelet[2338]: I0710 05:39:52.906681 2338 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 05:39:52.906973 kubelet[2338]: I0710 05:39:52.906720 2338 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 05:39:52.907235 kubelet[2338]: I0710 05:39:52.906988 2338 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 05:39:52.907235 kubelet[2338]: I0710 05:39:52.907003 2338 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 05:39:52.907235 kubelet[2338]: I0710 05:39:52.907210 2338 state_mem.go:36] "Initialized new in-memory state store" Jul 10 05:39:52.910841 kubelet[2338]: I0710 05:39:52.910806 2338 kubelet.go:446] "Attempting to sync node with API server" Jul 10 05:39:52.910900 kubelet[2338]: I0710 05:39:52.910845 2338 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 05:39:52.910900 kubelet[2338]: I0710 05:39:52.910895 2338 kubelet.go:352] "Adding apiserver pod source" Jul 10 05:39:52.910972 kubelet[2338]: I0710 05:39:52.910922 2338 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 05:39:52.915291 kubelet[2338]: I0710 05:39:52.915195 2338 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 10 05:39:52.915697 kubelet[2338]: I0710 05:39:52.915660 2338 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 05:39:52.916918 kubelet[2338]: W0710 05:39:52.916809 2338 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 10 05:39:52.916918 kubelet[2338]: W0710 05:39:52.916813 2338 reflector.go:569] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 10 05:39:52.916918 kubelet[2338]: E0710 05:39:52.916890 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:39:52.916918 kubelet[2338]: E0710 05:39:52.916915 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:39:52.917144 kubelet[2338]: W0710 05:39:52.917103 2338 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 05:39:52.924365 kubelet[2338]: I0710 05:39:52.922139 2338 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 05:39:52.924365 kubelet[2338]: I0710 05:39:52.922199 2338 server.go:1287] "Started kubelet" Jul 10 05:39:52.924365 kubelet[2338]: I0710 05:39:52.922685 2338 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 05:39:52.924768 kubelet[2338]: I0710 05:39:52.924734 2338 server.go:479] "Adding debug handlers to kubelet server" Jul 10 05:39:52.927368 kubelet[2338]: I0710 05:39:52.927167 2338 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 05:39:52.927631 kubelet[2338]: I0710 05:39:52.927568 2338 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 05:39:52.927953 kubelet[2338]: I0710 05:39:52.927897 2338 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 05:39:52.928346 kubelet[2338]: I0710 05:39:52.928324 2338 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 05:39:52.928501 kubelet[2338]: E0710 05:39:52.928440 2338 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 05:39:52.928729 kubelet[2338]: I0710 05:39:52.928711 2338 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 05:39:52.928799 kubelet[2338]: I0710 05:39:52.928785 2338 reconciler.go:26] "Reconciler: start to sync state" Jul 10 05:39:52.929396 kubelet[2338]: I0710 05:39:52.928979 2338 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 05:39:52.930107 kubelet[2338]: W0710 05:39:52.930066 2338 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 10 05:39:52.930220 kubelet[2338]: E0710 05:39:52.930115 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" 
logger="UnhandledError" Jul 10 05:39:52.930374 kubelet[2338]: E0710 05:39:52.930330 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="200ms" Jul 10 05:39:52.931806 kubelet[2338]: E0710 05:39:52.931779 2338 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 05:39:52.931927 kubelet[2338]: I0710 05:39:52.931885 2338 factory.go:221] Registration of the containerd container factory successfully Jul 10 05:39:52.931927 kubelet[2338]: I0710 05:39:52.931894 2338 factory.go:221] Registration of the systemd container factory successfully Jul 10 05:39:52.931974 kubelet[2338]: I0710 05:39:52.931965 2338 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 05:39:52.932098 kubelet[2338]: E0710 05:39:52.929514 2338 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850cd4eb1dac9e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 05:39:52.922171879 +0000 UTC m=+0.375966112,LastTimestamp:2025-07-10 05:39:52.922171879 +0000 UTC m=+0.375966112,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 05:39:52.944607 kubelet[2338]: I0710 05:39:52.944583 2338 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 05:39:52.944607 kubelet[2338]: I0710 05:39:52.944597 2338 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 05:39:52.944806 kubelet[2338]: I0710 05:39:52.944788 2338 state_mem.go:36] "Initialized new in-memory state store" Jul 10 05:39:52.948608 kubelet[2338]: I0710 05:39:52.948531 2338 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 05:39:52.950057 kubelet[2338]: I0710 05:39:52.950030 2338 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 05:39:52.950158 kubelet[2338]: I0710 05:39:52.950147 2338 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 05:39:52.950391 kubelet[2338]: I0710 05:39:52.950377 2338 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 10 05:39:52.950850 kubelet[2338]: I0710 05:39:52.950816 2338 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 05:39:52.951053 kubelet[2338]: E0710 05:39:52.951018 2338 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 05:39:52.951407 kubelet[2338]: W0710 05:39:52.951346 2338 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 10 05:39:52.951453 kubelet[2338]: E0710 05:39:52.951431 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:39:53.028944 kubelet[2338]: E0710 05:39:53.028905 2338 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 05:39:53.052141 kubelet[2338]: E0710 05:39:53.052100 2338 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 05:39:53.129065 kubelet[2338]: E0710 05:39:53.128985 2338 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 05:39:53.131610 kubelet[2338]: E0710 05:39:53.131573 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="400ms" Jul 10 05:39:53.229820 kubelet[2338]: E0710 05:39:53.229757 2338 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 05:39:53.252549 kubelet[2338]: I0710 05:39:53.252499 2338 policy_none.go:49] "None policy: Start" Jul 10 05:39:53.252614 kubelet[2338]: I0710 05:39:53.252560 2338 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 05:39:53.252614 kubelet[2338]: I0710 05:39:53.252594 2338 state_mem.go:35] "Initializing new in-memory state store" Jul 10 05:39:53.252682 kubelet[2338]: E0710 05:39:53.252506 2338 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 05:39:53.260373 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 05:39:53.277582 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 05:39:53.280640 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
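With CgroupDriver "systemd" and CgroupVersion 2 (from the Node Config dump above), the kubepods.slice, kubepods-burstable.slice, and kubepods-besteffort.slice units systemd creates just below appear as directories under the cgroup2 mount. A minimal sketch that lists them, assuming the usual /sys/fs/cgroup mount point (the mount path itself is not stated in the log).

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// With the systemd cgroup driver, the per-QoS slices live under kubepods.slice.
	root := "/sys/fs/cgroup/kubepods.slice"
	entries, err := os.ReadDir(root)
	if err != nil {
		fmt.Println("cannot read", root+":", err)
		return
	}
	for _, e := range entries {
		if e.IsDir() && strings.HasSuffix(e.Name(), ".slice") {
			fmt.Println(filepath.Join(root, e.Name()))
		}
	}
}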
Jul 10 05:39:53.291400 kubelet[2338]: I0710 05:39:53.291357 2338 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 05:39:53.291635 kubelet[2338]: I0710 05:39:53.291616 2338 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 05:39:53.291717 kubelet[2338]: I0710 05:39:53.291635 2338 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 05:39:53.291905 kubelet[2338]: I0710 05:39:53.291878 2338 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 05:39:53.293125 kubelet[2338]: E0710 05:39:53.293095 2338 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 05:39:53.293200 kubelet[2338]: E0710 05:39:53.293173 2338 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 05:39:53.393932 kubelet[2338]: I0710 05:39:53.393831 2338 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 05:39:53.394360 kubelet[2338]: E0710 05:39:53.394325 2338 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Jul 10 05:39:53.532927 kubelet[2338]: E0710 05:39:53.532849 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="800ms" Jul 10 05:39:53.595999 kubelet[2338]: I0710 05:39:53.595973 2338 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 05:39:53.596391 kubelet[2338]: E0710 05:39:53.596327 2338 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Jul 10 05:39:53.662499 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 10 05:39:53.685009 kubelet[2338]: E0710 05:39:53.684974 2338 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:39:53.688022 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 10 05:39:53.703695 kubelet[2338]: E0710 05:39:53.703661 2338 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:39:53.706265 systemd[1]: Created slice kubepods-burstable-podbccbde3e573d232642a7b8a29dcb372e.slice - libcontainer container kubepods-burstable-podbccbde3e573d232642a7b8a29dcb372e.slice. 
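The lease controller's "will retry" interval grows from 200ms to 400ms here, then to 800ms and 1.6s further down, i.e. it roughly doubles per failed attempt. Below is a minimal sketch of that kind of capped doubling backoff; the 7s cap is an assumption for illustration, not a value taken from the log, and this is not the kubelet's actual retry code.

package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second // assumed cap, for illustration only
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}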
Jul 10 05:39:53.708094 kubelet[2338]: E0710 05:39:53.708070 2338 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:39:53.733480 kubelet[2338]: I0710 05:39:53.733435 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:39:53.733543 kubelet[2338]: I0710 05:39:53.733486 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:39:53.733543 kubelet[2338]: I0710 05:39:53.733519 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bccbde3e573d232642a7b8a29dcb372e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bccbde3e573d232642a7b8a29dcb372e\") " pod="kube-system/kube-apiserver-localhost" Jul 10 05:39:53.733543 kubelet[2338]: I0710 05:39:53.733535 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bccbde3e573d232642a7b8a29dcb372e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bccbde3e573d232642a7b8a29dcb372e\") " pod="kube-system/kube-apiserver-localhost" Jul 10 05:39:53.733614 kubelet[2338]: I0710 05:39:53.733549 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bccbde3e573d232642a7b8a29dcb372e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bccbde3e573d232642a7b8a29dcb372e\") " pod="kube-system/kube-apiserver-localhost" Jul 10 05:39:53.733614 kubelet[2338]: I0710 05:39:53.733564 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:39:53.733614 kubelet[2338]: I0710 05:39:53.733582 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:39:53.733614 kubelet[2338]: I0710 05:39:53.733598 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:39:53.733614 kubelet[2338]: I0710 05:39:53.733613 2338 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 10 05:39:53.882309 kubelet[2338]: W0710 05:39:53.882243 2338 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 10 05:39:53.882309 kubelet[2338]: E0710 05:39:53.882303 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:39:53.906957 kubelet[2338]: W0710 05:39:53.906921 2338 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 10 05:39:53.906957 kubelet[2338]: E0710 05:39:53.906946 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:39:53.986958 containerd[1580]: time="2025-07-10T05:39:53.986842317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 10 05:39:53.997943 kubelet[2338]: I0710 05:39:53.997917 2338 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 05:39:53.998312 kubelet[2338]: E0710 05:39:53.998257 2338 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Jul 10 05:39:54.004897 containerd[1580]: time="2025-07-10T05:39:54.004821895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 10 05:39:54.009370 containerd[1580]: time="2025-07-10T05:39:54.009315011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bccbde3e573d232642a7b8a29dcb372e,Namespace:kube-system,Attempt:0,}" Jul 10 05:39:54.015965 containerd[1580]: time="2025-07-10T05:39:54.015935306Z" level=info msg="connecting to shim 8cd36ec377bc235783dd9da8fcd969bd1bfe0d95f218330c409274acf2513c82" address="unix:///run/containerd/s/9c761cff9c9ccbd4a9cabde773fb578df0a0dcff4cb81765919dd7dc4207c535" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:39:54.046551 containerd[1580]: time="2025-07-10T05:39:54.046498874Z" level=info msg="connecting to shim b3aa4527eb5462b8ac4ee032167b7a4f97eb3d85ba2965af5e3e62db3f3cb73e" address="unix:///run/containerd/s/0fc9dbb4ab91a2b6493256811c553b94f71f64888e4094301bde4bbe696ad71f" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:39:54.052389 containerd[1580]: time="2025-07-10T05:39:54.051700258Z" level=info msg="connecting 
to shim 1369a78bb41fe92e642a586e4e5513279562c888da48333fda279fe1fddea379" address="unix:///run/containerd/s/3bc9b22a4b62e8e390a45a4526bba8cf718a2b9ad320e8722407930a552eafe6" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:39:54.059619 systemd[1]: Started cri-containerd-8cd36ec377bc235783dd9da8fcd969bd1bfe0d95f218330c409274acf2513c82.scope - libcontainer container 8cd36ec377bc235783dd9da8fcd969bd1bfe0d95f218330c409274acf2513c82. Jul 10 05:39:54.088597 systemd[1]: Started cri-containerd-b3aa4527eb5462b8ac4ee032167b7a4f97eb3d85ba2965af5e3e62db3f3cb73e.scope - libcontainer container b3aa4527eb5462b8ac4ee032167b7a4f97eb3d85ba2965af5e3e62db3f3cb73e. Jul 10 05:39:54.093172 systemd[1]: Started cri-containerd-1369a78bb41fe92e642a586e4e5513279562c888da48333fda279fe1fddea379.scope - libcontainer container 1369a78bb41fe92e642a586e4e5513279562c888da48333fda279fe1fddea379. Jul 10 05:39:54.116128 containerd[1580]: time="2025-07-10T05:39:54.116089261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cd36ec377bc235783dd9da8fcd969bd1bfe0d95f218330c409274acf2513c82\"" Jul 10 05:39:54.120189 containerd[1580]: time="2025-07-10T05:39:54.119869199Z" level=info msg="CreateContainer within sandbox \"8cd36ec377bc235783dd9da8fcd969bd1bfe0d95f218330c409274acf2513c82\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 05:39:54.129316 containerd[1580]: time="2025-07-10T05:39:54.129272754Z" level=info msg="Container 721e5773af644e7c59b20bc12a749564a1eb6f7e2f93c90199f3023f54d5a461: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:39:54.136165 containerd[1580]: time="2025-07-10T05:39:54.136105098Z" level=info msg="CreateContainer within sandbox \"8cd36ec377bc235783dd9da8fcd969bd1bfe0d95f218330c409274acf2513c82\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"721e5773af644e7c59b20bc12a749564a1eb6f7e2f93c90199f3023f54d5a461\"" Jul 10 05:39:54.138261 containerd[1580]: time="2025-07-10T05:39:54.138240572Z" level=info msg="StartContainer for \"721e5773af644e7c59b20bc12a749564a1eb6f7e2f93c90199f3023f54d5a461\"" Jul 10 05:39:54.139446 containerd[1580]: time="2025-07-10T05:39:54.139424663Z" level=info msg="connecting to shim 721e5773af644e7c59b20bc12a749564a1eb6f7e2f93c90199f3023f54d5a461" address="unix:///run/containerd/s/9c761cff9c9ccbd4a9cabde773fb578df0a0dcff4cb81765919dd7dc4207c535" protocol=ttrpc version=3 Jul 10 05:39:54.146925 containerd[1580]: time="2025-07-10T05:39:54.146834900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3aa4527eb5462b8ac4ee032167b7a4f97eb3d85ba2965af5e3e62db3f3cb73e\"" Jul 10 05:39:54.150497 containerd[1580]: time="2025-07-10T05:39:54.149924524Z" level=info msg="CreateContainer within sandbox \"b3aa4527eb5462b8ac4ee032167b7a4f97eb3d85ba2965af5e3e62db3f3cb73e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 05:39:54.155811 containerd[1580]: time="2025-07-10T05:39:54.155771639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bccbde3e573d232642a7b8a29dcb372e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1369a78bb41fe92e642a586e4e5513279562c888da48333fda279fe1fddea379\"" Jul 10 05:39:54.158888 containerd[1580]: time="2025-07-10T05:39:54.158865151Z" level=info msg="CreateContainer 
within sandbox \"1369a78bb41fe92e642a586e4e5513279562c888da48333fda279fe1fddea379\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 05:39:54.164739 systemd[1]: Started cri-containerd-721e5773af644e7c59b20bc12a749564a1eb6f7e2f93c90199f3023f54d5a461.scope - libcontainer container 721e5773af644e7c59b20bc12a749564a1eb6f7e2f93c90199f3023f54d5a461. Jul 10 05:39:54.167894 containerd[1580]: time="2025-07-10T05:39:54.167221672Z" level=info msg="Container 003a9b55dd66d39e3a91256d9d21ba8441e6decd39a3bc4a63a0c970ced67f58: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:39:54.170000 containerd[1580]: time="2025-07-10T05:39:54.169961410Z" level=info msg="Container b04389dffb8d56e7b4f17c77837dc560ced9c07379d059386409ff0088f5969e: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:39:54.174604 containerd[1580]: time="2025-07-10T05:39:54.174570594Z" level=info msg="CreateContainer within sandbox \"b3aa4527eb5462b8ac4ee032167b7a4f97eb3d85ba2965af5e3e62db3f3cb73e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"003a9b55dd66d39e3a91256d9d21ba8441e6decd39a3bc4a63a0c970ced67f58\"" Jul 10 05:39:54.175108 containerd[1580]: time="2025-07-10T05:39:54.175016601Z" level=info msg="StartContainer for \"003a9b55dd66d39e3a91256d9d21ba8441e6decd39a3bc4a63a0c970ced67f58\"" Jul 10 05:39:54.176326 containerd[1580]: time="2025-07-10T05:39:54.176294026Z" level=info msg="connecting to shim 003a9b55dd66d39e3a91256d9d21ba8441e6decd39a3bc4a63a0c970ced67f58" address="unix:///run/containerd/s/0fc9dbb4ab91a2b6493256811c553b94f71f64888e4094301bde4bbe696ad71f" protocol=ttrpc version=3 Jul 10 05:39:54.180182 containerd[1580]: time="2025-07-10T05:39:54.180106215Z" level=info msg="CreateContainer within sandbox \"1369a78bb41fe92e642a586e4e5513279562c888da48333fda279fe1fddea379\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b04389dffb8d56e7b4f17c77837dc560ced9c07379d059386409ff0088f5969e\"" Jul 10 05:39:54.180545 containerd[1580]: time="2025-07-10T05:39:54.180525682Z" level=info msg="StartContainer for \"b04389dffb8d56e7b4f17c77837dc560ced9c07379d059386409ff0088f5969e\"" Jul 10 05:39:54.181579 containerd[1580]: time="2025-07-10T05:39:54.181556876Z" level=info msg="connecting to shim b04389dffb8d56e7b4f17c77837dc560ced9c07379d059386409ff0088f5969e" address="unix:///run/containerd/s/3bc9b22a4b62e8e390a45a4526bba8cf718a2b9ad320e8722407930a552eafe6" protocol=ttrpc version=3 Jul 10 05:39:54.264882 kubelet[2338]: W0710 05:39:54.263921 2338 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 10 05:39:54.264882 kubelet[2338]: E0710 05:39:54.264009 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:39:54.269598 systemd[1]: Started cri-containerd-b04389dffb8d56e7b4f17c77837dc560ced9c07379d059386409ff0088f5969e.scope - libcontainer container b04389dffb8d56e7b4f17c77837dc560ced9c07379d059386409ff0088f5969e. 
Jul 10 05:39:54.272953 systemd[1]: Started cri-containerd-003a9b55dd66d39e3a91256d9d21ba8441e6decd39a3bc4a63a0c970ced67f58.scope - libcontainer container 003a9b55dd66d39e3a91256d9d21ba8441e6decd39a3bc4a63a0c970ced67f58. Jul 10 05:39:54.304317 containerd[1580]: time="2025-07-10T05:39:54.304277681Z" level=info msg="StartContainer for \"721e5773af644e7c59b20bc12a749564a1eb6f7e2f93c90199f3023f54d5a461\" returns successfully" Jul 10 05:39:54.334400 kubelet[2338]: E0710 05:39:54.334355 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="1.6s" Jul 10 05:39:54.347144 containerd[1580]: time="2025-07-10T05:39:54.347105970Z" level=info msg="StartContainer for \"b04389dffb8d56e7b4f17c77837dc560ced9c07379d059386409ff0088f5969e\" returns successfully" Jul 10 05:39:54.348865 containerd[1580]: time="2025-07-10T05:39:54.348828320Z" level=info msg="StartContainer for \"003a9b55dd66d39e3a91256d9d21ba8441e6decd39a3bc4a63a0c970ced67f58\" returns successfully" Jul 10 05:39:54.359926 kubelet[2338]: W0710 05:39:54.359814 2338 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jul 10 05:39:54.360027 kubelet[2338]: E0710 05:39:54.360002 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:39:54.800274 kubelet[2338]: I0710 05:39:54.800234 2338 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 05:39:54.959368 kubelet[2338]: E0710 05:39:54.959315 2338 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:39:54.963269 kubelet[2338]: E0710 05:39:54.963236 2338 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:39:54.964281 kubelet[2338]: E0710 05:39:54.964252 2338 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:39:55.710410 kubelet[2338]: I0710 05:39:55.710353 2338 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 05:39:55.710410 kubelet[2338]: E0710 05:39:55.710408 2338 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 10 05:39:55.745314 kubelet[2338]: E0710 05:39:55.745278 2338 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 05:39:55.845880 kubelet[2338]: E0710 05:39:55.845786 2338 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 05:39:55.945978 kubelet[2338]: E0710 05:39:55.945931 2338 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 05:39:55.966784 kubelet[2338]: E0710 
05:39:55.966667 2338 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:39:55.966784 kubelet[2338]: E0710 05:39:55.966756 2338 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:39:56.046665 kubelet[2338]: E0710 05:39:56.046624 2338 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 05:39:56.147349 kubelet[2338]: E0710 05:39:56.147280 2338 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 05:39:56.248571 kubelet[2338]: E0710 05:39:56.248406 2338 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 05:39:56.429243 kubelet[2338]: I0710 05:39:56.429200 2338 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 05:39:56.434169 kubelet[2338]: E0710 05:39:56.434106 2338 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 10 05:39:56.434169 kubelet[2338]: I0710 05:39:56.434144 2338 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 05:39:56.435868 kubelet[2338]: E0710 05:39:56.435828 2338 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 10 05:39:56.435868 kubelet[2338]: I0710 05:39:56.435846 2338 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 05:39:56.437075 kubelet[2338]: E0710 05:39:56.437045 2338 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 10 05:39:56.929577 kubelet[2338]: I0710 05:39:56.929516 2338 apiserver.go:52] "Watching apiserver" Jul 10 05:39:56.966523 kubelet[2338]: I0710 05:39:56.966487 2338 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 05:39:57.028986 kubelet[2338]: I0710 05:39:57.028916 2338 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 05:39:57.098887 kubelet[2338]: I0710 05:39:57.098820 2338 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 05:39:57.484628 systemd[1]: Reload requested from client PID 2616 ('systemctl') (unit session-7.scope)... Jul 10 05:39:57.484646 systemd[1]: Reloading... Jul 10 05:39:57.566504 zram_generator::config[2659]: No configuration found. Jul 10 05:39:57.717668 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 05:39:57.850321 systemd[1]: Reloading finished in 365 ms. Jul 10 05:39:57.881962 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 05:39:57.903908 systemd[1]: kubelet.service: Deactivated successfully. 
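The mirror-pod failures just above ("no PriorityClass with name system-node-critical was found") clear up on their own once the API server has created its built-in priority classes; by 05:39:56-57 the kubelet is already retrying the mirror pods. A minimal check for those classes follows; kubectl and a working kubeconfig are assumptions, not something shown in the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// system-node-critical and system-cluster-critical are the built-in
	// PriorityClasses the API server creates on startup.
	for _, pc := range []string{"system-node-critical", "system-cluster-critical"} {
		if err := exec.Command("kubectl", "get", "priorityclass", pc).Run(); err != nil {
			fmt.Println(pc, "not found yet:", err)
			continue
		}
		fmt.Println(pc, "exists")
	}
}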
Jul 10 05:39:57.904224 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 05:39:57.904284 systemd[1]: kubelet.service: Consumed 849ms CPU time, 132.2M memory peak. Jul 10 05:39:57.906254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 05:39:58.127295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 05:39:58.131708 (kubelet)[2704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 05:39:58.191278 kubelet[2704]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 05:39:58.191278 kubelet[2704]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 05:39:58.191278 kubelet[2704]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 05:39:58.192196 kubelet[2704]: I0710 05:39:58.191361 2704 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 05:39:58.197790 kubelet[2704]: I0710 05:39:58.197733 2704 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 05:39:58.197790 kubelet[2704]: I0710 05:39:58.197769 2704 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 05:39:58.198051 kubelet[2704]: I0710 05:39:58.198035 2704 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 05:39:58.199296 kubelet[2704]: I0710 05:39:58.199252 2704 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 05:39:58.236670 kubelet[2704]: I0710 05:39:58.236613 2704 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 05:39:58.244369 kubelet[2704]: I0710 05:39:58.244328 2704 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 05:39:58.251696 kubelet[2704]: I0710 05:39:58.251656 2704 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 05:39:58.252059 kubelet[2704]: I0710 05:39:58.251973 2704 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 05:39:58.252259 kubelet[2704]: I0710 05:39:58.252040 2704 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 05:39:58.252259 kubelet[2704]: I0710 05:39:58.252251 2704 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 05:39:58.252259 kubelet[2704]: I0710 05:39:58.252262 2704 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 05:39:58.252541 kubelet[2704]: I0710 05:39:58.252320 2704 state_mem.go:36] "Initialized new in-memory state store" Jul 10 05:39:58.252541 kubelet[2704]: I0710 05:39:58.252511 2704 kubelet.go:446] "Attempting to sync node with API server" Jul 10 05:39:58.252541 kubelet[2704]: I0710 05:39:58.252534 2704 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 05:39:58.252620 kubelet[2704]: I0710 05:39:58.252557 2704 kubelet.go:352] "Adding apiserver pod source" Jul 10 05:39:58.252620 kubelet[2704]: I0710 05:39:58.252568 2704 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 05:39:58.256489 kubelet[2704]: I0710 05:39:58.255440 2704 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 10 05:39:58.256489 kubelet[2704]: I0710 05:39:58.255882 2704 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 05:39:58.256489 kubelet[2704]: I0710 05:39:58.256347 2704 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 05:39:58.256489 kubelet[2704]: I0710 05:39:58.256376 2704 server.go:1287] "Started kubelet" Jul 10 05:39:58.259484 kubelet[2704]: I0710 05:39:58.258011 2704 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 05:39:58.259484 kubelet[2704]: I0710 
05:39:58.258226 2704 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 05:39:58.259484 kubelet[2704]: I0710 05:39:58.258329 2704 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 05:39:58.259484 kubelet[2704]: I0710 05:39:58.259373 2704 server.go:479] "Adding debug handlers to kubelet server" Jul 10 05:39:58.262166 kubelet[2704]: I0710 05:39:58.262031 2704 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 05:39:58.265222 kubelet[2704]: E0710 05:39:58.265102 2704 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 05:39:58.266096 kubelet[2704]: I0710 05:39:58.266079 2704 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 05:39:58.269984 kubelet[2704]: I0710 05:39:58.269954 2704 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 05:39:58.270103 kubelet[2704]: I0710 05:39:58.270082 2704 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 05:39:58.270562 kubelet[2704]: I0710 05:39:58.270539 2704 reconciler.go:26] "Reconciler: start to sync state" Jul 10 05:39:58.271228 kubelet[2704]: I0710 05:39:58.271199 2704 factory.go:221] Registration of the systemd container factory successfully Jul 10 05:39:58.271322 kubelet[2704]: I0710 05:39:58.271296 2704 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 05:39:58.273103 kubelet[2704]: I0710 05:39:58.273071 2704 factory.go:221] Registration of the containerd container factory successfully Jul 10 05:39:58.276874 kubelet[2704]: I0710 05:39:58.276810 2704 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 05:39:58.278018 kubelet[2704]: I0710 05:39:58.277996 2704 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 05:39:58.278018 kubelet[2704]: I0710 05:39:58.278016 2704 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 05:39:58.278096 kubelet[2704]: I0710 05:39:58.278040 2704 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
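The restarted kubelet is again listening on 0.0.0.0:10250 and serving the podresources API on unix:/var/lib/kubelet/pod-resources/kubelet.sock. Below is a minimal liveness poke at the HTTPS port; certificate checks are skipped because the serving cert is the node-local /var/lib/kubelet/pki/kubelet.crt named above, and depending on the kubelet's authentication settings the reply may be 401 rather than "ok", which still proves the listener is up.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// The kubelet's self-signed serving certificate will not verify
			// against the system trust store; skip verification for this probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://127.0.0.1:10250/healthz")
	if err != nil {
		fmt.Println("kubelet port 10250 not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet answered with", resp.Status)
}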
Jul 10 05:39:58.278096 kubelet[2704]: I0710 05:39:58.278047 2704 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 05:39:58.278142 kubelet[2704]: E0710 05:39:58.278095 2704 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 05:39:58.312443 kubelet[2704]: I0710 05:39:58.312394 2704 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 05:39:58.312443 kubelet[2704]: I0710 05:39:58.312415 2704 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 05:39:58.312443 kubelet[2704]: I0710 05:39:58.312434 2704 state_mem.go:36] "Initialized new in-memory state store" Jul 10 05:39:58.312636 kubelet[2704]: I0710 05:39:58.312603 2704 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 05:39:58.312636 kubelet[2704]: I0710 05:39:58.312613 2704 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 05:39:58.312636 kubelet[2704]: I0710 05:39:58.312635 2704 policy_none.go:49] "None policy: Start" Jul 10 05:39:58.312701 kubelet[2704]: I0710 05:39:58.312645 2704 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 05:39:58.312701 kubelet[2704]: I0710 05:39:58.312657 2704 state_mem.go:35] "Initializing new in-memory state store" Jul 10 05:39:58.312769 kubelet[2704]: I0710 05:39:58.312756 2704 state_mem.go:75] "Updated machine memory state" Jul 10 05:39:58.316867 kubelet[2704]: I0710 05:39:58.316849 2704 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 05:39:58.317215 kubelet[2704]: I0710 05:39:58.317076 2704 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 05:39:58.317215 kubelet[2704]: I0710 05:39:58.317087 2704 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 05:39:58.317893 kubelet[2704]: I0710 05:39:58.317873 2704 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 05:39:58.320139 kubelet[2704]: E0710 05:39:58.319856 2704 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 10 05:39:58.379398 kubelet[2704]: I0710 05:39:58.378644 2704 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 05:39:58.379398 kubelet[2704]: I0710 05:39:58.378695 2704 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 05:39:58.379398 kubelet[2704]: I0710 05:39:58.378967 2704 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 05:39:58.384780 kubelet[2704]: E0710 05:39:58.384727 2704 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 05:39:58.384840 kubelet[2704]: E0710 05:39:58.384784 2704 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 05:39:58.420964 kubelet[2704]: I0710 05:39:58.420933 2704 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 05:39:58.427574 kubelet[2704]: I0710 05:39:58.427553 2704 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 10 05:39:58.427634 kubelet[2704]: I0710 05:39:58.427621 2704 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 05:39:58.471668 kubelet[2704]: I0710 05:39:58.471608 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bccbde3e573d232642a7b8a29dcb372e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bccbde3e573d232642a7b8a29dcb372e\") " pod="kube-system/kube-apiserver-localhost" Jul 10 05:39:58.471668 kubelet[2704]: I0710 05:39:58.471654 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bccbde3e573d232642a7b8a29dcb372e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bccbde3e573d232642a7b8a29dcb372e\") " pod="kube-system/kube-apiserver-localhost" Jul 10 05:39:58.471871 kubelet[2704]: I0710 05:39:58.471682 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:39:58.471871 kubelet[2704]: I0710 05:39:58.471702 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:39:58.471871 kubelet[2704]: I0710 05:39:58.471719 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:39:58.471871 kubelet[2704]: I0710 05:39:58.471733 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 10 05:39:58.471871 kubelet[2704]: I0710 05:39:58.471759 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bccbde3e573d232642a7b8a29dcb372e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bccbde3e573d232642a7b8a29dcb372e\") " pod="kube-system/kube-apiserver-localhost" Jul 10 05:39:58.472033 kubelet[2704]: I0710 05:39:58.471779 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:39:58.472033 kubelet[2704]: I0710 05:39:58.471800 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:39:59.253035 kubelet[2704]: I0710 05:39:59.253008 2704 apiserver.go:52] "Watching apiserver" Jul 10 05:39:59.270601 kubelet[2704]: I0710 05:39:59.270545 2704 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 05:39:59.295378 kubelet[2704]: I0710 05:39:59.295233 2704 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 05:39:59.295378 kubelet[2704]: I0710 05:39:59.295260 2704 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 05:39:59.346090 kubelet[2704]: E0710 05:39:59.346046 2704 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 10 05:39:59.346390 kubelet[2704]: E0710 05:39:59.346352 2704 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 05:39:59.363717 kubelet[2704]: I0710 05:39:59.363639 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.36359096 podStartE2EDuration="1.36359096s" podCreationTimestamp="2025-07-10 05:39:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 05:39:59.36322319 +0000 UTC m=+1.225038676" watchObservedRunningTime="2025-07-10 05:39:59.36359096 +0000 UTC m=+1.225406456" Jul 10 05:39:59.402802 kubelet[2704]: I0710 05:39:59.401445 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.401420403 podStartE2EDuration="3.401420403s" podCreationTimestamp="2025-07-10 05:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 05:39:59.401420493 +0000 UTC m=+1.263235989" watchObservedRunningTime="2025-07-10 
05:39:59.401420403 +0000 UTC m=+1.263235879" Jul 10 05:39:59.439223 kubelet[2704]: I0710 05:39:59.439062 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.439034473 podStartE2EDuration="2.439034473s" podCreationTimestamp="2025-07-10 05:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 05:39:59.418833609 +0000 UTC m=+1.280649095" watchObservedRunningTime="2025-07-10 05:39:59.439034473 +0000 UTC m=+1.300849959" Jul 10 05:40:02.730535 kubelet[2704]: I0710 05:40:02.730489 2704 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 05:40:02.731101 containerd[1580]: time="2025-07-10T05:40:02.730918919Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 05:40:02.731338 kubelet[2704]: I0710 05:40:02.731200 2704 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 05:40:03.710912 systemd[1]: Created slice kubepods-besteffort-pod5f0667c7_c8ab_4e30_ada0_e3ba31f15543.slice - libcontainer container kubepods-besteffort-pod5f0667c7_c8ab_4e30_ada0_e3ba31f15543.slice. Jul 10 05:40:03.802523 kubelet[2704]: I0710 05:40:03.802434 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f0667c7-c8ab-4e30-ada0-e3ba31f15543-xtables-lock\") pod \"kube-proxy-sndt7\" (UID: \"5f0667c7-c8ab-4e30-ada0-e3ba31f15543\") " pod="kube-system/kube-proxy-sndt7" Jul 10 05:40:03.802523 kubelet[2704]: I0710 05:40:03.802522 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f0667c7-c8ab-4e30-ada0-e3ba31f15543-kube-proxy\") pod \"kube-proxy-sndt7\" (UID: \"5f0667c7-c8ab-4e30-ada0-e3ba31f15543\") " pod="kube-system/kube-proxy-sndt7" Jul 10 05:40:03.803033 kubelet[2704]: I0710 05:40:03.802550 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f0667c7-c8ab-4e30-ada0-e3ba31f15543-lib-modules\") pod \"kube-proxy-sndt7\" (UID: \"5f0667c7-c8ab-4e30-ada0-e3ba31f15543\") " pod="kube-system/kube-proxy-sndt7" Jul 10 05:40:03.803033 kubelet[2704]: I0710 05:40:03.802573 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r84b5\" (UniqueName: \"kubernetes.io/projected/5f0667c7-c8ab-4e30-ada0-e3ba31f15543-kube-api-access-r84b5\") pod \"kube-proxy-sndt7\" (UID: \"5f0667c7-c8ab-4e30-ada0-e3ba31f15543\") " pod="kube-system/kube-proxy-sndt7" Jul 10 05:40:03.879182 systemd[1]: Created slice kubepods-besteffort-podfe060419_a55a_4003_99b1_83787f609e9c.slice - libcontainer container kubepods-besteffort-podfe060419_a55a_4003_99b1_83787f609e9c.slice. 
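The kuberuntime_manager and kubelet_network entries above show the node's pod CIDR moving from empty to 192.168.0.0/24 once the runtime config is pushed through CRI. As a standalone illustration (not code from any component logged here), a minimal Go sketch that parses that range with the standard library:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The pod CIDR reported by kubelet_network.go above.
        ip, ipnet, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        ones, bits := ipnet.Mask.Size()
        fmt.Println("network:", ipnet, "base ip:", ip)          // 192.168.0.0/24
        fmt.Println("prefix length:", ones, "of", bits, "bits") // 24 of 32
        // A /24 leaves 2^(32-24) = 256 addresses for pod IPs on this node.
        fmt.Println("addresses in range:", 1<<(bits-ones))
    }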
Jul 10 05:40:03.902790 kubelet[2704]: I0710 05:40:03.902744 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fe060419-a55a-4003-99b1-83787f609e9c-var-lib-calico\") pod \"tigera-operator-747864d56d-x6ttw\" (UID: \"fe060419-a55a-4003-99b1-83787f609e9c\") " pod="tigera-operator/tigera-operator-747864d56d-x6ttw" Jul 10 05:40:03.902790 kubelet[2704]: I0710 05:40:03.902790 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k279f\" (UniqueName: \"kubernetes.io/projected/fe060419-a55a-4003-99b1-83787f609e9c-kube-api-access-k279f\") pod \"tigera-operator-747864d56d-x6ttw\" (UID: \"fe060419-a55a-4003-99b1-83787f609e9c\") " pod="tigera-operator/tigera-operator-747864d56d-x6ttw" Jul 10 05:40:04.022744 containerd[1580]: time="2025-07-10T05:40:04.022686033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sndt7,Uid:5f0667c7-c8ab-4e30-ada0-e3ba31f15543,Namespace:kube-system,Attempt:0,}" Jul 10 05:40:04.184445 containerd[1580]: time="2025-07-10T05:40:04.184374434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-x6ttw,Uid:fe060419-a55a-4003-99b1-83787f609e9c,Namespace:tigera-operator,Attempt:0,}" Jul 10 05:40:04.225631 containerd[1580]: time="2025-07-10T05:40:04.225571519Z" level=info msg="connecting to shim 61368436ba5e1ace950b8033748c21727592b7630678c9904ca5bab073560c08" address="unix:///run/containerd/s/b87053e7ed04e5ff0ba46f7e1734d043c17ab425a02b9242d8e532c8736f33f3" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:40:04.226888 containerd[1580]: time="2025-07-10T05:40:04.226841854Z" level=info msg="connecting to shim 70ec347fad1b61c5cdeb998c2b5a6f06070d316d995a7e95489e5c7f1333b2d5" address="unix:///run/containerd/s/8880e87450fc31bfc627b219ab2711bf66e16753d2141510b6bc8ebfc7c025b4" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:40:04.255760 systemd[1]: Started cri-containerd-70ec347fad1b61c5cdeb998c2b5a6f06070d316d995a7e95489e5c7f1333b2d5.scope - libcontainer container 70ec347fad1b61c5cdeb998c2b5a6f06070d316d995a7e95489e5c7f1333b2d5. Jul 10 05:40:04.259554 systemd[1]: Started cri-containerd-61368436ba5e1ace950b8033748c21727592b7630678c9904ca5bab073560c08.scope - libcontainer container 61368436ba5e1ace950b8033748c21727592b7630678c9904ca5bab073560c08. 
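The "connecting to shim ... address=unix:///run/containerd/s/<id> ... protocol=ttrpc version=3" entries above describe containerd reaching its per-sandbox shim over a Unix domain socket. A minimal sketch of resolving and dialing such an address with the Go standard library (the ttrpc handshake itself is omitted; the socket path is copied from the log and only exists on that host, so dialing from anywhere else is expected to fail):

    package main

    import (
        "fmt"
        "net"
        "net/url"
        "time"
    )

    func main() {
        // Shim address exactly as logged by containerd for the kube-proxy sandbox.
        addr := "unix:///run/containerd/s/b87053e7ed04e5ff0ba46f7e1734d043c17ab425a02b9242d8e532c8736f33f3"

        u, err := url.Parse(addr)
        if err != nil {
            panic(err)
        }
        fmt.Println("scheme:", u.Scheme, "socket path:", u.Path)

        // Dial the socket; containerd layers a ttrpc client on top of a connection like this.
        conn, err := net.DialTimeout("unix", u.Path, 2*time.Second)
        if err != nil {
            fmt.Println("dial failed (expected off-host):", err)
            return
        }
        defer conn.Close()
        fmt.Println("connected to", conn.RemoteAddr())
    }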
Jul 10 05:40:04.299086 containerd[1580]: time="2025-07-10T05:40:04.298697923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sndt7,Uid:5f0667c7-c8ab-4e30-ada0-e3ba31f15543,Namespace:kube-system,Attempt:0,} returns sandbox id \"61368436ba5e1ace950b8033748c21727592b7630678c9904ca5bab073560c08\"" Jul 10 05:40:04.304655 containerd[1580]: time="2025-07-10T05:40:04.304604771Z" level=info msg="CreateContainer within sandbox \"61368436ba5e1ace950b8033748c21727592b7630678c9904ca5bab073560c08\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 05:40:04.310650 containerd[1580]: time="2025-07-10T05:40:04.310592102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-x6ttw,Uid:fe060419-a55a-4003-99b1-83787f609e9c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"70ec347fad1b61c5cdeb998c2b5a6f06070d316d995a7e95489e5c7f1333b2d5\"" Jul 10 05:40:04.312114 containerd[1580]: time="2025-07-10T05:40:04.312074863Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 10 05:40:04.322023 containerd[1580]: time="2025-07-10T05:40:04.321975976Z" level=info msg="Container 4ece06a08d4e1d7bf9b86a4edb63981499ed53fa3e99f105269a40b925a8e502: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:04.330763 containerd[1580]: time="2025-07-10T05:40:04.330709942Z" level=info msg="CreateContainer within sandbox \"61368436ba5e1ace950b8033748c21727592b7630678c9904ca5bab073560c08\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4ece06a08d4e1d7bf9b86a4edb63981499ed53fa3e99f105269a40b925a8e502\"" Jul 10 05:40:04.331249 containerd[1580]: time="2025-07-10T05:40:04.331194697Z" level=info msg="StartContainer for \"4ece06a08d4e1d7bf9b86a4edb63981499ed53fa3e99f105269a40b925a8e502\"" Jul 10 05:40:04.332551 containerd[1580]: time="2025-07-10T05:40:04.332524075Z" level=info msg="connecting to shim 4ece06a08d4e1d7bf9b86a4edb63981499ed53fa3e99f105269a40b925a8e502" address="unix:///run/containerd/s/b87053e7ed04e5ff0ba46f7e1734d043c17ab425a02b9242d8e532c8736f33f3" protocol=ttrpc version=3 Jul 10 05:40:04.359613 systemd[1]: Started cri-containerd-4ece06a08d4e1d7bf9b86a4edb63981499ed53fa3e99f105269a40b925a8e502.scope - libcontainer container 4ece06a08d4e1d7bf9b86a4edb63981499ed53fa3e99f105269a40b925a8e502. Jul 10 05:40:04.410419 containerd[1580]: time="2025-07-10T05:40:04.410371764Z" level=info msg="StartContainer for \"4ece06a08d4e1d7bf9b86a4edb63981499ed53fa3e99f105269a40b925a8e502\" returns successfully" Jul 10 05:40:05.326448 kubelet[2704]: I0710 05:40:05.326206 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sndt7" podStartSLOduration=2.326188709 podStartE2EDuration="2.326188709s" podCreationTimestamp="2025-07-10 05:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 05:40:05.326033643 +0000 UTC m=+7.187849129" watchObservedRunningTime="2025-07-10 05:40:05.326188709 +0000 UTC m=+7.188004195" Jul 10 05:40:06.271181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822388068.mount: Deactivated successfully. 
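The pod_startup_latency_tracker entry above reports podStartSLOduration=2.326188709s for kube-proxy-sndt7. For pods that needed no image pull (firstStartedPulling and lastFinishedPulling are the zero time), that figure is simply watchObservedRunningTime minus podCreationTimestamp. A quick check of the logged values, assuming only that the timestamps use Go's default time.Time formatting:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching the timestamps printed in the kubelet log above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

        created, err := time.Parse(layout, "2025-07-10 05:40:03 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2025-07-10 05:40:05.326188709 +0000 UTC")
        if err != nil {
            panic(err)
        }

        // Prints 2.326188709s, matching podStartSLOduration in the log entry.
        fmt.Println("startup SLO duration:", observed.Sub(created))
    }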
Jul 10 05:40:06.875764 containerd[1580]: time="2025-07-10T05:40:06.875692415Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:06.876354 containerd[1580]: time="2025-07-10T05:40:06.876308969Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 10 05:40:06.877498 containerd[1580]: time="2025-07-10T05:40:06.877434774Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:06.879224 containerd[1580]: time="2025-07-10T05:40:06.879187192Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:06.879800 containerd[1580]: time="2025-07-10T05:40:06.879766365Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.567643682s" Jul 10 05:40:06.879800 containerd[1580]: time="2025-07-10T05:40:06.879797334Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 10 05:40:06.881697 containerd[1580]: time="2025-07-10T05:40:06.881649222Z" level=info msg="CreateContainer within sandbox \"70ec347fad1b61c5cdeb998c2b5a6f06070d316d995a7e95489e5c7f1333b2d5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 10 05:40:06.889562 containerd[1580]: time="2025-07-10T05:40:06.889521901Z" level=info msg="Container 7204a0e0e7b1b43f9f8709def7f36db012163816a911395e3dd19b5e5b37f4ad: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:06.893072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4265546758.mount: Deactivated successfully. Jul 10 05:40:06.895622 containerd[1580]: time="2025-07-10T05:40:06.895585533Z" level=info msg="CreateContainer within sandbox \"70ec347fad1b61c5cdeb998c2b5a6f06070d316d995a7e95489e5c7f1333b2d5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7204a0e0e7b1b43f9f8709def7f36db012163816a911395e3dd19b5e5b37f4ad\"" Jul 10 05:40:06.895986 containerd[1580]: time="2025-07-10T05:40:06.895956420Z" level=info msg="StartContainer for \"7204a0e0e7b1b43f9f8709def7f36db012163816a911395e3dd19b5e5b37f4ad\"" Jul 10 05:40:06.896786 containerd[1580]: time="2025-07-10T05:40:06.896731326Z" level=info msg="connecting to shim 7204a0e0e7b1b43f9f8709def7f36db012163816a911395e3dd19b5e5b37f4ad" address="unix:///run/containerd/s/8880e87450fc31bfc627b219ab2711bf66e16753d2141510b6bc8ebfc7c025b4" protocol=ttrpc version=3 Jul 10 05:40:06.941596 systemd[1]: Started cri-containerd-7204a0e0e7b1b43f9f8709def7f36db012163816a911395e3dd19b5e5b37f4ad.scope - libcontainer container 7204a0e0e7b1b43f9f8709def7f36db012163816a911395e3dd19b5e5b37f4ad. 
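The "Pulled image quay.io/tigera/operator:v1.38.3 ... size \"25052538\" in 2.567643682s" entry above gives enough to estimate pull throughput, roughly 9.8 MB/s (about 9.3 MiB/s). A trivial sketch of that arithmetic using only the numbers from the log:

    package main

    import "fmt"

    func main() {
        // Values reported by containerd in the PullImage entry above.
        const (
            sizeBytes = 25052538    // "size \"25052538\""
            seconds   = 2.567643682 // "in 2.567643682s"
        )

        bytesPerSec := sizeBytes / seconds
        fmt.Printf("throughput: %.1f MB/s (%.1f MiB/s)\n",
            bytesPerSec/1e6, bytesPerSec/(1<<20))
    }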
Jul 10 05:40:06.972639 containerd[1580]: time="2025-07-10T05:40:06.972594275Z" level=info msg="StartContainer for \"7204a0e0e7b1b43f9f8709def7f36db012163816a911395e3dd19b5e5b37f4ad\" returns successfully" Jul 10 05:40:07.325854 kubelet[2704]: I0710 05:40:07.325752 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-x6ttw" podStartSLOduration=1.756859719 podStartE2EDuration="4.325733515s" podCreationTimestamp="2025-07-10 05:40:03 +0000 UTC" firstStartedPulling="2025-07-10 05:40:04.31153353 +0000 UTC m=+6.173349016" lastFinishedPulling="2025-07-10 05:40:06.880407326 +0000 UTC m=+8.742222812" observedRunningTime="2025-07-10 05:40:07.325228444 +0000 UTC m=+9.187043930" watchObservedRunningTime="2025-07-10 05:40:07.325733515 +0000 UTC m=+9.187549001" Jul 10 05:40:08.907551 systemd[1]: cri-containerd-7204a0e0e7b1b43f9f8709def7f36db012163816a911395e3dd19b5e5b37f4ad.scope: Deactivated successfully. Jul 10 05:40:08.910922 containerd[1580]: time="2025-07-10T05:40:08.910709732Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7204a0e0e7b1b43f9f8709def7f36db012163816a911395e3dd19b5e5b37f4ad\" id:\"7204a0e0e7b1b43f9f8709def7f36db012163816a911395e3dd19b5e5b37f4ad\" pid:3027 exit_status:1 exited_at:{seconds:1752126008 nanos:908172167}" Jul 10 05:40:08.910922 containerd[1580]: time="2025-07-10T05:40:08.910745140Z" level=info msg="received exit event container_id:\"7204a0e0e7b1b43f9f8709def7f36db012163816a911395e3dd19b5e5b37f4ad\" id:\"7204a0e0e7b1b43f9f8709def7f36db012163816a911395e3dd19b5e5b37f4ad\" pid:3027 exit_status:1 exited_at:{seconds:1752126008 nanos:908172167}" Jul 10 05:40:08.947438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7204a0e0e7b1b43f9f8709def7f36db012163816a911395e3dd19b5e5b37f4ad-rootfs.mount: Deactivated successfully. 
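The TaskExit event above records the tigera-operator container exiting with exit_status:1 and exited_at:{seconds:1752126008 nanos:908172167}. That pair is an ordinary Unix timestamp; converting it back confirms it is the same instant as the surrounding 05:40:08 journal entries:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // exited_at from the TaskExit event in the containerd log above.
        exitedAt := time.Unix(1752126008, 908172167).UTC()

        // Prints 2025-07-10 05:40:08.908172167 +0000 UTC,
        // matching the journald timestamps around the event.
        fmt.Println("container exited at:", exitedAt)
    }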
Jul 10 05:40:09.338360 kubelet[2704]: I0710 05:40:09.338309 2704 scope.go:117] "RemoveContainer" containerID="7204a0e0e7b1b43f9f8709def7f36db012163816a911395e3dd19b5e5b37f4ad" Jul 10 05:40:09.342201 containerd[1580]: time="2025-07-10T05:40:09.342155548Z" level=info msg="CreateContainer within sandbox \"70ec347fad1b61c5cdeb998c2b5a6f06070d316d995a7e95489e5c7f1333b2d5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 10 05:40:09.360162 containerd[1580]: time="2025-07-10T05:40:09.358407494Z" level=info msg="Container 7c195bc604684a0e080b2cabcb6a36d95003bc103e099c324702c90719c30d7a: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:09.370171 containerd[1580]: time="2025-07-10T05:40:09.370124055Z" level=info msg="CreateContainer within sandbox \"70ec347fad1b61c5cdeb998c2b5a6f06070d316d995a7e95489e5c7f1333b2d5\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"7c195bc604684a0e080b2cabcb6a36d95003bc103e099c324702c90719c30d7a\"" Jul 10 05:40:09.371586 containerd[1580]: time="2025-07-10T05:40:09.370638512Z" level=info msg="StartContainer for \"7c195bc604684a0e080b2cabcb6a36d95003bc103e099c324702c90719c30d7a\"" Jul 10 05:40:09.371586 containerd[1580]: time="2025-07-10T05:40:09.371374931Z" level=info msg="connecting to shim 7c195bc604684a0e080b2cabcb6a36d95003bc103e099c324702c90719c30d7a" address="unix:///run/containerd/s/8880e87450fc31bfc627b219ab2711bf66e16753d2141510b6bc8ebfc7c025b4" protocol=ttrpc version=3 Jul 10 05:40:09.412722 systemd[1]: Started cri-containerd-7c195bc604684a0e080b2cabcb6a36d95003bc103e099c324702c90719c30d7a.scope - libcontainer container 7c195bc604684a0e080b2cabcb6a36d95003bc103e099c324702c90719c30d7a. Jul 10 05:40:09.467890 containerd[1580]: time="2025-07-10T05:40:09.467706159Z" level=info msg="StartContainer for \"7c195bc604684a0e080b2cabcb6a36d95003bc103e099c324702c90719c30d7a\" returns successfully" Jul 10 05:40:12.226334 sudo[1783]: pam_unix(sudo:session): session closed for user root Jul 10 05:40:12.227875 sshd[1782]: Connection closed by 10.0.0.1 port 59474 Jul 10 05:40:12.228566 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Jul 10 05:40:12.232493 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:59474.service: Deactivated successfully. Jul 10 05:40:12.236238 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 05:40:12.236637 systemd[1]: session-7.scope: Consumed 4.548s CPU time, 227M memory peak. Jul 10 05:40:12.242013 systemd-logind[1547]: Session 7 logged out. Waiting for processes to exit. Jul 10 05:40:12.244089 systemd-logind[1547]: Removed session 7. Jul 10 05:40:14.357073 update_engine[1549]: I20250710 05:40:14.355541 1549 update_attempter.cc:509] Updating boot flags... Jul 10 05:40:15.195002 systemd[1]: Created slice kubepods-besteffort-pod71606d54_3348_444d_92b6_7d46cd5a0914.slice - libcontainer container kubepods-besteffort-pod71606d54_3348_444d_92b6_7d46cd5a0914.slice. 
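The systemd "Created slice kubepods-besteffort-pod<uid>.slice" entries follow a fixed pattern: the pod's QoS class plus its UID with dashes turned into underscores (systemd then escapes dashes as \x2d inside derived unit names, as in the tmpmount entries earlier). A small sketch of that naming rule using the calico-typha pod UID from the log; the helper below is illustrative and not the kubelet's actual implementation:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceNameFor reproduces the naming pattern visible in the systemd log:
    // kubepods-<qos>-pod<uid with dashes as underscores>.slice
    // (illustrative helper only).
    func sliceNameFor(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice",
            qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // UID of calico-typha-744ff4f8dc-dzw9l from the log above.
        fmt.Println(sliceNameFor("besteffort", "71606d54-3348-444d-92b6-7d46cd5a0914"))
        // Output: kubepods-besteffort-pod71606d54_3348_444d_92b6_7d46cd5a0914.slice
    }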
Jul 10 05:40:15.278789 kubelet[2704]: I0710 05:40:15.278716 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71606d54-3348-444d-92b6-7d46cd5a0914-tigera-ca-bundle\") pod \"calico-typha-744ff4f8dc-dzw9l\" (UID: \"71606d54-3348-444d-92b6-7d46cd5a0914\") " pod="calico-system/calico-typha-744ff4f8dc-dzw9l" Jul 10 05:40:15.278789 kubelet[2704]: I0710 05:40:15.278770 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/71606d54-3348-444d-92b6-7d46cd5a0914-typha-certs\") pod \"calico-typha-744ff4f8dc-dzw9l\" (UID: \"71606d54-3348-444d-92b6-7d46cd5a0914\") " pod="calico-system/calico-typha-744ff4f8dc-dzw9l" Jul 10 05:40:15.278789 kubelet[2704]: I0710 05:40:15.278790 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm2mt\" (UniqueName: \"kubernetes.io/projected/71606d54-3348-444d-92b6-7d46cd5a0914-kube-api-access-bm2mt\") pod \"calico-typha-744ff4f8dc-dzw9l\" (UID: \"71606d54-3348-444d-92b6-7d46cd5a0914\") " pod="calico-system/calico-typha-744ff4f8dc-dzw9l" Jul 10 05:40:15.504246 containerd[1580]: time="2025-07-10T05:40:15.504201628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-744ff4f8dc-dzw9l,Uid:71606d54-3348-444d-92b6-7d46cd5a0914,Namespace:calico-system,Attempt:0,}" Jul 10 05:40:15.551587 containerd[1580]: time="2025-07-10T05:40:15.551498644Z" level=info msg="connecting to shim 1d372654d2f61867db5eba6f33130fc2b4c27feeabf21cf7310e1489e51f7dc0" address="unix:///run/containerd/s/1786528203ec3eed00f33d37af53ea0373e1098dad4b748f0ada0ca03b1b716c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:40:15.579926 systemd[1]: Created slice kubepods-besteffort-pod0ef19eee_f221_4806_b325_4ea164b00f6d.slice - libcontainer container kubepods-besteffort-pod0ef19eee_f221_4806_b325_4ea164b00f6d.slice. 
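The long run of kubelet driver-call.go errors that follows ("Failed to unmarshal output for command: init ... unexpected end of JSON input") comes from the FlexVolume probe: the kubelet invokes the driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and expects a JSON status on stdout, but the executable is not found, so the output is empty and decoding fails; the probe is retried repeatedly, which is why the same three messages recur throughout the rest of the log. The empty-input error is easy to reproduce with the standard library (the struct below is illustrative, not kubelet's exact type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus mirrors the general shape of a FlexVolume driver reply
    // (field names here are illustrative).
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message"`
    }

    func main() {
        var st driverStatus

        // The driver binary was never executed, so its "output" is empty.
        err := json.Unmarshal([]byte(""), &st)

        // Prints: unexpected end of JSON input — the exact error in the kubelet log.
        fmt.Println(err)
    }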
Jul 10 05:40:15.580551 kubelet[2704]: I0710 05:40:15.580430 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0ef19eee-f221-4806-b325-4ea164b00f6d-cni-bin-dir\") pod \"calico-node-p7gwd\" (UID: \"0ef19eee-f221-4806-b325-4ea164b00f6d\") " pod="calico-system/calico-node-p7gwd" Jul 10 05:40:15.582022 kubelet[2704]: I0710 05:40:15.581994 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkz8k\" (UniqueName: \"kubernetes.io/projected/0ef19eee-f221-4806-b325-4ea164b00f6d-kube-api-access-mkz8k\") pod \"calico-node-p7gwd\" (UID: \"0ef19eee-f221-4806-b325-4ea164b00f6d\") " pod="calico-system/calico-node-p7gwd" Jul 10 05:40:15.584573 kubelet[2704]: I0710 05:40:15.582092 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0ef19eee-f221-4806-b325-4ea164b00f6d-policysync\") pod \"calico-node-p7gwd\" (UID: \"0ef19eee-f221-4806-b325-4ea164b00f6d\") " pod="calico-system/calico-node-p7gwd" Jul 10 05:40:15.584573 kubelet[2704]: I0710 05:40:15.582115 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0ef19eee-f221-4806-b325-4ea164b00f6d-var-lib-calico\") pod \"calico-node-p7gwd\" (UID: \"0ef19eee-f221-4806-b325-4ea164b00f6d\") " pod="calico-system/calico-node-p7gwd" Jul 10 05:40:15.584573 kubelet[2704]: I0710 05:40:15.582283 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ef19eee-f221-4806-b325-4ea164b00f6d-xtables-lock\") pod \"calico-node-p7gwd\" (UID: \"0ef19eee-f221-4806-b325-4ea164b00f6d\") " pod="calico-system/calico-node-p7gwd" Jul 10 05:40:15.584573 kubelet[2704]: I0710 05:40:15.582352 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ef19eee-f221-4806-b325-4ea164b00f6d-lib-modules\") pod \"calico-node-p7gwd\" (UID: \"0ef19eee-f221-4806-b325-4ea164b00f6d\") " pod="calico-system/calico-node-p7gwd" Jul 10 05:40:15.584573 kubelet[2704]: I0710 05:40:15.582371 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0ef19eee-f221-4806-b325-4ea164b00f6d-node-certs\") pod \"calico-node-p7gwd\" (UID: \"0ef19eee-f221-4806-b325-4ea164b00f6d\") " pod="calico-system/calico-node-p7gwd" Jul 10 05:40:15.584735 kubelet[2704]: I0710 05:40:15.582390 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ef19eee-f221-4806-b325-4ea164b00f6d-tigera-ca-bundle\") pod \"calico-node-p7gwd\" (UID: \"0ef19eee-f221-4806-b325-4ea164b00f6d\") " pod="calico-system/calico-node-p7gwd" Jul 10 05:40:15.584735 kubelet[2704]: I0710 05:40:15.582804 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0ef19eee-f221-4806-b325-4ea164b00f6d-cni-log-dir\") pod \"calico-node-p7gwd\" (UID: \"0ef19eee-f221-4806-b325-4ea164b00f6d\") " pod="calico-system/calico-node-p7gwd" Jul 10 05:40:15.584735 kubelet[2704]: I0710 05:40:15.582886 2704 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0ef19eee-f221-4806-b325-4ea164b00f6d-flexvol-driver-host\") pod \"calico-node-p7gwd\" (UID: \"0ef19eee-f221-4806-b325-4ea164b00f6d\") " pod="calico-system/calico-node-p7gwd" Jul 10 05:40:15.584735 kubelet[2704]: I0710 05:40:15.583053 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0ef19eee-f221-4806-b325-4ea164b00f6d-cni-net-dir\") pod \"calico-node-p7gwd\" (UID: \"0ef19eee-f221-4806-b325-4ea164b00f6d\") " pod="calico-system/calico-node-p7gwd" Jul 10 05:40:15.584735 kubelet[2704]: I0710 05:40:15.583119 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0ef19eee-f221-4806-b325-4ea164b00f6d-var-run-calico\") pod \"calico-node-p7gwd\" (UID: \"0ef19eee-f221-4806-b325-4ea164b00f6d\") " pod="calico-system/calico-node-p7gwd" Jul 10 05:40:15.598073 systemd[1]: Started cri-containerd-1d372654d2f61867db5eba6f33130fc2b4c27feeabf21cf7310e1489e51f7dc0.scope - libcontainer container 1d372654d2f61867db5eba6f33130fc2b4c27feeabf21cf7310e1489e51f7dc0. Jul 10 05:40:15.650106 containerd[1580]: time="2025-07-10T05:40:15.650047450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-744ff4f8dc-dzw9l,Uid:71606d54-3348-444d-92b6-7d46cd5a0914,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d372654d2f61867db5eba6f33130fc2b4c27feeabf21cf7310e1489e51f7dc0\"" Jul 10 05:40:15.653198 containerd[1580]: time="2025-07-10T05:40:15.652621961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 05:40:15.690903 kubelet[2704]: E0710 05:40:15.690867 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.690903 kubelet[2704]: W0710 05:40:15.690893 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.691157 kubelet[2704]: E0710 05:40:15.690945 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.698581 kubelet[2704]: E0710 05:40:15.698555 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.698581 kubelet[2704]: W0710 05:40:15.698572 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.698782 kubelet[2704]: E0710 05:40:15.698600 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:15.812404 kubelet[2704]: E0710 05:40:15.811913 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rq8bf" podUID="04ca591d-8202-4053-a596-a5753a64e21d" Jul 10 05:40:15.872121 kubelet[2704]: E0710 05:40:15.872041 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.872121 kubelet[2704]: W0710 05:40:15.872094 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.872121 kubelet[2704]: E0710 05:40:15.872128 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.872385 kubelet[2704]: E0710 05:40:15.872329 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.872385 kubelet[2704]: W0710 05:40:15.872338 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.872385 kubelet[2704]: E0710 05:40:15.872347 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.872696 kubelet[2704]: E0710 05:40:15.872657 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.872696 kubelet[2704]: W0710 05:40:15.872680 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.872831 kubelet[2704]: E0710 05:40:15.872701 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.873133 kubelet[2704]: E0710 05:40:15.873091 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.873133 kubelet[2704]: W0710 05:40:15.873107 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.873133 kubelet[2704]: E0710 05:40:15.873121 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:15.873547 kubelet[2704]: E0710 05:40:15.873500 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.873547 kubelet[2704]: W0710 05:40:15.873532 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.873675 kubelet[2704]: E0710 05:40:15.873555 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.874005 kubelet[2704]: E0710 05:40:15.873958 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.874005 kubelet[2704]: W0710 05:40:15.873985 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.874128 kubelet[2704]: E0710 05:40:15.874015 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.875341 kubelet[2704]: E0710 05:40:15.875311 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.875341 kubelet[2704]: W0710 05:40:15.875326 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.875341 kubelet[2704]: E0710 05:40:15.875335 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.875642 kubelet[2704]: E0710 05:40:15.875612 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.875642 kubelet[2704]: W0710 05:40:15.875625 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.875642 kubelet[2704]: E0710 05:40:15.875634 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.875891 kubelet[2704]: E0710 05:40:15.875861 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.875891 kubelet[2704]: W0710 05:40:15.875883 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.876008 kubelet[2704]: E0710 05:40:15.875905 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:15.876270 kubelet[2704]: E0710 05:40:15.876222 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.876270 kubelet[2704]: W0710 05:40:15.876252 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.876381 kubelet[2704]: E0710 05:40:15.876273 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.876654 kubelet[2704]: E0710 05:40:15.876616 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.876654 kubelet[2704]: W0710 05:40:15.876648 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.876798 kubelet[2704]: E0710 05:40:15.876661 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.877100 kubelet[2704]: E0710 05:40:15.877055 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.877100 kubelet[2704]: W0710 05:40:15.877074 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.877100 kubelet[2704]: E0710 05:40:15.877098 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.877590 kubelet[2704]: E0710 05:40:15.877401 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.877590 kubelet[2704]: W0710 05:40:15.877429 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.877590 kubelet[2704]: E0710 05:40:15.877444 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.877760 kubelet[2704]: E0710 05:40:15.877731 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.877760 kubelet[2704]: W0710 05:40:15.877754 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.877865 kubelet[2704]: E0710 05:40:15.877764 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:15.878885 kubelet[2704]: E0710 05:40:15.878124 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.878965 kubelet[2704]: W0710 05:40:15.878884 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.878965 kubelet[2704]: E0710 05:40:15.878937 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.879397 kubelet[2704]: E0710 05:40:15.879382 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.879397 kubelet[2704]: W0710 05:40:15.879396 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.879509 kubelet[2704]: E0710 05:40:15.879405 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.879776 kubelet[2704]: E0710 05:40:15.879705 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.879776 kubelet[2704]: W0710 05:40:15.879738 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.879776 kubelet[2704]: E0710 05:40:15.879767 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.880139 kubelet[2704]: E0710 05:40:15.880060 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.880139 kubelet[2704]: W0710 05:40:15.880070 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.880139 kubelet[2704]: E0710 05:40:15.880083 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.880360 kubelet[2704]: E0710 05:40:15.880340 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.880360 kubelet[2704]: W0710 05:40:15.880352 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.880454 kubelet[2704]: E0710 05:40:15.880379 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:15.880715 kubelet[2704]: E0710 05:40:15.880686 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.880715 kubelet[2704]: W0710 05:40:15.880710 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.880813 kubelet[2704]: E0710 05:40:15.880736 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.886217 containerd[1580]: time="2025-07-10T05:40:15.885919713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p7gwd,Uid:0ef19eee-f221-4806-b325-4ea164b00f6d,Namespace:calico-system,Attempt:0,}" Jul 10 05:40:15.887201 kubelet[2704]: E0710 05:40:15.887172 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.887201 kubelet[2704]: W0710 05:40:15.887190 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.887201 kubelet[2704]: E0710 05:40:15.887202 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.887293 kubelet[2704]: I0710 05:40:15.887223 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/04ca591d-8202-4053-a596-a5753a64e21d-registration-dir\") pod \"csi-node-driver-rq8bf\" (UID: \"04ca591d-8202-4053-a596-a5753a64e21d\") " pod="calico-system/csi-node-driver-rq8bf" Jul 10 05:40:15.887401 kubelet[2704]: E0710 05:40:15.887385 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.887401 kubelet[2704]: W0710 05:40:15.887399 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.887484 kubelet[2704]: E0710 05:40:15.887410 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:15.887484 kubelet[2704]: I0710 05:40:15.887424 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f5kl\" (UniqueName: \"kubernetes.io/projected/04ca591d-8202-4053-a596-a5753a64e21d-kube-api-access-2f5kl\") pod \"csi-node-driver-rq8bf\" (UID: \"04ca591d-8202-4053-a596-a5753a64e21d\") " pod="calico-system/csi-node-driver-rq8bf" Jul 10 05:40:15.887764 kubelet[2704]: E0710 05:40:15.887718 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.887764 kubelet[2704]: W0710 05:40:15.887759 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.888208 kubelet[2704]: E0710 05:40:15.887800 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.888208 kubelet[2704]: I0710 05:40:15.887834 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/04ca591d-8202-4053-a596-a5753a64e21d-socket-dir\") pod \"csi-node-driver-rq8bf\" (UID: \"04ca591d-8202-4053-a596-a5753a64e21d\") " pod="calico-system/csi-node-driver-rq8bf" Jul 10 05:40:15.888903 kubelet[2704]: E0710 05:40:15.888821 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.888903 kubelet[2704]: W0710 05:40:15.888854 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.889507 kubelet[2704]: E0710 05:40:15.889440 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.889687 kubelet[2704]: E0710 05:40:15.889670 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.889798 kubelet[2704]: W0710 05:40:15.889772 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.889798 kubelet[2704]: E0710 05:40:15.889796 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.890013 kubelet[2704]: E0710 05:40:15.889996 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.890013 kubelet[2704]: W0710 05:40:15.890006 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.890070 kubelet[2704]: E0710 05:40:15.890020 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:15.890070 kubelet[2704]: I0710 05:40:15.890051 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04ca591d-8202-4053-a596-a5753a64e21d-kubelet-dir\") pod \"csi-node-driver-rq8bf\" (UID: \"04ca591d-8202-4053-a596-a5753a64e21d\") " pod="calico-system/csi-node-driver-rq8bf" Jul 10 05:40:15.890291 kubelet[2704]: E0710 05:40:15.890243 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.890291 kubelet[2704]: W0710 05:40:15.890272 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.890291 kubelet[2704]: E0710 05:40:15.890290 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.892246 kubelet[2704]: E0710 05:40:15.892226 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.892246 kubelet[2704]: W0710 05:40:15.892240 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.892246 kubelet[2704]: E0710 05:40:15.892250 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.892510 kubelet[2704]: E0710 05:40:15.892485 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.892510 kubelet[2704]: W0710 05:40:15.892497 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.892588 kubelet[2704]: E0710 05:40:15.892517 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.892588 kubelet[2704]: I0710 05:40:15.892534 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/04ca591d-8202-4053-a596-a5753a64e21d-varrun\") pod \"csi-node-driver-rq8bf\" (UID: \"04ca591d-8202-4053-a596-a5753a64e21d\") " pod="calico-system/csi-node-driver-rq8bf" Jul 10 05:40:15.892868 kubelet[2704]: E0710 05:40:15.892831 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.892868 kubelet[2704]: W0710 05:40:15.892847 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.892868 kubelet[2704]: E0710 05:40:15.892863 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:15.893089 kubelet[2704]: E0710 05:40:15.893054 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.893089 kubelet[2704]: W0710 05:40:15.893075 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.893183 kubelet[2704]: E0710 05:40:15.893098 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.893488 kubelet[2704]: E0710 05:40:15.893420 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.893488 kubelet[2704]: W0710 05:40:15.893445 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.893607 kubelet[2704]: E0710 05:40:15.893501 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.893839 kubelet[2704]: E0710 05:40:15.893819 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.893839 kubelet[2704]: W0710 05:40:15.893832 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.893839 kubelet[2704]: E0710 05:40:15.893842 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.894124 kubelet[2704]: E0710 05:40:15.894090 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.894196 kubelet[2704]: W0710 05:40:15.894125 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.894196 kubelet[2704]: E0710 05:40:15.894137 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.896008 kubelet[2704]: E0710 05:40:15.895562 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.896008 kubelet[2704]: W0710 05:40:15.895582 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.896008 kubelet[2704]: E0710 05:40:15.895592 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:15.940904 containerd[1580]: time="2025-07-10T05:40:15.940304079Z" level=info msg="connecting to shim 521a7946be8b7066373cfa2b90f485c1ad2b1764681d47fca5412834790c7940" address="unix:///run/containerd/s/cc11e7b00039f4bd63011532dc882e868f122d7e95d98c051a4dbee4b6c0f2b7" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:40:15.984087 systemd[1]: Started cri-containerd-521a7946be8b7066373cfa2b90f485c1ad2b1764681d47fca5412834790c7940.scope - libcontainer container 521a7946be8b7066373cfa2b90f485c1ad2b1764681d47fca5412834790c7940. Jul 10 05:40:15.994667 kubelet[2704]: E0710 05:40:15.994617 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.994667 kubelet[2704]: W0710 05:40:15.994646 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.994667 kubelet[2704]: E0710 05:40:15.994669 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.995228 kubelet[2704]: E0710 05:40:15.994962 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.995228 kubelet[2704]: W0710 05:40:15.994977 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.995228 kubelet[2704]: E0710 05:40:15.995002 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.995309 kubelet[2704]: E0710 05:40:15.995282 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.995309 kubelet[2704]: W0710 05:40:15.995294 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.995351 kubelet[2704]: E0710 05:40:15.995311 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.995622 kubelet[2704]: E0710 05:40:15.995583 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.995622 kubelet[2704]: W0710 05:40:15.995599 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.995622 kubelet[2704]: E0710 05:40:15.995615 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:15.995857 kubelet[2704]: E0710 05:40:15.995833 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.995857 kubelet[2704]: W0710 05:40:15.995849 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.995914 kubelet[2704]: E0710 05:40:15.995865 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.996107 kubelet[2704]: E0710 05:40:15.996084 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.996107 kubelet[2704]: W0710 05:40:15.996099 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.996221 kubelet[2704]: E0710 05:40:15.996184 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.996399 kubelet[2704]: E0710 05:40:15.996376 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.996399 kubelet[2704]: W0710 05:40:15.996391 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.996453 kubelet[2704]: E0710 05:40:15.996429 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.996687 kubelet[2704]: E0710 05:40:15.996653 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.996687 kubelet[2704]: W0710 05:40:15.996669 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.996759 kubelet[2704]: E0710 05:40:15.996701 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.997044 kubelet[2704]: E0710 05:40:15.997020 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.997044 kubelet[2704]: W0710 05:40:15.997036 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.997093 kubelet[2704]: E0710 05:40:15.997066 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:15.997508 kubelet[2704]: E0710 05:40:15.997485 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.997508 kubelet[2704]: W0710 05:40:15.997501 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.998022 kubelet[2704]: E0710 05:40:15.997997 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.998245 kubelet[2704]: E0710 05:40:15.998215 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.998245 kubelet[2704]: W0710 05:40:15.998231 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.998366 kubelet[2704]: E0710 05:40:15.998340 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.998631 kubelet[2704]: E0710 05:40:15.998597 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.998631 kubelet[2704]: W0710 05:40:15.998614 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.998720 kubelet[2704]: E0710 05:40:15.998669 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.998870 kubelet[2704]: E0710 05:40:15.998845 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.998870 kubelet[2704]: W0710 05:40:15.998860 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.998963 kubelet[2704]: E0710 05:40:15.998941 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.999045 kubelet[2704]: E0710 05:40:15.999024 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.999045 kubelet[2704]: W0710 05:40:15.999038 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.999113 kubelet[2704]: E0710 05:40:15.999088 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:15.999314 kubelet[2704]: E0710 05:40:15.999258 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.999314 kubelet[2704]: W0710 05:40:15.999270 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.999369 kubelet[2704]: E0710 05:40:15.999318 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:15.999574 kubelet[2704]: E0710 05:40:15.999523 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:15.999574 kubelet[2704]: W0710 05:40:15.999536 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:15.999574 kubelet[2704]: E0710 05:40:15.999556 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:16.000012 kubelet[2704]: E0710 05:40:15.999924 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:16.000012 kubelet[2704]: W0710 05:40:15.999937 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:16.000012 kubelet[2704]: E0710 05:40:15.999950 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:16.000234 kubelet[2704]: E0710 05:40:16.000211 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:16.000234 kubelet[2704]: W0710 05:40:16.000226 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:16.000305 kubelet[2704]: E0710 05:40:16.000270 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:16.000482 kubelet[2704]: E0710 05:40:16.000443 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:16.000482 kubelet[2704]: W0710 05:40:16.000455 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:16.000546 kubelet[2704]: E0710 05:40:16.000524 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:16.000745 kubelet[2704]: E0710 05:40:16.000704 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:16.000745 kubelet[2704]: W0710 05:40:16.000717 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:16.000843 kubelet[2704]: E0710 05:40:16.000771 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:16.000965 kubelet[2704]: E0710 05:40:16.000944 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:16.000965 kubelet[2704]: W0710 05:40:16.000953 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:16.001054 kubelet[2704]: E0710 05:40:16.001032 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:16.001270 kubelet[2704]: E0710 05:40:16.001231 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:16.001270 kubelet[2704]: W0710 05:40:16.001247 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:16.001422 kubelet[2704]: E0710 05:40:16.001326 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:16.001551 kubelet[2704]: E0710 05:40:16.001529 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:16.001551 kubelet[2704]: W0710 05:40:16.001544 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:16.001721 kubelet[2704]: E0710 05:40:16.001557 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:16.001765 kubelet[2704]: E0710 05:40:16.001740 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:16.001765 kubelet[2704]: W0710 05:40:16.001758 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:16.001820 kubelet[2704]: E0710 05:40:16.001779 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:16.002020 kubelet[2704]: E0710 05:40:16.001991 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:16.002020 kubelet[2704]: W0710 05:40:16.002007 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:16.002020 kubelet[2704]: E0710 05:40:16.002015 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:16.011132 kubelet[2704]: E0710 05:40:16.011062 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:16.011132 kubelet[2704]: W0710 05:40:16.011079 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:16.011132 kubelet[2704]: E0710 05:40:16.011089 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:16.027394 containerd[1580]: time="2025-07-10T05:40:16.027304568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p7gwd,Uid:0ef19eee-f221-4806-b325-4ea164b00f6d,Namespace:calico-system,Attempt:0,} returns sandbox id \"521a7946be8b7066373cfa2b90f485c1ad2b1764681d47fca5412834790c7940\"" Jul 10 05:40:17.279096 kubelet[2704]: E0710 05:40:17.278975 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rq8bf" podUID="04ca591d-8202-4053-a596-a5753a64e21d" Jul 10 05:40:17.641778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011891364.mount: Deactivated successfully. 
Jul 10 05:40:18.905208 containerd[1580]: time="2025-07-10T05:40:18.905149936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 05:40:18.905935 containerd[1580]: time="2025-07-10T05:40:18.905906535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 10 05:40:18.907223 containerd[1580]: time="2025-07-10T05:40:18.907190060Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 05:40:18.909479 containerd[1580]: time="2025-07-10T05:40:18.909413691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 05:40:18.909977 containerd[1580]: time="2025-07-10T05:40:18.909925699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.257229626s"
Jul 10 05:40:18.910010 containerd[1580]: time="2025-07-10T05:40:18.909975171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 10 05:40:18.913134 containerd[1580]: time="2025-07-10T05:40:18.913103180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 10 05:40:18.923333 containerd[1580]: time="2025-07-10T05:40:18.923280817Z" level=info msg="CreateContainer within sandbox \"1d372654d2f61867db5eba6f33130fc2b4c27feeabf21cf7310e1489e51f7dc0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 10 05:40:18.932674 containerd[1580]: time="2025-07-10T05:40:18.932626352Z" level=info msg="Container 2e47f624024dadfd792c51b4d411fa2ca6cb32bdb3326e3d56f4da1e259d11df: CDI devices from CRI Config.CDIDevices: []"
Jul 10 05:40:19.029353 containerd[1580]: time="2025-07-10T05:40:19.029248637Z" level=info msg="CreateContainer within sandbox \"1d372654d2f61867db5eba6f33130fc2b4c27feeabf21cf7310e1489e51f7dc0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2e47f624024dadfd792c51b4d411fa2ca6cb32bdb3326e3d56f4da1e259d11df\""
Jul 10 05:40:19.033547 containerd[1580]: time="2025-07-10T05:40:19.031202786Z" level=info msg="StartContainer for \"2e47f624024dadfd792c51b4d411fa2ca6cb32bdb3326e3d56f4da1e259d11df\""
Jul 10 05:40:19.035635 containerd[1580]: time="2025-07-10T05:40:19.035545206Z" level=info msg="connecting to shim 2e47f624024dadfd792c51b4d411fa2ca6cb32bdb3326e3d56f4da1e259d11df" address="unix:///run/containerd/s/1786528203ec3eed00f33d37af53ea0373e1098dad4b748f0ada0ca03b1b716c" protocol=ttrpc version=3
Jul 10 05:40:19.071647 systemd[1]: Started cri-containerd-2e47f624024dadfd792c51b4d411fa2ca6cb32bdb3326e3d56f4da1e259d11df.scope - libcontainer container 2e47f624024dadfd792c51b4d411fa2ca6cb32bdb3326e3d56f4da1e259d11df.
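The pull records carry enough information to estimate transfer rates: the typha image above reports 35,233,364 bytes read in 3.257229626s, and the pod2daemon-flexvol image pulled later in this log reports 4,446,956 bytes in 2.297644806s, roughly 10.3 MiB/s and 1.8 MiB/s respectively. A trivial sketch of that arithmetic, using only the figures logged here:

package main

import "fmt"

// Rough transfer-rate estimate from the containerd "bytes read=..." and
// "Pulled image ... in ..." entries in this log.
func main() {
	pulls := []struct {
		image   string
		bytes   float64 // bytes read, as logged
		seconds float64 // pull duration, as logged
	}{
		{"ghcr.io/flatcar/calico/typha:v3.30.2", 35233364, 3.257229626},
		{"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2", 4446956, 2.297644806},
	}
	for _, p := range pulls {
		fmt.Printf("%s: %.1f MiB/s\n", p.image, p.bytes/p.seconds/(1024*1024))
	}
}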
Jul 10 05:40:19.167792 containerd[1580]: time="2025-07-10T05:40:19.167635292Z" level=info msg="StartContainer for \"2e47f624024dadfd792c51b4d411fa2ca6cb32bdb3326e3d56f4da1e259d11df\" returns successfully"
Jul 10 05:40:19.279158 kubelet[2704]: E0710 05:40:19.279073 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rq8bf" podUID="04ca591d-8202-4053-a596-a5753a64e21d"
Jul 10 05:40:19.402814 kubelet[2704]: E0710 05:40:19.402757 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 05:40:19.402814 kubelet[2704]: W0710 05:40:19.402792 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 05:40:19.402814 kubelet[2704]: E0710 05:40:19.402822 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jul 10 05:40:19.403732 kubelet[2704]: E0710 05:40:19.403704 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.403762 kubelet[2704]: W0710 05:40:19.403742 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.403762 kubelet[2704]: E0710 05:40:19.403753 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.403937 kubelet[2704]: E0710 05:40:19.403919 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.403937 kubelet[2704]: W0710 05:40:19.403929 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.403937 kubelet[2704]: E0710 05:40:19.403938 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.404102 kubelet[2704]: E0710 05:40:19.404085 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.404102 kubelet[2704]: W0710 05:40:19.404095 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.404102 kubelet[2704]: E0710 05:40:19.404102 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.404283 kubelet[2704]: E0710 05:40:19.404266 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.404283 kubelet[2704]: W0710 05:40:19.404275 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.404283 kubelet[2704]: E0710 05:40:19.404283 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.404506 kubelet[2704]: E0710 05:40:19.404459 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.404506 kubelet[2704]: W0710 05:40:19.404489 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.404506 kubelet[2704]: E0710 05:40:19.404496 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:19.404670 kubelet[2704]: E0710 05:40:19.404653 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.404670 kubelet[2704]: W0710 05:40:19.404663 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.404670 kubelet[2704]: E0710 05:40:19.404671 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.404863 kubelet[2704]: E0710 05:40:19.404845 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.404863 kubelet[2704]: W0710 05:40:19.404855 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.404915 kubelet[2704]: E0710 05:40:19.404865 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.405051 kubelet[2704]: E0710 05:40:19.405033 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.405051 kubelet[2704]: W0710 05:40:19.405043 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.405103 kubelet[2704]: E0710 05:40:19.405051 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.405229 kubelet[2704]: E0710 05:40:19.405211 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.405229 kubelet[2704]: W0710 05:40:19.405221 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.405229 kubelet[2704]: E0710 05:40:19.405229 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.405393 kubelet[2704]: E0710 05:40:19.405376 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.405393 kubelet[2704]: W0710 05:40:19.405386 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.405393 kubelet[2704]: E0710 05:40:19.405392 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:19.405634 kubelet[2704]: E0710 05:40:19.405609 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.405634 kubelet[2704]: W0710 05:40:19.405625 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.405634 kubelet[2704]: E0710 05:40:19.405633 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.419297 kubelet[2704]: E0710 05:40:19.419162 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.419297 kubelet[2704]: W0710 05:40:19.419195 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.419297 kubelet[2704]: E0710 05:40:19.419220 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.419672 kubelet[2704]: E0710 05:40:19.419498 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.419672 kubelet[2704]: W0710 05:40:19.419514 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.419672 kubelet[2704]: E0710 05:40:19.419525 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.419943 kubelet[2704]: E0710 05:40:19.419927 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.420007 kubelet[2704]: W0710 05:40:19.419993 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.420062 kubelet[2704]: E0710 05:40:19.420050 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.420361 kubelet[2704]: E0710 05:40:19.420337 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.420361 kubelet[2704]: W0710 05:40:19.420358 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.420439 kubelet[2704]: E0710 05:40:19.420382 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:19.420746 kubelet[2704]: E0710 05:40:19.420704 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.420746 kubelet[2704]: W0710 05:40:19.420742 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.420874 kubelet[2704]: E0710 05:40:19.420854 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.421001 kubelet[2704]: E0710 05:40:19.420981 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.421001 kubelet[2704]: W0710 05:40:19.420997 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.421303 kubelet[2704]: E0710 05:40:19.421065 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.421303 kubelet[2704]: E0710 05:40:19.421229 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.421303 kubelet[2704]: W0710 05:40:19.421240 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.421303 kubelet[2704]: E0710 05:40:19.421253 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.421445 kubelet[2704]: E0710 05:40:19.421429 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.421445 kubelet[2704]: W0710 05:40:19.421439 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.421525 kubelet[2704]: E0710 05:40:19.421454 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.421688 kubelet[2704]: E0710 05:40:19.421663 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.421688 kubelet[2704]: W0710 05:40:19.421675 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.421688 kubelet[2704]: E0710 05:40:19.421688 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:19.423499 kubelet[2704]: E0710 05:40:19.422301 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.423499 kubelet[2704]: W0710 05:40:19.422324 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.423499 kubelet[2704]: E0710 05:40:19.422342 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.423499 kubelet[2704]: E0710 05:40:19.422557 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.423499 kubelet[2704]: W0710 05:40:19.422566 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.423499 kubelet[2704]: E0710 05:40:19.422613 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.423659 kubelet[2704]: E0710 05:40:19.423619 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.423659 kubelet[2704]: W0710 05:40:19.423629 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.423659 kubelet[2704]: E0710 05:40:19.423639 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.425645 kubelet[2704]: E0710 05:40:19.425622 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.425645 kubelet[2704]: W0710 05:40:19.425640 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.425801 kubelet[2704]: E0710 05:40:19.425762 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.426206 kubelet[2704]: E0710 05:40:19.426186 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.426206 kubelet[2704]: W0710 05:40:19.426202 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.426272 kubelet[2704]: E0710 05:40:19.426222 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:19.426430 kubelet[2704]: E0710 05:40:19.426415 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.426430 kubelet[2704]: W0710 05:40:19.426426 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.426513 kubelet[2704]: E0710 05:40:19.426435 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.426947 kubelet[2704]: E0710 05:40:19.426917 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.426947 kubelet[2704]: W0710 05:40:19.426933 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.426947 kubelet[2704]: E0710 05:40:19.426944 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.427196 kubelet[2704]: E0710 05:40:19.427166 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.427196 kubelet[2704]: W0710 05:40:19.427180 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.427196 kubelet[2704]: E0710 05:40:19.427188 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:19.427544 kubelet[2704]: E0710 05:40:19.427527 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:19.427544 kubelet[2704]: W0710 05:40:19.427538 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:19.427624 kubelet[2704]: E0710 05:40:19.427547 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.371201 kubelet[2704]: I0710 05:40:20.371143 2704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 05:40:20.412064 kubelet[2704]: E0710 05:40:20.412017 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.412064 kubelet[2704]: W0710 05:40:20.412046 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.412211 kubelet[2704]: E0710 05:40:20.412073 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:20.412318 kubelet[2704]: E0710 05:40:20.412302 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.412318 kubelet[2704]: W0710 05:40:20.412314 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.412377 kubelet[2704]: E0710 05:40:20.412323 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.412525 kubelet[2704]: E0710 05:40:20.412508 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.412525 kubelet[2704]: W0710 05:40:20.412520 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.412586 kubelet[2704]: E0710 05:40:20.412528 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.412775 kubelet[2704]: E0710 05:40:20.412755 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.412775 kubelet[2704]: W0710 05:40:20.412767 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.412775 kubelet[2704]: E0710 05:40:20.412776 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.412952 kubelet[2704]: E0710 05:40:20.412937 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.412952 kubelet[2704]: W0710 05:40:20.412948 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.413008 kubelet[2704]: E0710 05:40:20.412955 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.413114 kubelet[2704]: E0710 05:40:20.413098 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.413114 kubelet[2704]: W0710 05:40:20.413108 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.413179 kubelet[2704]: E0710 05:40:20.413117 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:20.413292 kubelet[2704]: E0710 05:40:20.413273 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.413292 kubelet[2704]: W0710 05:40:20.413285 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.413292 kubelet[2704]: E0710 05:40:20.413292 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.413484 kubelet[2704]: E0710 05:40:20.413435 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.413484 kubelet[2704]: W0710 05:40:20.413447 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.413484 kubelet[2704]: E0710 05:40:20.413457 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.413786 kubelet[2704]: E0710 05:40:20.413677 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.413786 kubelet[2704]: W0710 05:40:20.413685 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.413786 kubelet[2704]: E0710 05:40:20.413693 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.413882 kubelet[2704]: E0710 05:40:20.413872 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.413882 kubelet[2704]: W0710 05:40:20.413880 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.413958 kubelet[2704]: E0710 05:40:20.413888 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.414095 kubelet[2704]: E0710 05:40:20.414075 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.414095 kubelet[2704]: W0710 05:40:20.414084 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.414095 kubelet[2704]: E0710 05:40:20.414092 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:20.414282 kubelet[2704]: E0710 05:40:20.414266 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.414282 kubelet[2704]: W0710 05:40:20.414275 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.414282 kubelet[2704]: E0710 05:40:20.414285 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.414543 kubelet[2704]: E0710 05:40:20.414457 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.414543 kubelet[2704]: W0710 05:40:20.414486 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.414543 kubelet[2704]: E0710 05:40:20.414494 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.414835 kubelet[2704]: E0710 05:40:20.414797 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.414835 kubelet[2704]: W0710 05:40:20.414807 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.414835 kubelet[2704]: E0710 05:40:20.414816 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.415052 kubelet[2704]: E0710 05:40:20.415020 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.415052 kubelet[2704]: W0710 05:40:20.415032 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.415052 kubelet[2704]: E0710 05:40:20.415041 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.427751 kubelet[2704]: E0710 05:40:20.427703 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.427751 kubelet[2704]: W0710 05:40:20.427747 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.427876 kubelet[2704]: E0710 05:40:20.427780 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:20.428717 kubelet[2704]: E0710 05:40:20.428688 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.428717 kubelet[2704]: W0710 05:40:20.428702 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.429037 kubelet[2704]: E0710 05:40:20.428735 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.429295 kubelet[2704]: E0710 05:40:20.429229 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.429295 kubelet[2704]: W0710 05:40:20.429266 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.429390 kubelet[2704]: E0710 05:40:20.429297 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.429676 kubelet[2704]: E0710 05:40:20.429653 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.429676 kubelet[2704]: W0710 05:40:20.429672 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.429792 kubelet[2704]: E0710 05:40:20.429692 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.429991 kubelet[2704]: E0710 05:40:20.429962 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.429991 kubelet[2704]: W0710 05:40:20.429978 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.430054 kubelet[2704]: E0710 05:40:20.429992 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.430254 kubelet[2704]: E0710 05:40:20.430234 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.430254 kubelet[2704]: W0710 05:40:20.430251 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.430429 kubelet[2704]: E0710 05:40:20.430387 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:20.430698 kubelet[2704]: E0710 05:40:20.430549 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.430698 kubelet[2704]: W0710 05:40:20.430561 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.430698 kubelet[2704]: E0710 05:40:20.430604 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.431272 kubelet[2704]: E0710 05:40:20.431088 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.431272 kubelet[2704]: W0710 05:40:20.431129 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.431272 kubelet[2704]: E0710 05:40:20.431155 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.431484 kubelet[2704]: E0710 05:40:20.431434 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.431484 kubelet[2704]: W0710 05:40:20.431450 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.431612 kubelet[2704]: E0710 05:40:20.431489 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.431772 kubelet[2704]: E0710 05:40:20.431739 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.431772 kubelet[2704]: W0710 05:40:20.431758 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.431833 kubelet[2704]: E0710 05:40:20.431778 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.432025 kubelet[2704]: E0710 05:40:20.432001 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.432025 kubelet[2704]: W0710 05:40:20.432019 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.432099 kubelet[2704]: E0710 05:40:20.432043 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:20.432307 kubelet[2704]: E0710 05:40:20.432284 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.432307 kubelet[2704]: W0710 05:40:20.432302 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.432375 kubelet[2704]: E0710 05:40:20.432321 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.432653 kubelet[2704]: E0710 05:40:20.432630 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.432653 kubelet[2704]: W0710 05:40:20.432646 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.432759 kubelet[2704]: E0710 05:40:20.432665 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.432913 kubelet[2704]: E0710 05:40:20.432887 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.432913 kubelet[2704]: W0710 05:40:20.432901 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.432913 kubelet[2704]: E0710 05:40:20.432918 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.433167 kubelet[2704]: E0710 05:40:20.433139 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.433167 kubelet[2704]: W0710 05:40:20.433158 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.433248 kubelet[2704]: E0710 05:40:20.433179 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.433449 kubelet[2704]: E0710 05:40:20.433429 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.433449 kubelet[2704]: W0710 05:40:20.433442 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.433561 kubelet[2704]: E0710 05:40:20.433457 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 05:40:20.433793 kubelet[2704]: E0710 05:40:20.433768 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.433793 kubelet[2704]: W0710 05:40:20.433782 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.433793 kubelet[2704]: E0710 05:40:20.433797 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:20.434005 kubelet[2704]: E0710 05:40:20.433982 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 05:40:20.434005 kubelet[2704]: W0710 05:40:20.433994 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 05:40:20.434005 kubelet[2704]: E0710 05:40:20.434002 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 05:40:21.206545 containerd[1580]: time="2025-07-10T05:40:21.206458033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:21.207254 containerd[1580]: time="2025-07-10T05:40:21.207232204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 10 05:40:21.208409 containerd[1580]: time="2025-07-10T05:40:21.208349071Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:21.210282 containerd[1580]: time="2025-07-10T05:40:21.210255898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:21.210884 containerd[1580]: time="2025-07-10T05:40:21.210859507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.297644806s" Jul 10 05:40:21.210945 containerd[1580]: time="2025-07-10T05:40:21.210887309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 10 05:40:21.213183 containerd[1580]: time="2025-07-10T05:40:21.213142084Z" level=info msg="CreateContainer within sandbox \"521a7946be8b7066373cfa2b90f485c1ad2b1764681d47fca5412834790c7940\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 10 05:40:21.223168 containerd[1580]: time="2025-07-10T05:40:21.223095022Z" level=info msg="Container 57d218b9e65e34d760694d4fc984251cf2dbae2d96309eb5bdccae6c91f0ac80: 
CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:21.233736 containerd[1580]: time="2025-07-10T05:40:21.233655445Z" level=info msg="CreateContainer within sandbox \"521a7946be8b7066373cfa2b90f485c1ad2b1764681d47fca5412834790c7940\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"57d218b9e65e34d760694d4fc984251cf2dbae2d96309eb5bdccae6c91f0ac80\"" Jul 10 05:40:21.234224 containerd[1580]: time="2025-07-10T05:40:21.234181127Z" level=info msg="StartContainer for \"57d218b9e65e34d760694d4fc984251cf2dbae2d96309eb5bdccae6c91f0ac80\"" Jul 10 05:40:21.235626 containerd[1580]: time="2025-07-10T05:40:21.235594273Z" level=info msg="connecting to shim 57d218b9e65e34d760694d4fc984251cf2dbae2d96309eb5bdccae6c91f0ac80" address="unix:///run/containerd/s/cc11e7b00039f4bd63011532dc882e868f122d7e95d98c051a4dbee4b6c0f2b7" protocol=ttrpc version=3 Jul 10 05:40:21.264762 systemd[1]: Started cri-containerd-57d218b9e65e34d760694d4fc984251cf2dbae2d96309eb5bdccae6c91f0ac80.scope - libcontainer container 57d218b9e65e34d760694d4fc984251cf2dbae2d96309eb5bdccae6c91f0ac80. Jul 10 05:40:21.279368 kubelet[2704]: E0710 05:40:21.279297 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rq8bf" podUID="04ca591d-8202-4053-a596-a5753a64e21d" Jul 10 05:40:21.325746 systemd[1]: cri-containerd-57d218b9e65e34d760694d4fc984251cf2dbae2d96309eb5bdccae6c91f0ac80.scope: Deactivated successfully. Jul 10 05:40:21.329071 containerd[1580]: time="2025-07-10T05:40:21.329031899Z" level=info msg="StartContainer for \"57d218b9e65e34d760694d4fc984251cf2dbae2d96309eb5bdccae6c91f0ac80\" returns successfully" Jul 10 05:40:21.330125 containerd[1580]: time="2025-07-10T05:40:21.330091248Z" level=info msg="received exit event container_id:\"57d218b9e65e34d760694d4fc984251cf2dbae2d96309eb5bdccae6c91f0ac80\" id:\"57d218b9e65e34d760694d4fc984251cf2dbae2d96309eb5bdccae6c91f0ac80\" pid:3487 exited_at:{seconds:1752126021 nanos:329864289}" Jul 10 05:40:21.331609 containerd[1580]: time="2025-07-10T05:40:21.331525413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57d218b9e65e34d760694d4fc984251cf2dbae2d96309eb5bdccae6c91f0ac80\" id:\"57d218b9e65e34d760694d4fc984251cf2dbae2d96309eb5bdccae6c91f0ac80\" pid:3487 exited_at:{seconds:1752126021 nanos:329864289}" Jul 10 05:40:21.354735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57d218b9e65e34d760694d4fc984251cf2dbae2d96309eb5bdccae6c91f0ac80-rootfs.mount: Deactivated successfully. 
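Editor's note: the repeated "unexpected end of JSON input" entries above come from the kubelet probing the FlexVolume plugin directory before Calico's pod2daemon-flexvol init container (pulled and started in the containerd messages that follow) has installed the nodeagent~uds/uds binary — the driver executable is missing, so its output is empty, and empty output cannot be parsed as JSON. The sketch below is illustrative only: it is not the kubelet's driver-call.go, and the driverStatus fields are an approximation of the FlexVolume status schema, but it reproduces the same failure mode.

```go
// flexprobe.go — minimal sketch (not kubelet code) of a FlexVolume "init"
// probe and why a missing driver binary surfaces as a JSON unmarshal error.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus approximates the JSON a FlexVolume driver prints on stdout;
// the field names here are illustrative, not the exact kubelet schema.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probeInit(driverPath string) (*driverStatus, error) {
	// The kubelet invokes the driver executable with the "init" command and
	// parses its stdout as JSON.
	out, execErr := exec.Command(driverPath, "init").CombinedOutput()

	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With the binary absent, out is empty and Unmarshal fails with
		// "unexpected end of JSON input", matching the log above.
		return nil, fmt.Errorf("failed to unmarshal output %q (exec error: %v): %w", out, execErr, err)
	}
	return &st, nil
}

func main() {
	// Path taken from the log; it only exists after the flexvol-driver init
	// container has copied the uds binary into place.
	_, err := probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}
```

Once the flexvol-driver container started above finishes copying the binary into that directory, the probe should return valid JSON and these warnings should stop.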
Jul 10 05:40:21.379500 containerd[1580]: time="2025-07-10T05:40:21.379427208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 10 05:40:21.394025 kubelet[2704]: I0710 05:40:21.392952 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-744ff4f8dc-dzw9l" podStartSLOduration=3.132477576 podStartE2EDuration="6.3929337s" podCreationTimestamp="2025-07-10 05:40:15 +0000 UTC" firstStartedPulling="2025-07-10 05:40:15.652177029 +0000 UTC m=+17.513992515" lastFinishedPulling="2025-07-10 05:40:18.912633153 +0000 UTC m=+20.774448639" observedRunningTime="2025-07-10 05:40:19.384901548 +0000 UTC m=+21.246717034" watchObservedRunningTime="2025-07-10 05:40:21.3929337 +0000 UTC m=+23.254749186" Jul 10 05:40:23.279168 kubelet[2704]: E0710 05:40:23.279092 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rq8bf" podUID="04ca591d-8202-4053-a596-a5753a64e21d" Jul 10 05:40:25.210796 containerd[1580]: time="2025-07-10T05:40:25.210737501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:25.211580 containerd[1580]: time="2025-07-10T05:40:25.211555832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 10 05:40:25.213055 containerd[1580]: time="2025-07-10T05:40:25.213009512Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:25.215484 containerd[1580]: time="2025-07-10T05:40:25.215425443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:25.216033 containerd[1580]: time="2025-07-10T05:40:25.215985258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.836178414s" Jul 10 05:40:25.216033 containerd[1580]: time="2025-07-10T05:40:25.216013631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 10 05:40:25.219195 containerd[1580]: time="2025-07-10T05:40:25.219164768Z" level=info msg="CreateContainer within sandbox \"521a7946be8b7066373cfa2b90f485c1ad2b1764681d47fca5412834790c7940\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 05:40:25.228269 containerd[1580]: time="2025-07-10T05:40:25.228217926Z" level=info msg="Container c6a6e7d0f1d2d8ae0f7a6d218cda95ce320aa30d60ba26b7bf35ead515d47669: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:25.237744 containerd[1580]: time="2025-07-10T05:40:25.237705070Z" level=info msg="CreateContainer within sandbox \"521a7946be8b7066373cfa2b90f485c1ad2b1764681d47fca5412834790c7940\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"c6a6e7d0f1d2d8ae0f7a6d218cda95ce320aa30d60ba26b7bf35ead515d47669\"" Jul 10 05:40:25.238279 containerd[1580]: time="2025-07-10T05:40:25.238248413Z" level=info msg="StartContainer for \"c6a6e7d0f1d2d8ae0f7a6d218cda95ce320aa30d60ba26b7bf35ead515d47669\"" Jul 10 05:40:25.239698 containerd[1580]: time="2025-07-10T05:40:25.239669391Z" level=info msg="connecting to shim c6a6e7d0f1d2d8ae0f7a6d218cda95ce320aa30d60ba26b7bf35ead515d47669" address="unix:///run/containerd/s/cc11e7b00039f4bd63011532dc882e868f122d7e95d98c051a4dbee4b6c0f2b7" protocol=ttrpc version=3 Jul 10 05:40:25.262644 systemd[1]: Started cri-containerd-c6a6e7d0f1d2d8ae0f7a6d218cda95ce320aa30d60ba26b7bf35ead515d47669.scope - libcontainer container c6a6e7d0f1d2d8ae0f7a6d218cda95ce320aa30d60ba26b7bf35ead515d47669. Jul 10 05:40:25.278699 kubelet[2704]: E0710 05:40:25.278621 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rq8bf" podUID="04ca591d-8202-4053-a596-a5753a64e21d" Jul 10 05:40:25.310555 containerd[1580]: time="2025-07-10T05:40:25.310504050Z" level=info msg="StartContainer for \"c6a6e7d0f1d2d8ae0f7a6d218cda95ce320aa30d60ba26b7bf35ead515d47669\" returns successfully" Jul 10 05:40:26.449556 systemd[1]: cri-containerd-c6a6e7d0f1d2d8ae0f7a6d218cda95ce320aa30d60ba26b7bf35ead515d47669.scope: Deactivated successfully. Jul 10 05:40:26.450144 systemd[1]: cri-containerd-c6a6e7d0f1d2d8ae0f7a6d218cda95ce320aa30d60ba26b7bf35ead515d47669.scope: Consumed 610ms CPU time, 177M memory peak, 2M read from disk, 171.2M written to disk. Jul 10 05:40:26.450588 containerd[1580]: time="2025-07-10T05:40:26.450451169Z" level=info msg="received exit event container_id:\"c6a6e7d0f1d2d8ae0f7a6d218cda95ce320aa30d60ba26b7bf35ead515d47669\" id:\"c6a6e7d0f1d2d8ae0f7a6d218cda95ce320aa30d60ba26b7bf35ead515d47669\" pid:3546 exited_at:{seconds:1752126026 nanos:450165150}" Jul 10 05:40:26.451362 containerd[1580]: time="2025-07-10T05:40:26.451316368Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6a6e7d0f1d2d8ae0f7a6d218cda95ce320aa30d60ba26b7bf35ead515d47669\" id:\"c6a6e7d0f1d2d8ae0f7a6d218cda95ce320aa30d60ba26b7bf35ead515d47669\" pid:3546 exited_at:{seconds:1752126026 nanos:450165150}" Jul 10 05:40:26.474455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6a6e7d0f1d2d8ae0f7a6d218cda95ce320aa30d60ba26b7bf35ead515d47669-rootfs.mount: Deactivated successfully. Jul 10 05:40:26.499437 kubelet[2704]: I0710 05:40:26.499386 2704 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 05:40:26.626891 systemd[1]: Created slice kubepods-burstable-pod7b0f0fc2_f807_4f80_a78f_2270dc4cd424.slice - libcontainer container kubepods-burstable-pod7b0f0fc2_f807_4f80_a78f_2270dc4cd424.slice. Jul 10 05:40:26.663576 systemd[1]: Created slice kubepods-besteffort-pod8f75337f_497e_4574_bcdc_82bd81b25109.slice - libcontainer container kubepods-besteffort-pod8f75337f_497e_4574_bcdc_82bd81b25109.slice. Jul 10 05:40:26.670589 systemd[1]: Created slice kubepods-burstable-pod14665b78_b91b_4ca7_a8cb_a324d20217da.slice - libcontainer container kubepods-burstable-pod14665b78_b91b_4ca7_a8cb_a324d20217da.slice. Jul 10 05:40:26.674338 systemd[1]: Created slice kubepods-besteffort-podff545102_cf9a_4e56_9635_710d478066a0.slice - libcontainer container kubepods-besteffort-podff545102_cf9a_4e56_9635_710d478066a0.slice. 
Jul 10 05:40:26.675708 kubelet[2704]: I0710 05:40:26.675665 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lhgr\" (UniqueName: \"kubernetes.io/projected/7b0f0fc2-f807-4f80-a78f-2270dc4cd424-kube-api-access-6lhgr\") pod \"coredns-668d6bf9bc-rw944\" (UID: \"7b0f0fc2-f807-4f80-a78f-2270dc4cd424\") " pod="kube-system/coredns-668d6bf9bc-rw944" Jul 10 05:40:26.675804 kubelet[2704]: I0710 05:40:26.675716 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f75337f-497e-4574-bcdc-82bd81b25109-tigera-ca-bundle\") pod \"calico-kube-controllers-bbf7667d-575nv\" (UID: \"8f75337f-497e-4574-bcdc-82bd81b25109\") " pod="calico-system/calico-kube-controllers-bbf7667d-575nv" Jul 10 05:40:26.675804 kubelet[2704]: I0710 05:40:26.675737 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b0f0fc2-f807-4f80-a78f-2270dc4cd424-config-volume\") pod \"coredns-668d6bf9bc-rw944\" (UID: \"7b0f0fc2-f807-4f80-a78f-2270dc4cd424\") " pod="kube-system/coredns-668d6bf9bc-rw944" Jul 10 05:40:26.675804 kubelet[2704]: I0710 05:40:26.675764 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9z2b\" (UniqueName: \"kubernetes.io/projected/8f75337f-497e-4574-bcdc-82bd81b25109-kube-api-access-z9z2b\") pod \"calico-kube-controllers-bbf7667d-575nv\" (UID: \"8f75337f-497e-4574-bcdc-82bd81b25109\") " pod="calico-system/calico-kube-controllers-bbf7667d-575nv" Jul 10 05:40:26.679589 systemd[1]: Created slice kubepods-besteffort-pod728fba3f_2ffd_498a_a284_400bc31893bf.slice - libcontainer container kubepods-besteffort-pod728fba3f_2ffd_498a_a284_400bc31893bf.slice. Jul 10 05:40:26.684415 systemd[1]: Created slice kubepods-besteffort-podbd991b85_9626_41bd_9812_76925fc7726a.slice - libcontainer container kubepods-besteffort-podbd991b85_9626_41bd_9812_76925fc7726a.slice. Jul 10 05:40:26.688740 systemd[1]: Created slice kubepods-besteffort-pod5231cdcb_289f_4d51_93da_87c1be371766.slice - libcontainer container kubepods-besteffort-pod5231cdcb_289f_4d51_93da_87c1be371766.slice. 
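Editor's note: for the pod-scoped volumes in these reconciler messages, the quoted UniqueName strings follow a recognizable pattern — "<plugin>/<pod-UID>-<volume-name>" — which is handy when grepping a log like this for one pod's volumes. A small parsing sketch; the format is inferred from the entries above and only holds for these per-pod volume plugins (projected, configmap, secret), not for every volume plugin.

```go
// uniquename.go — pull the pod UID and volume name out of the UniqueName
// strings in the kubelet reconciler messages above.
package main

import (
	"fmt"
	"strings"
)

const uuidLen = 36 // pod UIDs are fixed-length UUID strings

func parseUniqueName(u string) (plugin, podUID, volume string, err error) {
	i := strings.LastIndex(u, "/")
	if i < 0 || len(u) < i+1+uuidLen+1 {
		return "", "", "", fmt.Errorf("unexpected UniqueName %q", u)
	}
	plugin = u[:i]
	rest := u[i+1:]
	podUID = rest[:uuidLen]
	volume = strings.TrimPrefix(rest[uuidLen:], "-")
	return plugin, podUID, volume, nil
}

func main() {
	// Example taken verbatim from the log above.
	p, uid, vol, _ := parseUniqueName("kubernetes.io/projected/7b0f0fc2-f807-4f80-a78f-2270dc4cd424-kube-api-access-6lhgr")
	fmt.Println(p, uid, vol)
	// kubernetes.io/projected 7b0f0fc2-f807-4f80-a78f-2270dc4cd424 kube-api-access-6lhgr
}
```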
Jul 10 05:40:26.777533 kubelet[2704]: I0710 05:40:26.776655 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zmz2\" (UniqueName: \"kubernetes.io/projected/ff545102-cf9a-4e56-9635-710d478066a0-kube-api-access-8zmz2\") pod \"calico-apiserver-7cdc6c9c96-gzmm7\" (UID: \"ff545102-cf9a-4e56-9635-710d478066a0\") " pod="calico-apiserver/calico-apiserver-7cdc6c9c96-gzmm7" Jul 10 05:40:26.777533 kubelet[2704]: I0710 05:40:26.776737 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd991b85-9626-41bd-9812-76925fc7726a-config\") pod \"goldmane-768f4c5c69-4rvm9\" (UID: \"bd991b85-9626-41bd-9812-76925fc7726a\") " pod="calico-system/goldmane-768f4c5c69-4rvm9" Jul 10 05:40:26.777533 kubelet[2704]: I0710 05:40:26.776757 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zgpd\" (UniqueName: \"kubernetes.io/projected/bd991b85-9626-41bd-9812-76925fc7726a-kube-api-access-9zgpd\") pod \"goldmane-768f4c5c69-4rvm9\" (UID: \"bd991b85-9626-41bd-9812-76925fc7726a\") " pod="calico-system/goldmane-768f4c5c69-4rvm9" Jul 10 05:40:26.777533 kubelet[2704]: I0710 05:40:26.776777 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5231cdcb-289f-4d51-93da-87c1be371766-whisker-ca-bundle\") pod \"whisker-59989b4778-vds2z\" (UID: \"5231cdcb-289f-4d51-93da-87c1be371766\") " pod="calico-system/whisker-59989b4778-vds2z" Jul 10 05:40:26.777533 kubelet[2704]: I0710 05:40:26.776793 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26ddr\" (UniqueName: \"kubernetes.io/projected/728fba3f-2ffd-498a-a284-400bc31893bf-kube-api-access-26ddr\") pod \"calico-apiserver-7cdc6c9c96-8wm6d\" (UID: \"728fba3f-2ffd-498a-a284-400bc31893bf\") " pod="calico-apiserver/calico-apiserver-7cdc6c9c96-8wm6d" Jul 10 05:40:26.777804 kubelet[2704]: I0710 05:40:26.776821 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd991b85-9626-41bd-9812-76925fc7726a-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-4rvm9\" (UID: \"bd991b85-9626-41bd-9812-76925fc7726a\") " pod="calico-system/goldmane-768f4c5c69-4rvm9" Jul 10 05:40:26.777804 kubelet[2704]: I0710 05:40:26.776838 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/728fba3f-2ffd-498a-a284-400bc31893bf-calico-apiserver-certs\") pod \"calico-apiserver-7cdc6c9c96-8wm6d\" (UID: \"728fba3f-2ffd-498a-a284-400bc31893bf\") " pod="calico-apiserver/calico-apiserver-7cdc6c9c96-8wm6d" Jul 10 05:40:26.777804 kubelet[2704]: I0710 05:40:26.776852 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bd991b85-9626-41bd-9812-76925fc7726a-goldmane-key-pair\") pod \"goldmane-768f4c5c69-4rvm9\" (UID: \"bd991b85-9626-41bd-9812-76925fc7726a\") " pod="calico-system/goldmane-768f4c5c69-4rvm9" Jul 10 05:40:26.777804 kubelet[2704]: I0710 05:40:26.776868 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd97p\" (UniqueName: 
\"kubernetes.io/projected/14665b78-b91b-4ca7-a8cb-a324d20217da-kube-api-access-pd97p\") pod \"coredns-668d6bf9bc-r2mz9\" (UID: \"14665b78-b91b-4ca7-a8cb-a324d20217da\") " pod="kube-system/coredns-668d6bf9bc-r2mz9" Jul 10 05:40:26.777804 kubelet[2704]: I0710 05:40:26.776885 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94lj2\" (UniqueName: \"kubernetes.io/projected/5231cdcb-289f-4d51-93da-87c1be371766-kube-api-access-94lj2\") pod \"whisker-59989b4778-vds2z\" (UID: \"5231cdcb-289f-4d51-93da-87c1be371766\") " pod="calico-system/whisker-59989b4778-vds2z" Jul 10 05:40:26.777928 kubelet[2704]: I0710 05:40:26.776916 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14665b78-b91b-4ca7-a8cb-a324d20217da-config-volume\") pod \"coredns-668d6bf9bc-r2mz9\" (UID: \"14665b78-b91b-4ca7-a8cb-a324d20217da\") " pod="kube-system/coredns-668d6bf9bc-r2mz9" Jul 10 05:40:26.777928 kubelet[2704]: I0710 05:40:26.776948 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ff545102-cf9a-4e56-9635-710d478066a0-calico-apiserver-certs\") pod \"calico-apiserver-7cdc6c9c96-gzmm7\" (UID: \"ff545102-cf9a-4e56-9635-710d478066a0\") " pod="calico-apiserver/calico-apiserver-7cdc6c9c96-gzmm7" Jul 10 05:40:26.777928 kubelet[2704]: I0710 05:40:26.776961 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5231cdcb-289f-4d51-93da-87c1be371766-whisker-backend-key-pair\") pod \"whisker-59989b4778-vds2z\" (UID: \"5231cdcb-289f-4d51-93da-87c1be371766\") " pod="calico-system/whisker-59989b4778-vds2z" Jul 10 05:40:26.930973 containerd[1580]: time="2025-07-10T05:40:26.930917439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rw944,Uid:7b0f0fc2-f807-4f80-a78f-2270dc4cd424,Namespace:kube-system,Attempt:0,}" Jul 10 05:40:26.967673 containerd[1580]: time="2025-07-10T05:40:26.967492159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bbf7667d-575nv,Uid:8f75337f-497e-4574-bcdc-82bd81b25109,Namespace:calico-system,Attempt:0,}" Jul 10 05:40:26.973159 containerd[1580]: time="2025-07-10T05:40:26.973119518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r2mz9,Uid:14665b78-b91b-4ca7-a8cb-a324d20217da,Namespace:kube-system,Attempt:0,}" Jul 10 05:40:26.979830 containerd[1580]: time="2025-07-10T05:40:26.979763794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdc6c9c96-gzmm7,Uid:ff545102-cf9a-4e56-9635-710d478066a0,Namespace:calico-apiserver,Attempt:0,}" Jul 10 05:40:26.982486 containerd[1580]: time="2025-07-10T05:40:26.982432761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdc6c9c96-8wm6d,Uid:728fba3f-2ffd-498a-a284-400bc31893bf,Namespace:calico-apiserver,Attempt:0,}" Jul 10 05:40:26.987367 containerd[1580]: time="2025-07-10T05:40:26.987319265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4rvm9,Uid:bd991b85-9626-41bd-9812-76925fc7726a,Namespace:calico-system,Attempt:0,}" Jul 10 05:40:26.991295 containerd[1580]: time="2025-07-10T05:40:26.991257893Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-59989b4778-vds2z,Uid:5231cdcb-289f-4d51-93da-87c1be371766,Namespace:calico-system,Attempt:0,}" Jul 10 05:40:27.112195 containerd[1580]: time="2025-07-10T05:40:27.111974963Z" level=error msg="Failed to destroy network for sandbox \"05b976aa638cb8a4ff25bd09c6ceda4aafbee4c97d6d407399f39805a92241fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.114581 containerd[1580]: time="2025-07-10T05:40:27.114515406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rw944,Uid:7b0f0fc2-f807-4f80-a78f-2270dc4cd424,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"05b976aa638cb8a4ff25bd09c6ceda4aafbee4c97d6d407399f39805a92241fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.120179 containerd[1580]: time="2025-07-10T05:40:27.119721148Z" level=error msg="Failed to destroy network for sandbox \"a6184e436c9e94ff22aae6ae95b15b4db75ea92de2d09bac5642a6cc91bcca49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.124200 kubelet[2704]: E0710 05:40:27.124140 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05b976aa638cb8a4ff25bd09c6ceda4aafbee4c97d6d407399f39805a92241fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.124294 kubelet[2704]: E0710 05:40:27.124239 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05b976aa638cb8a4ff25bd09c6ceda4aafbee4c97d6d407399f39805a92241fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rw944" Jul 10 05:40:27.124294 kubelet[2704]: E0710 05:40:27.124265 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05b976aa638cb8a4ff25bd09c6ceda4aafbee4c97d6d407399f39805a92241fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rw944" Jul 10 05:40:27.124355 kubelet[2704]: E0710 05:40:27.124315 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rw944_kube-system(7b0f0fc2-f807-4f80-a78f-2270dc4cd424)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rw944_kube-system(7b0f0fc2-f807-4f80-a78f-2270dc4cd424)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05b976aa638cb8a4ff25bd09c6ceda4aafbee4c97d6d407399f39805a92241fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rw944" podUID="7b0f0fc2-f807-4f80-a78f-2270dc4cd424" Jul 10 05:40:27.127755 containerd[1580]: time="2025-07-10T05:40:27.127714501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdc6c9c96-gzmm7,Uid:ff545102-cf9a-4e56-9635-710d478066a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6184e436c9e94ff22aae6ae95b15b4db75ea92de2d09bac5642a6cc91bcca49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.127905 kubelet[2704]: E0710 05:40:27.127880 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6184e436c9e94ff22aae6ae95b15b4db75ea92de2d09bac5642a6cc91bcca49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.127941 kubelet[2704]: E0710 05:40:27.127915 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6184e436c9e94ff22aae6ae95b15b4db75ea92de2d09bac5642a6cc91bcca49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cdc6c9c96-gzmm7" Jul 10 05:40:27.127941 kubelet[2704]: E0710 05:40:27.127935 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6184e436c9e94ff22aae6ae95b15b4db75ea92de2d09bac5642a6cc91bcca49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cdc6c9c96-gzmm7" Jul 10 05:40:27.128175 kubelet[2704]: E0710 05:40:27.127971 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cdc6c9c96-gzmm7_calico-apiserver(ff545102-cf9a-4e56-9635-710d478066a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cdc6c9c96-gzmm7_calico-apiserver(ff545102-cf9a-4e56-9635-710d478066a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6184e436c9e94ff22aae6ae95b15b4db75ea92de2d09bac5642a6cc91bcca49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cdc6c9c96-gzmm7" podUID="ff545102-cf9a-4e56-9635-710d478066a0" Jul 10 05:40:27.129096 containerd[1580]: time="2025-07-10T05:40:27.129062148Z" level=error msg="Failed to destroy network for sandbox \"5ab2a93115d966f7bae6c7a7f515561d1514f3907e154fae81b0adc0b9e63f3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.130391 containerd[1580]: time="2025-07-10T05:40:27.130358018Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-bbf7667d-575nv,Uid:8f75337f-497e-4574-bcdc-82bd81b25109,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ab2a93115d966f7bae6c7a7f515561d1514f3907e154fae81b0adc0b9e63f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.130729 kubelet[2704]: E0710 05:40:27.130632 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ab2a93115d966f7bae6c7a7f515561d1514f3907e154fae81b0adc0b9e63f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.130804 kubelet[2704]: E0710 05:40:27.130764 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ab2a93115d966f7bae6c7a7f515561d1514f3907e154fae81b0adc0b9e63f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bbf7667d-575nv" Jul 10 05:40:27.130832 kubelet[2704]: E0710 05:40:27.130784 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ab2a93115d966f7bae6c7a7f515561d1514f3907e154fae81b0adc0b9e63f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bbf7667d-575nv" Jul 10 05:40:27.130892 kubelet[2704]: E0710 05:40:27.130850 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bbf7667d-575nv_calico-system(8f75337f-497e-4574-bcdc-82bd81b25109)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bbf7667d-575nv_calico-system(8f75337f-497e-4574-bcdc-82bd81b25109)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ab2a93115d966f7bae6c7a7f515561d1514f3907e154fae81b0adc0b9e63f3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bbf7667d-575nv" podUID="8f75337f-497e-4574-bcdc-82bd81b25109" Jul 10 05:40:27.133925 containerd[1580]: time="2025-07-10T05:40:27.133866465Z" level=error msg="Failed to destroy network for sandbox \"5cdb692c6b7ef4c8042e07f06a5362aea67ad97e2f8d70bf6f1b90f9b029951a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.135392 containerd[1580]: time="2025-07-10T05:40:27.135345289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdc6c9c96-8wm6d,Uid:728fba3f-2ffd-498a-a284-400bc31893bf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5cdb692c6b7ef4c8042e07f06a5362aea67ad97e2f8d70bf6f1b90f9b029951a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.135734 kubelet[2704]: E0710 05:40:27.135694 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cdb692c6b7ef4c8042e07f06a5362aea67ad97e2f8d70bf6f1b90f9b029951a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.135785 kubelet[2704]: E0710 05:40:27.135759 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cdb692c6b7ef4c8042e07f06a5362aea67ad97e2f8d70bf6f1b90f9b029951a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cdc6c9c96-8wm6d" Jul 10 05:40:27.135815 kubelet[2704]: E0710 05:40:27.135788 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cdb692c6b7ef4c8042e07f06a5362aea67ad97e2f8d70bf6f1b90f9b029951a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cdc6c9c96-8wm6d" Jul 10 05:40:27.135863 kubelet[2704]: E0710 05:40:27.135834 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cdc6c9c96-8wm6d_calico-apiserver(728fba3f-2ffd-498a-a284-400bc31893bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cdc6c9c96-8wm6d_calico-apiserver(728fba3f-2ffd-498a-a284-400bc31893bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cdb692c6b7ef4c8042e07f06a5362aea67ad97e2f8d70bf6f1b90f9b029951a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cdc6c9c96-8wm6d" podUID="728fba3f-2ffd-498a-a284-400bc31893bf" Jul 10 05:40:27.138603 containerd[1580]: time="2025-07-10T05:40:27.138542069Z" level=error msg="Failed to destroy network for sandbox \"88a26334259f08df82a2d4327aac51991a07111ee7cef5bab0a333668b94fb9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.141140 containerd[1580]: time="2025-07-10T05:40:27.141104364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r2mz9,Uid:14665b78-b91b-4ca7-a8cb-a324d20217da,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"88a26334259f08df82a2d4327aac51991a07111ee7cef5bab0a333668b94fb9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.141538 kubelet[2704]: E0710 
05:40:27.141436 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88a26334259f08df82a2d4327aac51991a07111ee7cef5bab0a333668b94fb9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.141538 kubelet[2704]: E0710 05:40:27.141527 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88a26334259f08df82a2d4327aac51991a07111ee7cef5bab0a333668b94fb9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-r2mz9" Jul 10 05:40:27.141694 kubelet[2704]: E0710 05:40:27.141548 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88a26334259f08df82a2d4327aac51991a07111ee7cef5bab0a333668b94fb9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-r2mz9" Jul 10 05:40:27.141694 kubelet[2704]: E0710 05:40:27.141635 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-r2mz9_kube-system(14665b78-b91b-4ca7-a8cb-a324d20217da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-r2mz9_kube-system(14665b78-b91b-4ca7-a8cb-a324d20217da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88a26334259f08df82a2d4327aac51991a07111ee7cef5bab0a333668b94fb9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-r2mz9" podUID="14665b78-b91b-4ca7-a8cb-a324d20217da" Jul 10 05:40:27.146082 containerd[1580]: time="2025-07-10T05:40:27.146044056Z" level=error msg="Failed to destroy network for sandbox \"e81d7cf1bad317988a6e8980726472989ae51e2cf7af9e8eec76bbec585c5c96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.147266 containerd[1580]: time="2025-07-10T05:40:27.147224599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4rvm9,Uid:bd991b85-9626-41bd-9812-76925fc7726a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e81d7cf1bad317988a6e8980726472989ae51e2cf7af9e8eec76bbec585c5c96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.147410 kubelet[2704]: E0710 05:40:27.147378 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e81d7cf1bad317988a6e8980726472989ae51e2cf7af9e8eec76bbec585c5c96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jul 10 05:40:27.147476 kubelet[2704]: E0710 05:40:27.147444 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e81d7cf1bad317988a6e8980726472989ae51e2cf7af9e8eec76bbec585c5c96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-4rvm9" Jul 10 05:40:27.147504 kubelet[2704]: E0710 05:40:27.147483 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e81d7cf1bad317988a6e8980726472989ae51e2cf7af9e8eec76bbec585c5c96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-4rvm9" Jul 10 05:40:27.147549 kubelet[2704]: E0710 05:40:27.147522 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-4rvm9_calico-system(bd991b85-9626-41bd-9812-76925fc7726a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-4rvm9_calico-system(bd991b85-9626-41bd-9812-76925fc7726a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e81d7cf1bad317988a6e8980726472989ae51e2cf7af9e8eec76bbec585c5c96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-4rvm9" podUID="bd991b85-9626-41bd-9812-76925fc7726a" Jul 10 05:40:27.151747 containerd[1580]: time="2025-07-10T05:40:27.151700066Z" level=error msg="Failed to destroy network for sandbox \"fcdcd229ce35aa199866a1b1f53e53b187412d080ac77fa36a6370bb78dd8807\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.153009 containerd[1580]: time="2025-07-10T05:40:27.152968044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59989b4778-vds2z,Uid:5231cdcb-289f-4d51-93da-87c1be371766,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcdcd229ce35aa199866a1b1f53e53b187412d080ac77fa36a6370bb78dd8807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.153231 kubelet[2704]: E0710 05:40:27.153202 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcdcd229ce35aa199866a1b1f53e53b187412d080ac77fa36a6370bb78dd8807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.153273 kubelet[2704]: E0710 05:40:27.153237 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcdcd229ce35aa199866a1b1f53e53b187412d080ac77fa36a6370bb78dd8807\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59989b4778-vds2z" Jul 10 05:40:27.153273 kubelet[2704]: E0710 05:40:27.153252 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcdcd229ce35aa199866a1b1f53e53b187412d080ac77fa36a6370bb78dd8807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59989b4778-vds2z" Jul 10 05:40:27.153328 kubelet[2704]: E0710 05:40:27.153283 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59989b4778-vds2z_calico-system(5231cdcb-289f-4d51-93da-87c1be371766)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59989b4778-vds2z_calico-system(5231cdcb-289f-4d51-93da-87c1be371766)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcdcd229ce35aa199866a1b1f53e53b187412d080ac77fa36a6370bb78dd8807\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59989b4778-vds2z" podUID="5231cdcb-289f-4d51-93da-87c1be371766" Jul 10 05:40:27.284703 systemd[1]: Created slice kubepods-besteffort-pod04ca591d_8202_4053_a596_a5753a64e21d.slice - libcontainer container kubepods-besteffort-pod04ca591d_8202_4053_a596_a5753a64e21d.slice. Jul 10 05:40:27.287251 containerd[1580]: time="2025-07-10T05:40:27.287216908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rq8bf,Uid:04ca591d-8202-4053-a596-a5753a64e21d,Namespace:calico-system,Attempt:0,}" Jul 10 05:40:27.339130 containerd[1580]: time="2025-07-10T05:40:27.339073286Z" level=error msg="Failed to destroy network for sandbox \"ed5ffb1e24d792704b5252758c9e1cc4662bf69475d05348b2ef1bbd290f7c11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.340608 containerd[1580]: time="2025-07-10T05:40:27.340527324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rq8bf,Uid:04ca591d-8202-4053-a596-a5753a64e21d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed5ffb1e24d792704b5252758c9e1cc4662bf69475d05348b2ef1bbd290f7c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.340825 kubelet[2704]: E0710 05:40:27.340776 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed5ffb1e24d792704b5252758c9e1cc4662bf69475d05348b2ef1bbd290f7c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 05:40:27.340875 kubelet[2704]: E0710 05:40:27.340836 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ed5ffb1e24d792704b5252758c9e1cc4662bf69475d05348b2ef1bbd290f7c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rq8bf" Jul 10 05:40:27.340875 kubelet[2704]: E0710 05:40:27.340858 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed5ffb1e24d792704b5252758c9e1cc4662bf69475d05348b2ef1bbd290f7c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rq8bf" Jul 10 05:40:27.340954 kubelet[2704]: E0710 05:40:27.340910 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rq8bf_calico-system(04ca591d-8202-4053-a596-a5753a64e21d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rq8bf_calico-system(04ca591d-8202-4053-a596-a5753a64e21d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed5ffb1e24d792704b5252758c9e1cc4662bf69475d05348b2ef1bbd290f7c11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rq8bf" podUID="04ca591d-8202-4053-a596-a5753a64e21d" Jul 10 05:40:27.401694 containerd[1580]: time="2025-07-10T05:40:27.400158982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 05:40:31.932291 kubelet[2704]: I0710 05:40:31.932223 2704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 05:40:33.836826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1869938180.mount: Deactivated successfully. 
Jul 10 05:40:34.477660 containerd[1580]: time="2025-07-10T05:40:34.477592682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:34.478459 containerd[1580]: time="2025-07-10T05:40:34.478373991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 10 05:40:34.479844 containerd[1580]: time="2025-07-10T05:40:34.479779904Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:34.492111 containerd[1580]: time="2025-07-10T05:40:34.492061476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:34.492617 containerd[1580]: time="2025-07-10T05:40:34.492573029Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.092359284s" Jul 10 05:40:34.492617 containerd[1580]: time="2025-07-10T05:40:34.492606932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 10 05:40:34.503547 containerd[1580]: time="2025-07-10T05:40:34.503506125Z" level=info msg="CreateContainer within sandbox \"521a7946be8b7066373cfa2b90f485c1ad2b1764681d47fca5412834790c7940\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 05:40:34.528324 containerd[1580]: time="2025-07-10T05:40:34.528271531Z" level=info msg="Container b9fb186aaa1b9e3af03e93c8a2cb2d36213fa5a05473d27db03e333982cea7bd: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:34.541193 containerd[1580]: time="2025-07-10T05:40:34.541141219Z" level=info msg="CreateContainer within sandbox \"521a7946be8b7066373cfa2b90f485c1ad2b1764681d47fca5412834790c7940\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b9fb186aaa1b9e3af03e93c8a2cb2d36213fa5a05473d27db03e333982cea7bd\"" Jul 10 05:40:34.541861 containerd[1580]: time="2025-07-10T05:40:34.541836396Z" level=info msg="StartContainer for \"b9fb186aaa1b9e3af03e93c8a2cb2d36213fa5a05473d27db03e333982cea7bd\"" Jul 10 05:40:34.543224 containerd[1580]: time="2025-07-10T05:40:34.543184301Z" level=info msg="connecting to shim b9fb186aaa1b9e3af03e93c8a2cb2d36213fa5a05473d27db03e333982cea7bd" address="unix:///run/containerd/s/cc11e7b00039f4bd63011532dc882e868f122d7e95d98c051a4dbee4b6c0f2b7" protocol=ttrpc version=3 Jul 10 05:40:34.571676 systemd[1]: Started cri-containerd-b9fb186aaa1b9e3af03e93c8a2cb2d36213fa5a05473d27db03e333982cea7bd.scope - libcontainer container b9fb186aaa1b9e3af03e93c8a2cb2d36213fa5a05473d27db03e333982cea7bd. Jul 10 05:40:34.624803 containerd[1580]: time="2025-07-10T05:40:34.624746567Z" level=info msg="StartContainer for \"b9fb186aaa1b9e3af03e93c8a2cb2d36213fa5a05473d27db03e333982cea7bd\" returns successfully" Jul 10 05:40:34.708502 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 05:40:34.709284 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Jul 10 05:40:34.929400 kubelet[2704]: I0710 05:40:34.929339 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5231cdcb-289f-4d51-93da-87c1be371766-whisker-ca-bundle\") pod \"5231cdcb-289f-4d51-93da-87c1be371766\" (UID: \"5231cdcb-289f-4d51-93da-87c1be371766\") " Jul 10 05:40:34.929400 kubelet[2704]: I0710 05:40:34.929378 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94lj2\" (UniqueName: \"kubernetes.io/projected/5231cdcb-289f-4d51-93da-87c1be371766-kube-api-access-94lj2\") pod \"5231cdcb-289f-4d51-93da-87c1be371766\" (UID: \"5231cdcb-289f-4d51-93da-87c1be371766\") " Jul 10 05:40:34.929400 kubelet[2704]: I0710 05:40:34.929407 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5231cdcb-289f-4d51-93da-87c1be371766-whisker-backend-key-pair\") pod \"5231cdcb-289f-4d51-93da-87c1be371766\" (UID: \"5231cdcb-289f-4d51-93da-87c1be371766\") " Jul 10 05:40:34.930521 kubelet[2704]: I0710 05:40:34.930310 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5231cdcb-289f-4d51-93da-87c1be371766-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5231cdcb-289f-4d51-93da-87c1be371766" (UID: "5231cdcb-289f-4d51-93da-87c1be371766"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 05:40:34.933220 kubelet[2704]: I0710 05:40:34.933182 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5231cdcb-289f-4d51-93da-87c1be371766-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5231cdcb-289f-4d51-93da-87c1be371766" (UID: "5231cdcb-289f-4d51-93da-87c1be371766"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 05:40:34.933735 kubelet[2704]: I0710 05:40:34.933693 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5231cdcb-289f-4d51-93da-87c1be371766-kube-api-access-94lj2" (OuterVolumeSpecName: "kube-api-access-94lj2") pod "5231cdcb-289f-4d51-93da-87c1be371766" (UID: "5231cdcb-289f-4d51-93da-87c1be371766"). InnerVolumeSpecName "kube-api-access-94lj2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 05:40:34.934949 systemd[1]: var-lib-kubelet-pods-5231cdcb\x2d289f\x2d4d51\x2d93da\x2d87c1be371766-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d94lj2.mount: Deactivated successfully. Jul 10 05:40:34.935094 systemd[1]: var-lib-kubelet-pods-5231cdcb\x2d289f\x2d4d51\x2d93da\x2d87c1be371766-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
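Editor's note: the long mount-unit names systemd deactivates here ("var-lib-kubelet-pods-5231cdcb\x2d…") are the kubelet's per-pod volume paths run through systemd's path escaping: the leading "/" is dropped, remaining "/" become "-", and most other non-alphanumeric bytes are hex-escaped. A sketch of that escaping, roughly following systemd.unit(5); the reconstructed path in main is an assumption based on the standard kubelet volume layout, and its escaped form matches the unit name in the log.

```go
// unitescape.go — sketch of systemd's path escaping, which explains the
// "\x2d" / "\x7e" sequences in the mount unit names above.
package main

import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/") // initial and trailing "/" are removed
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
			c == '_', c == ':', c == '.' && i != 0:
			b.WriteByte(c) // kept as-is
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // everything else is hex-escaped
		}
	}
	return b.String()
}

func main() {
	// Assumed kubelet layout for the projected token volume being torn down.
	path := "/var/lib/kubelet/pods/5231cdcb-289f-4d51-93da-87c1be371766/volumes/kubernetes.io~projected/kube-api-access-94lj2"
	fmt.Println(escapePath(path) + ".mount")
}
```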
Jul 10 05:40:35.030280 kubelet[2704]: I0710 05:40:35.030197 2704 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5231cdcb-289f-4d51-93da-87c1be371766-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 05:40:35.030280 kubelet[2704]: I0710 05:40:35.030234 2704 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94lj2\" (UniqueName: \"kubernetes.io/projected/5231cdcb-289f-4d51-93da-87c1be371766-kube-api-access-94lj2\") on node \"localhost\" DevicePath \"\"" Jul 10 05:40:35.030280 kubelet[2704]: I0710 05:40:35.030245 2704 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5231cdcb-289f-4d51-93da-87c1be371766-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 10 05:40:35.434698 systemd[1]: Removed slice kubepods-besteffort-pod5231cdcb_289f_4d51_93da_87c1be371766.slice - libcontainer container kubepods-besteffort-pod5231cdcb_289f_4d51_93da_87c1be371766.slice. Jul 10 05:40:35.445284 kubelet[2704]: I0710 05:40:35.445209 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p7gwd" podStartSLOduration=1.982276826 podStartE2EDuration="20.445178033s" podCreationTimestamp="2025-07-10 05:40:15 +0000 UTC" firstStartedPulling="2025-07-10 05:40:16.030371617 +0000 UTC m=+17.892187103" lastFinishedPulling="2025-07-10 05:40:34.493272824 +0000 UTC m=+36.355088310" observedRunningTime="2025-07-10 05:40:35.44330391 +0000 UTC m=+37.305119416" watchObservedRunningTime="2025-07-10 05:40:35.445178033 +0000 UTC m=+37.306993519" Jul 10 05:40:35.493797 systemd[1]: Created slice kubepods-besteffort-pod125c092a_5fc7_4d94_a878_0d116ddfe5d0.slice - libcontainer container kubepods-besteffort-pod125c092a_5fc7_4d94_a878_0d116ddfe5d0.slice. 
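The containerd entries above (PullImage, CreateContainer, StartContainer, "connecting to shim ... protocol=ttrpc") are the standard CRI-driven container lifecycle. As a rough sketch only — assuming the containerd 1.x Go client packages plus the socket path and image reference shown in the log, not code taken from this system — the same pull/create/start sequence looks like this:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd socket the kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Mirrors the "PullImage ... returns image reference" entry above.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Mirrors CreateContainer: a container record plus a writable snapshot and an OCI spec.
	container, err := client.NewContainer(ctx, "calico-node-example",
		containerd.WithNewSnapshot("calico-node-example-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Mirrors StartContainer: the task is what the shim actually runs.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}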
Jul 10 05:40:35.532938 kubelet[2704]: I0710 05:40:35.532862 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/125c092a-5fc7-4d94-a878-0d116ddfe5d0-whisker-backend-key-pair\") pod \"whisker-756b6ccd5d-xchpp\" (UID: \"125c092a-5fc7-4d94-a878-0d116ddfe5d0\") " pod="calico-system/whisker-756b6ccd5d-xchpp" Jul 10 05:40:35.532938 kubelet[2704]: I0710 05:40:35.532927 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rg2n\" (UniqueName: \"kubernetes.io/projected/125c092a-5fc7-4d94-a878-0d116ddfe5d0-kube-api-access-2rg2n\") pod \"whisker-756b6ccd5d-xchpp\" (UID: \"125c092a-5fc7-4d94-a878-0d116ddfe5d0\") " pod="calico-system/whisker-756b6ccd5d-xchpp" Jul 10 05:40:35.532938 kubelet[2704]: I0710 05:40:35.532949 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/125c092a-5fc7-4d94-a878-0d116ddfe5d0-whisker-ca-bundle\") pod \"whisker-756b6ccd5d-xchpp\" (UID: \"125c092a-5fc7-4d94-a878-0d116ddfe5d0\") " pod="calico-system/whisker-756b6ccd5d-xchpp" Jul 10 05:40:35.798156 containerd[1580]: time="2025-07-10T05:40:35.798096277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-756b6ccd5d-xchpp,Uid:125c092a-5fc7-4d94-a878-0d116ddfe5d0,Namespace:calico-system,Attempt:0,}" Jul 10 05:40:35.997428 systemd-networkd[1480]: cali013c703351d: Link UP Jul 10 05:40:35.999708 systemd-networkd[1480]: cali013c703351d: Gained carrier Jul 10 05:40:36.025796 containerd[1580]: 2025-07-10 05:40:35.864 [INFO][3934] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 05:40:36.025796 containerd[1580]: 2025-07-10 05:40:35.882 [INFO][3934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--756b6ccd5d--xchpp-eth0 whisker-756b6ccd5d- calico-system 125c092a-5fc7-4d94-a878-0d116ddfe5d0 869 0 2025-07-10 05:40:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:756b6ccd5d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-756b6ccd5d-xchpp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali013c703351d [] [] }} ContainerID="660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" Namespace="calico-system" Pod="whisker-756b6ccd5d-xchpp" WorkloadEndpoint="localhost-k8s-whisker--756b6ccd5d--xchpp-" Jul 10 05:40:36.025796 containerd[1580]: 2025-07-10 05:40:35.882 [INFO][3934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" Namespace="calico-system" Pod="whisker-756b6ccd5d-xchpp" WorkloadEndpoint="localhost-k8s-whisker--756b6ccd5d--xchpp-eth0" Jul 10 05:40:36.025796 containerd[1580]: 2025-07-10 05:40:35.943 [INFO][3948] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" HandleID="k8s-pod-network.660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" Workload="localhost-k8s-whisker--756b6ccd5d--xchpp-eth0" Jul 10 05:40:36.026039 containerd[1580]: 2025-07-10 05:40:35.943 [INFO][3948] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" HandleID="k8s-pod-network.660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" Workload="localhost-k8s-whisker--756b6ccd5d--xchpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001388b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-756b6ccd5d-xchpp", "timestamp":"2025-07-10 05:40:35.94329434 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 05:40:36.026039 containerd[1580]: 2025-07-10 05:40:35.943 [INFO][3948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 05:40:36.026039 containerd[1580]: 2025-07-10 05:40:35.944 [INFO][3948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 05:40:36.026039 containerd[1580]: 2025-07-10 05:40:35.944 [INFO][3948] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 05:40:36.026039 containerd[1580]: 2025-07-10 05:40:35.952 [INFO][3948] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" host="localhost" Jul 10 05:40:36.026039 containerd[1580]: 2025-07-10 05:40:35.957 [INFO][3948] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 05:40:36.026039 containerd[1580]: 2025-07-10 05:40:35.962 [INFO][3948] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 05:40:36.026039 containerd[1580]: 2025-07-10 05:40:35.963 [INFO][3948] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:36.026039 containerd[1580]: 2025-07-10 05:40:35.965 [INFO][3948] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:36.026039 containerd[1580]: 2025-07-10 05:40:35.965 [INFO][3948] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" host="localhost" Jul 10 05:40:36.026307 containerd[1580]: 2025-07-10 05:40:35.968 [INFO][3948] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c Jul 10 05:40:36.026307 containerd[1580]: 2025-07-10 05:40:35.973 [INFO][3948] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" host="localhost" Jul 10 05:40:36.026307 containerd[1580]: 2025-07-10 05:40:35.978 [INFO][3948] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" host="localhost" Jul 10 05:40:36.026307 containerd[1580]: 2025-07-10 05:40:35.978 [INFO][3948] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" host="localhost" Jul 10 05:40:36.026307 containerd[1580]: 2025-07-10 05:40:35.979 [INFO][3948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 05:40:36.026307 containerd[1580]: 2025-07-10 05:40:35.979 [INFO][3948] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" HandleID="k8s-pod-network.660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" Workload="localhost-k8s-whisker--756b6ccd5d--xchpp-eth0" Jul 10 05:40:36.026458 containerd[1580]: 2025-07-10 05:40:35.986 [INFO][3934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" Namespace="calico-system" Pod="whisker-756b6ccd5d-xchpp" WorkloadEndpoint="localhost-k8s-whisker--756b6ccd5d--xchpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--756b6ccd5d--xchpp-eth0", GenerateName:"whisker-756b6ccd5d-", Namespace:"calico-system", SelfLink:"", UID:"125c092a-5fc7-4d94-a878-0d116ddfe5d0", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"756b6ccd5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-756b6ccd5d-xchpp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali013c703351d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:36.026458 containerd[1580]: 2025-07-10 05:40:35.986 [INFO][3934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" Namespace="calico-system" Pod="whisker-756b6ccd5d-xchpp" WorkloadEndpoint="localhost-k8s-whisker--756b6ccd5d--xchpp-eth0" Jul 10 05:40:36.028510 containerd[1580]: 2025-07-10 05:40:35.986 [INFO][3934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali013c703351d ContainerID="660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" Namespace="calico-system" Pod="whisker-756b6ccd5d-xchpp" WorkloadEndpoint="localhost-k8s-whisker--756b6ccd5d--xchpp-eth0" Jul 10 05:40:36.028510 containerd[1580]: 2025-07-10 05:40:35.997 [INFO][3934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" Namespace="calico-system" Pod="whisker-756b6ccd5d-xchpp" WorkloadEndpoint="localhost-k8s-whisker--756b6ccd5d--xchpp-eth0" Jul 10 05:40:36.028579 containerd[1580]: 2025-07-10 05:40:35.997 [INFO][3934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" Namespace="calico-system" Pod="whisker-756b6ccd5d-xchpp" WorkloadEndpoint="localhost-k8s-whisker--756b6ccd5d--xchpp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--756b6ccd5d--xchpp-eth0", GenerateName:"whisker-756b6ccd5d-", Namespace:"calico-system", SelfLink:"", UID:"125c092a-5fc7-4d94-a878-0d116ddfe5d0", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"756b6ccd5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c", Pod:"whisker-756b6ccd5d-xchpp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali013c703351d", MAC:"d2:25:59:2e:84:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:36.028632 containerd[1580]: 2025-07-10 05:40:36.016 [INFO][3934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" Namespace="calico-system" Pod="whisker-756b6ccd5d-xchpp" WorkloadEndpoint="localhost-k8s-whisker--756b6ccd5d--xchpp-eth0" Jul 10 05:40:36.216179 containerd[1580]: time="2025-07-10T05:40:36.216060303Z" level=info msg="connecting to shim 660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c" address="unix:///run/containerd/s/848671b6cf20fabe8c1fcebaae1030b746d1bc14e2f55cf0c0ab81a48304759e" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:40:36.281866 kubelet[2704]: I0710 05:40:36.281812 2704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5231cdcb-289f-4d51-93da-87c1be371766" path="/var/lib/kubelet/pods/5231cdcb-289f-4d51-93da-87c1be371766/volumes" Jul 10 05:40:36.327639 systemd[1]: Started cri-containerd-660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c.scope - libcontainer container 660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c. 
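The IPAM trace above shows the CNI plugin taking the host-wide IPAM lock, confirming this node's affinity for the block 192.168.88.128/26, and handing 192.168.88.129 to whisker-756b6ccd5d-xchpp. The sketch below is only a simplified illustration of the "first free address in an affine block" step — the real allocator in libcalico-go's ipam.go also manages handles, affinities, and the datastore writes logged above:

package main

import (
	"fmt"
	"net/netip"
)

// nextFreeAddr returns the first address in the block that is not already
// allocated, starting one past the block's base address (the log shows
// assignment starting at .129 in the .128/26 block).
func nextFreeAddr(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	allocated := map[netip.Addr]bool{} // nothing claimed yet on this node

	ip, ok := nextFreeAddr(block, allocated)
	fmt.Println(ip, ok) // 192.168.88.129 true — the address given to the whisker pod
}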
Jul 10 05:40:36.340777 systemd-resolved[1421]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 05:40:36.376515 containerd[1580]: time="2025-07-10T05:40:36.376430242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-756b6ccd5d-xchpp,Uid:125c092a-5fc7-4d94-a878-0d116ddfe5d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c\"" Jul 10 05:40:36.378602 containerd[1580]: time="2025-07-10T05:40:36.378551830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 05:40:36.527450 systemd-networkd[1480]: vxlan.calico: Link UP Jul 10 05:40:36.527540 systemd-networkd[1480]: vxlan.calico: Gained carrier Jul 10 05:40:37.518621 systemd-networkd[1480]: cali013c703351d: Gained IPv6LL Jul 10 05:40:37.521424 systemd[1]: Started sshd@7-10.0.0.74:22-10.0.0.1:46450.service - OpenSSH per-connection server daemon (10.0.0.1:46450). Jul 10 05:40:37.587129 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 46450 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:40:37.588965 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:40:37.593686 systemd-logind[1547]: New session 8 of user core. Jul 10 05:40:37.601595 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 05:40:37.745334 sshd[4220]: Connection closed by 10.0.0.1 port 46450 Jul 10 05:40:37.745687 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Jul 10 05:40:37.750666 systemd[1]: sshd@7-10.0.0.74:22-10.0.0.1:46450.service: Deactivated successfully. Jul 10 05:40:37.752872 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 05:40:37.753695 systemd-logind[1547]: Session 8 logged out. Waiting for processes to exit. Jul 10 05:40:37.754955 systemd-logind[1547]: Removed session 8. 
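systemd-networkd's "Link UP" / "Gained carrier" lines above refer to the pod-side veth cali013c703351d and the vxlan.calico overlay device that Calico creates. As an aside, a small Linux-only check of those interfaces from Go might look like the sketch below; it assumes the third-party github.com/vishvananda/netlink package and is not part of anything running on this host:

package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	for _, name := range []string{"cali013c703351d", "vxlan.calico"} {
		link, err := netlink.LinkByName(name)
		if err != nil {
			log.Printf("%s: %v", name, err) // the interface may not exist on this machine
			continue
		}
		attrs := link.Attrs()
		fmt.Printf("%s: type=%s operstate=%s mtu=%d\n", name, link.Type(), attrs.OperState, attrs.MTU)
	}
}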
Jul 10 05:40:37.838698 systemd-networkd[1480]: vxlan.calico: Gained IPv6LL Jul 10 05:40:38.220691 containerd[1580]: time="2025-07-10T05:40:38.220567094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:38.221289 containerd[1580]: time="2025-07-10T05:40:38.221257131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 10 05:40:38.222357 containerd[1580]: time="2025-07-10T05:40:38.222307425Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:38.224221 containerd[1580]: time="2025-07-10T05:40:38.224190313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:38.224748 containerd[1580]: time="2025-07-10T05:40:38.224713295Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.846102305s" Jul 10 05:40:38.224748 containerd[1580]: time="2025-07-10T05:40:38.224745827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 10 05:40:38.226584 containerd[1580]: time="2025-07-10T05:40:38.226545979Z" level=info msg="CreateContainer within sandbox \"660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 05:40:38.234689 containerd[1580]: time="2025-07-10T05:40:38.234652072Z" level=info msg="Container a01e0158a64361a756f7ba2597046cda0d338ccdfadb130458596b9293cdecdc: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:38.245638 containerd[1580]: time="2025-07-10T05:40:38.245585617Z" level=info msg="CreateContainer within sandbox \"660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a01e0158a64361a756f7ba2597046cda0d338ccdfadb130458596b9293cdecdc\"" Jul 10 05:40:38.246613 containerd[1580]: time="2025-07-10T05:40:38.246091017Z" level=info msg="StartContainer for \"a01e0158a64361a756f7ba2597046cda0d338ccdfadb130458596b9293cdecdc\"" Jul 10 05:40:38.247308 containerd[1580]: time="2025-07-10T05:40:38.247275953Z" level=info msg="connecting to shim a01e0158a64361a756f7ba2597046cda0d338ccdfadb130458596b9293cdecdc" address="unix:///run/containerd/s/848671b6cf20fabe8c1fcebaae1030b746d1bc14e2f55cf0c0ab81a48304759e" protocol=ttrpc version=3 Jul 10 05:40:38.274608 systemd[1]: Started cri-containerd-a01e0158a64361a756f7ba2597046cda0d338ccdfadb130458596b9293cdecdc.scope - libcontainer container a01e0158a64361a756f7ba2597046cda0d338ccdfadb130458596b9293cdecdc. 
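For a rough sense of scale, the two pulls recorded above work out to about 158,500,163 bytes / 7.09 s ≈ 22 MB/s for calico/node and 4,661,207 bytes / 1.85 s ≈ 2.5 MB/s for the much smaller whisker image; the "bytes read" counters and the reported image sizes do not measure quite the same thing, so treat these as back-of-envelope figures only.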
Jul 10 05:40:38.320946 containerd[1580]: time="2025-07-10T05:40:38.320897629Z" level=info msg="StartContainer for \"a01e0158a64361a756f7ba2597046cda0d338ccdfadb130458596b9293cdecdc\" returns successfully" Jul 10 05:40:38.322365 containerd[1580]: time="2025-07-10T05:40:38.322321304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 05:40:39.279037 containerd[1580]: time="2025-07-10T05:40:39.278955179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bbf7667d-575nv,Uid:8f75337f-497e-4574-bcdc-82bd81b25109,Namespace:calico-system,Attempt:0,}" Jul 10 05:40:39.279574 containerd[1580]: time="2025-07-10T05:40:39.279168089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rq8bf,Uid:04ca591d-8202-4053-a596-a5753a64e21d,Namespace:calico-system,Attempt:0,}" Jul 10 05:40:39.422686 systemd-networkd[1480]: cali1c6d4e84b74: Link UP Jul 10 05:40:39.423087 systemd-networkd[1480]: cali1c6d4e84b74: Gained carrier Jul 10 05:40:39.437543 containerd[1580]: 2025-07-10 05:40:39.331 [INFO][4280] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--bbf7667d--575nv-eth0 calico-kube-controllers-bbf7667d- calico-system 8f75337f-497e-4574-bcdc-82bd81b25109 793 0 2025-07-10 05:40:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bbf7667d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-bbf7667d-575nv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1c6d4e84b74 [] [] }} ContainerID="675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" Namespace="calico-system" Pod="calico-kube-controllers-bbf7667d-575nv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bbf7667d--575nv-" Jul 10 05:40:39.437543 containerd[1580]: 2025-07-10 05:40:39.332 [INFO][4280] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" Namespace="calico-system" Pod="calico-kube-controllers-bbf7667d-575nv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bbf7667d--575nv-eth0" Jul 10 05:40:39.437543 containerd[1580]: 2025-07-10 05:40:39.374 [INFO][4302] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" HandleID="k8s-pod-network.675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" Workload="localhost-k8s-calico--kube--controllers--bbf7667d--575nv-eth0" Jul 10 05:40:39.437816 containerd[1580]: 2025-07-10 05:40:39.374 [INFO][4302] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" HandleID="k8s-pod-network.675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" Workload="localhost-k8s-calico--kube--controllers--bbf7667d--575nv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-bbf7667d-575nv", "timestamp":"2025-07-10 05:40:39.374211175 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 05:40:39.437816 containerd[1580]: 2025-07-10 05:40:39.374 [INFO][4302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 05:40:39.437816 containerd[1580]: 2025-07-10 05:40:39.374 [INFO][4302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 05:40:39.437816 containerd[1580]: 2025-07-10 05:40:39.374 [INFO][4302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 05:40:39.437816 containerd[1580]: 2025-07-10 05:40:39.386 [INFO][4302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" host="localhost" Jul 10 05:40:39.437816 containerd[1580]: 2025-07-10 05:40:39.391 [INFO][4302] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 05:40:39.437816 containerd[1580]: 2025-07-10 05:40:39.399 [INFO][4302] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 05:40:39.437816 containerd[1580]: 2025-07-10 05:40:39.401 [INFO][4302] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:39.437816 containerd[1580]: 2025-07-10 05:40:39.403 [INFO][4302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:39.437816 containerd[1580]: 2025-07-10 05:40:39.403 [INFO][4302] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" host="localhost" Jul 10 05:40:39.438042 containerd[1580]: 2025-07-10 05:40:39.405 [INFO][4302] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711 Jul 10 05:40:39.438042 containerd[1580]: 2025-07-10 05:40:39.408 [INFO][4302] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" host="localhost" Jul 10 05:40:39.438042 containerd[1580]: 2025-07-10 05:40:39.414 [INFO][4302] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" host="localhost" Jul 10 05:40:39.438042 containerd[1580]: 2025-07-10 05:40:39.414 [INFO][4302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" host="localhost" Jul 10 05:40:39.438042 containerd[1580]: 2025-07-10 05:40:39.414 [INFO][4302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 05:40:39.438042 containerd[1580]: 2025-07-10 05:40:39.414 [INFO][4302] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" HandleID="k8s-pod-network.675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" Workload="localhost-k8s-calico--kube--controllers--bbf7667d--575nv-eth0" Jul 10 05:40:39.438170 containerd[1580]: 2025-07-10 05:40:39.418 [INFO][4280] cni-plugin/k8s.go 418: Populated endpoint ContainerID="675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" Namespace="calico-system" Pod="calico-kube-controllers-bbf7667d-575nv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bbf7667d--575nv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bbf7667d--575nv-eth0", GenerateName:"calico-kube-controllers-bbf7667d-", Namespace:"calico-system", SelfLink:"", UID:"8f75337f-497e-4574-bcdc-82bd81b25109", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bbf7667d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-bbf7667d-575nv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c6d4e84b74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:39.438220 containerd[1580]: 2025-07-10 05:40:39.418 [INFO][4280] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" Namespace="calico-system" Pod="calico-kube-controllers-bbf7667d-575nv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bbf7667d--575nv-eth0" Jul 10 05:40:39.438220 containerd[1580]: 2025-07-10 05:40:39.418 [INFO][4280] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c6d4e84b74 ContainerID="675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" Namespace="calico-system" Pod="calico-kube-controllers-bbf7667d-575nv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bbf7667d--575nv-eth0" Jul 10 05:40:39.438220 containerd[1580]: 2025-07-10 05:40:39.423 [INFO][4280] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" Namespace="calico-system" Pod="calico-kube-controllers-bbf7667d-575nv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bbf7667d--575nv-eth0" Jul 10 05:40:39.438289 containerd[1580]: 2025-07-10 05:40:39.423 [INFO][4280] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" Namespace="calico-system" Pod="calico-kube-controllers-bbf7667d-575nv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bbf7667d--575nv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bbf7667d--575nv-eth0", GenerateName:"calico-kube-controllers-bbf7667d-", Namespace:"calico-system", SelfLink:"", UID:"8f75337f-497e-4574-bcdc-82bd81b25109", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bbf7667d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711", Pod:"calico-kube-controllers-bbf7667d-575nv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c6d4e84b74", MAC:"a2:27:f8:f5:49:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:39.438340 containerd[1580]: 2025-07-10 05:40:39.433 [INFO][4280] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" Namespace="calico-system" Pod="calico-kube-controllers-bbf7667d-575nv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bbf7667d--575nv-eth0" Jul 10 05:40:39.687985 systemd-networkd[1480]: calia961d8a8eae: Link UP Jul 10 05:40:39.690090 systemd-networkd[1480]: calia961d8a8eae: Gained carrier Jul 10 05:40:39.705016 containerd[1580]: 2025-07-10 05:40:39.334 [INFO][4279] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rq8bf-eth0 csi-node-driver- calico-system 04ca591d-8202-4053-a596-a5753a64e21d 680 0 2025-07-10 05:40:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rq8bf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia961d8a8eae [] [] }} ContainerID="f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" Namespace="calico-system" Pod="csi-node-driver-rq8bf" WorkloadEndpoint="localhost-k8s-csi--node--driver--rq8bf-" Jul 10 05:40:39.705016 containerd[1580]: 2025-07-10 05:40:39.334 [INFO][4279] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" 
Namespace="calico-system" Pod="csi-node-driver-rq8bf" WorkloadEndpoint="localhost-k8s-csi--node--driver--rq8bf-eth0" Jul 10 05:40:39.705016 containerd[1580]: 2025-07-10 05:40:39.373 [INFO][4309] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" HandleID="k8s-pod-network.f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" Workload="localhost-k8s-csi--node--driver--rq8bf-eth0" Jul 10 05:40:39.705288 containerd[1580]: 2025-07-10 05:40:39.374 [INFO][4309] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" HandleID="k8s-pod-network.f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" Workload="localhost-k8s-csi--node--driver--rq8bf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c67b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rq8bf", "timestamp":"2025-07-10 05:40:39.373487136 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 05:40:39.705288 containerd[1580]: 2025-07-10 05:40:39.377 [INFO][4309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 05:40:39.705288 containerd[1580]: 2025-07-10 05:40:39.414 [INFO][4309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 05:40:39.705288 containerd[1580]: 2025-07-10 05:40:39.415 [INFO][4309] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 05:40:39.705288 containerd[1580]: 2025-07-10 05:40:39.525 [INFO][4309] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" host="localhost" Jul 10 05:40:39.705288 containerd[1580]: 2025-07-10 05:40:39.549 [INFO][4309] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 05:40:39.705288 containerd[1580]: 2025-07-10 05:40:39.554 [INFO][4309] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 05:40:39.705288 containerd[1580]: 2025-07-10 05:40:39.556 [INFO][4309] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:39.705288 containerd[1580]: 2025-07-10 05:40:39.558 [INFO][4309] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:39.705288 containerd[1580]: 2025-07-10 05:40:39.558 [INFO][4309] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" host="localhost" Jul 10 05:40:39.705948 containerd[1580]: 2025-07-10 05:40:39.560 [INFO][4309] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796 Jul 10 05:40:39.705948 containerd[1580]: 2025-07-10 05:40:39.604 [INFO][4309] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" host="localhost" Jul 10 05:40:39.705948 containerd[1580]: 2025-07-10 05:40:39.680 [INFO][4309] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" host="localhost" Jul 10 05:40:39.705948 containerd[1580]: 2025-07-10 05:40:39.680 [INFO][4309] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" host="localhost" Jul 10 05:40:39.705948 containerd[1580]: 2025-07-10 05:40:39.680 [INFO][4309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 05:40:39.705948 containerd[1580]: 2025-07-10 05:40:39.680 [INFO][4309] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" HandleID="k8s-pod-network.f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" Workload="localhost-k8s-csi--node--driver--rq8bf-eth0" Jul 10 05:40:39.706212 containerd[1580]: 2025-07-10 05:40:39.684 [INFO][4279] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" Namespace="calico-system" Pod="csi-node-driver-rq8bf" WorkloadEndpoint="localhost-k8s-csi--node--driver--rq8bf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rq8bf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"04ca591d-8202-4053-a596-a5753a64e21d", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rq8bf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia961d8a8eae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:39.706281 containerd[1580]: 2025-07-10 05:40:39.684 [INFO][4279] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" Namespace="calico-system" Pod="csi-node-driver-rq8bf" WorkloadEndpoint="localhost-k8s-csi--node--driver--rq8bf-eth0" Jul 10 05:40:39.706281 containerd[1580]: 2025-07-10 05:40:39.684 [INFO][4279] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia961d8a8eae ContainerID="f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" Namespace="calico-system" Pod="csi-node-driver-rq8bf" WorkloadEndpoint="localhost-k8s-csi--node--driver--rq8bf-eth0" Jul 10 05:40:39.706281 containerd[1580]: 2025-07-10 05:40:39.689 [INFO][4279] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" Namespace="calico-system" Pod="csi-node-driver-rq8bf" WorkloadEndpoint="localhost-k8s-csi--node--driver--rq8bf-eth0" Jul 10 05:40:39.706344 containerd[1580]: 2025-07-10 05:40:39.691 [INFO][4279] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" Namespace="calico-system" Pod="csi-node-driver-rq8bf" WorkloadEndpoint="localhost-k8s-csi--node--driver--rq8bf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rq8bf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"04ca591d-8202-4053-a596-a5753a64e21d", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796", Pod:"csi-node-driver-rq8bf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia961d8a8eae", MAC:"d2:bd:d3:ae:94:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:39.706410 containerd[1580]: 2025-07-10 05:40:39.700 [INFO][4279] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" Namespace="calico-system" Pod="csi-node-driver-rq8bf" WorkloadEndpoint="localhost-k8s-csi--node--driver--rq8bf-eth0" Jul 10 05:40:39.726564 containerd[1580]: time="2025-07-10T05:40:39.726511200Z" level=info msg="connecting to shim 675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711" address="unix:///run/containerd/s/726aad2ec77846c9c5084dcb46e21ee2f9aae18f7feed861a552fc90148de32c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:40:39.747595 containerd[1580]: time="2025-07-10T05:40:39.747525654Z" level=info msg="connecting to shim f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796" address="unix:///run/containerd/s/056813f5daa0d0afc217e19e5ab2fbf87deababb8a1d5681f4c6641609313ceb" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:40:39.770642 systemd[1]: Started cri-containerd-675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711.scope - libcontainer container 675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711. Jul 10 05:40:39.794770 systemd[1]: Started cri-containerd-f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796.scope - libcontainer container f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796. 
Jul 10 05:40:39.800308 systemd-resolved[1421]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 05:40:39.812559 systemd-resolved[1421]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 05:40:39.832122 containerd[1580]: time="2025-07-10T05:40:39.832081756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rq8bf,Uid:04ca591d-8202-4053-a596-a5753a64e21d,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796\"" Jul 10 05:40:39.839973 containerd[1580]: time="2025-07-10T05:40:39.839938128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bbf7667d-575nv,Uid:8f75337f-497e-4574-bcdc-82bd81b25109,Namespace:calico-system,Attempt:0,} returns sandbox id \"675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711\"" Jul 10 05:40:40.261655 kubelet[2704]: I0710 05:40:40.261595 2704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 05:40:40.281091 containerd[1580]: time="2025-07-10T05:40:40.281050561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4rvm9,Uid:bd991b85-9626-41bd-9812-76925fc7726a,Namespace:calico-system,Attempt:0,}" Jul 10 05:40:40.281659 containerd[1580]: time="2025-07-10T05:40:40.281116875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rw944,Uid:7b0f0fc2-f807-4f80-a78f-2270dc4cd424,Namespace:kube-system,Attempt:0,}" Jul 10 05:40:40.417432 containerd[1580]: time="2025-07-10T05:40:40.417367165Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9fb186aaa1b9e3af03e93c8a2cb2d36213fa5a05473d27db03e333982cea7bd\" id:\"b5ca0c10b69601c618100485e133979439e8c2a8210e7b3865220d63a23af4db\" pid:4464 exited_at:{seconds:1752126040 nanos:416786304}" Jul 10 05:40:40.423007 systemd-networkd[1480]: caliac5392e17a4: Link UP Jul 10 05:40:40.425010 systemd-networkd[1480]: caliac5392e17a4: Gained carrier Jul 10 05:40:40.487865 containerd[1580]: 2025-07-10 05:40:40.330 [INFO][4450] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--rw944-eth0 coredns-668d6bf9bc- kube-system 7b0f0fc2-f807-4f80-a78f-2270dc4cd424 790 0 2025-07-10 05:40:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-rw944 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliac5392e17a4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-rw944" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rw944-" Jul 10 05:40:40.487865 containerd[1580]: 2025-07-10 05:40:40.331 [INFO][4450] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-rw944" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rw944-eth0" Jul 10 05:40:40.487865 containerd[1580]: 2025-07-10 05:40:40.372 [INFO][4481] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" 
HandleID="k8s-pod-network.8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" Workload="localhost-k8s-coredns--668d6bf9bc--rw944-eth0" Jul 10 05:40:40.488185 containerd[1580]: 2025-07-10 05:40:40.372 [INFO][4481] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" HandleID="k8s-pod-network.8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" Workload="localhost-k8s-coredns--668d6bf9bc--rw944-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b61c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-rw944", "timestamp":"2025-07-10 05:40:40.37233826 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 05:40:40.488185 containerd[1580]: 2025-07-10 05:40:40.372 [INFO][4481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 05:40:40.488185 containerd[1580]: 2025-07-10 05:40:40.372 [INFO][4481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 05:40:40.488185 containerd[1580]: 2025-07-10 05:40:40.372 [INFO][4481] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 05:40:40.488185 containerd[1580]: 2025-07-10 05:40:40.379 [INFO][4481] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" host="localhost" Jul 10 05:40:40.488185 containerd[1580]: 2025-07-10 05:40:40.384 [INFO][4481] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 05:40:40.488185 containerd[1580]: 2025-07-10 05:40:40.390 [INFO][4481] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 05:40:40.488185 containerd[1580]: 2025-07-10 05:40:40.392 [INFO][4481] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:40.488185 containerd[1580]: 2025-07-10 05:40:40.395 [INFO][4481] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:40.488185 containerd[1580]: 2025-07-10 05:40:40.395 [INFO][4481] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" host="localhost" Jul 10 05:40:40.488563 containerd[1580]: 2025-07-10 05:40:40.397 [INFO][4481] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb Jul 10 05:40:40.488563 containerd[1580]: 2025-07-10 05:40:40.402 [INFO][4481] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" host="localhost" Jul 10 05:40:40.488563 containerd[1580]: 2025-07-10 05:40:40.408 [INFO][4481] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" host="localhost" Jul 10 05:40:40.488563 containerd[1580]: 2025-07-10 05:40:40.408 [INFO][4481] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" host="localhost" Jul 10 
05:40:40.488563 containerd[1580]: 2025-07-10 05:40:40.408 [INFO][4481] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 05:40:40.488563 containerd[1580]: 2025-07-10 05:40:40.408 [INFO][4481] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" HandleID="k8s-pod-network.8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" Workload="localhost-k8s-coredns--668d6bf9bc--rw944-eth0" Jul 10 05:40:40.491901 containerd[1580]: 2025-07-10 05:40:40.417 [INFO][4450] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-rw944" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rw944-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rw944-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7b0f0fc2-f807-4f80-a78f-2270dc4cd424", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-rw944", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac5392e17a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:40.492013 containerd[1580]: 2025-07-10 05:40:40.417 [INFO][4450] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-rw944" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rw944-eth0" Jul 10 05:40:40.492013 containerd[1580]: 2025-07-10 05:40:40.417 [INFO][4450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac5392e17a4 ContainerID="8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-rw944" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rw944-eth0" Jul 10 05:40:40.492013 containerd[1580]: 2025-07-10 05:40:40.425 [INFO][4450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-rw944" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rw944-eth0" Jul 10 05:40:40.492095 containerd[1580]: 2025-07-10 05:40:40.426 [INFO][4450] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-rw944" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rw944-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rw944-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7b0f0fc2-f807-4f80-a78f-2270dc4cd424", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb", Pod:"coredns-668d6bf9bc-rw944", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac5392e17a4", MAC:"56:e8:4e:15:35:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:40.492095 containerd[1580]: 2025-07-10 05:40:40.482 [INFO][4450] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-rw944" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rw944-eth0" Jul 10 05:40:40.772491 containerd[1580]: time="2025-07-10T05:40:40.772419012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9fb186aaa1b9e3af03e93c8a2cb2d36213fa5a05473d27db03e333982cea7bd\" id:\"f4aff57ecca28b461adacb034e6d620bc29001e0923461e7a08e9e188a4cbceb\" pid:4515 exited_at:{seconds:1752126040 nanos:768965023}" Jul 10 05:40:40.781026 systemd-networkd[1480]: calia6783cf7ea2: Link UP Jul 10 05:40:40.781219 systemd-networkd[1480]: calia6783cf7ea2: Gained carrier Jul 10 05:40:40.788515 systemd-networkd[1480]: calia961d8a8eae: Gained IPv6LL Jul 10 05:40:40.793769 containerd[1580]: time="2025-07-10T05:40:40.793708998Z" level=info msg="connecting to shim 8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb" address="unix:///run/containerd/s/e4f188429989026f263bebbb073ba7d9a1e3e4ff817514833ba8214dbb454b8c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:40:40.801311 
containerd[1580]: 2025-07-10 05:40:40.348 [INFO][4437] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--4rvm9-eth0 goldmane-768f4c5c69- calico-system bd991b85-9626-41bd-9812-76925fc7726a 800 0 2025-07-10 05:40:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-4rvm9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia6783cf7ea2 [] [] }} ContainerID="8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" Namespace="calico-system" Pod="goldmane-768f4c5c69-4rvm9" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4rvm9-" Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.348 [INFO][4437] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" Namespace="calico-system" Pod="goldmane-768f4c5c69-4rvm9" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4rvm9-eth0" Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.396 [INFO][4491] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" HandleID="k8s-pod-network.8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" Workload="localhost-k8s-goldmane--768f4c5c69--4rvm9-eth0" Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.396 [INFO][4491] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" HandleID="k8s-pod-network.8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" Workload="localhost-k8s-goldmane--768f4c5c69--4rvm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000420090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-4rvm9", "timestamp":"2025-07-10 05:40:40.396074213 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.396 [INFO][4491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.408 [INFO][4491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.408 [INFO][4491] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.480 [INFO][4491] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" host="localhost" Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.515 [INFO][4491] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.752 [INFO][4491] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.754 [INFO][4491] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.757 [INFO][4491] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.757 [INFO][4491] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" host="localhost" Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.759 [INFO][4491] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8 Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.764 [INFO][4491] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" host="localhost" Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.769 [INFO][4491] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" host="localhost" Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.769 [INFO][4491] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" host="localhost" Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.769 [INFO][4491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 05:40:40.801311 containerd[1580]: 2025-07-10 05:40:40.769 [INFO][4491] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" HandleID="k8s-pod-network.8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" Workload="localhost-k8s-goldmane--768f4c5c69--4rvm9-eth0" Jul 10 05:40:40.801822 containerd[1580]: 2025-07-10 05:40:40.775 [INFO][4437] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" Namespace="calico-system" Pod="goldmane-768f4c5c69-4rvm9" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4rvm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--4rvm9-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"bd991b85-9626-41bd-9812-76925fc7726a", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-4rvm9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia6783cf7ea2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:40.801822 containerd[1580]: 2025-07-10 05:40:40.776 [INFO][4437] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" Namespace="calico-system" Pod="goldmane-768f4c5c69-4rvm9" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4rvm9-eth0" Jul 10 05:40:40.801822 containerd[1580]: 2025-07-10 05:40:40.776 [INFO][4437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6783cf7ea2 ContainerID="8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" Namespace="calico-system" Pod="goldmane-768f4c5c69-4rvm9" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4rvm9-eth0" Jul 10 05:40:40.801822 containerd[1580]: 2025-07-10 05:40:40.779 [INFO][4437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" Namespace="calico-system" Pod="goldmane-768f4c5c69-4rvm9" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4rvm9-eth0" Jul 10 05:40:40.801822 containerd[1580]: 2025-07-10 05:40:40.779 [INFO][4437] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" Namespace="calico-system" Pod="goldmane-768f4c5c69-4rvm9" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4rvm9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--4rvm9-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"bd991b85-9626-41bd-9812-76925fc7726a", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8", Pod:"goldmane-768f4c5c69-4rvm9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia6783cf7ea2", MAC:"0a:0e:31:45:2e:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:40.801822 containerd[1580]: 2025-07-10 05:40:40.793 [INFO][4437] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" Namespace="calico-system" Pod="goldmane-768f4c5c69-4rvm9" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4rvm9-eth0" Jul 10 05:40:40.831602 systemd[1]: Started cri-containerd-8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb.scope - libcontainer container 8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb. Jul 10 05:40:40.837520 containerd[1580]: time="2025-07-10T05:40:40.837274253Z" level=info msg="connecting to shim 8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8" address="unix:///run/containerd/s/5ec1324901a68ef271ab2ba031c2323bbcbee2a2ceb3096eff2c8bebfe12fd20" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:40:40.849562 systemd-resolved[1421]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 05:40:40.870639 systemd[1]: Started cri-containerd-8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8.scope - libcontainer container 8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8. 
Jul 10 05:40:40.885085 containerd[1580]: time="2025-07-10T05:40:40.885026585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rw944,Uid:7b0f0fc2-f807-4f80-a78f-2270dc4cd424,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb\"" Jul 10 05:40:40.890091 systemd-resolved[1421]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 05:40:40.891032 containerd[1580]: time="2025-07-10T05:40:40.890991732Z" level=info msg="CreateContainer within sandbox \"8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 05:40:40.916028 containerd[1580]: time="2025-07-10T05:40:40.915981669Z" level=info msg="Container b632bb98babc0135961d4de8598666db49a6aa6ab9948934164a517280d641be: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:40.924758 containerd[1580]: time="2025-07-10T05:40:40.924670423Z" level=info msg="CreateContainer within sandbox \"8e34b6adc08a9de8d83e7f2d8bbd728cc10b2783638040e25e73a949dc6e97bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b632bb98babc0135961d4de8598666db49a6aa6ab9948934164a517280d641be\"" Jul 10 05:40:40.926104 containerd[1580]: time="2025-07-10T05:40:40.926076364Z" level=info msg="StartContainer for \"b632bb98babc0135961d4de8598666db49a6aa6ab9948934164a517280d641be\"" Jul 10 05:40:40.927996 containerd[1580]: time="2025-07-10T05:40:40.927933162Z" level=info msg="connecting to shim b632bb98babc0135961d4de8598666db49a6aa6ab9948934164a517280d641be" address="unix:///run/containerd/s/e4f188429989026f263bebbb073ba7d9a1e3e4ff817514833ba8214dbb454b8c" protocol=ttrpc version=3 Jul 10 05:40:40.943794 containerd[1580]: time="2025-07-10T05:40:40.943746210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4rvm9,Uid:bd991b85-9626-41bd-9812-76925fc7726a,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8\"" Jul 10 05:40:40.960941 systemd[1]: Started cri-containerd-b632bb98babc0135961d4de8598666db49a6aa6ab9948934164a517280d641be.scope - libcontainer container b632bb98babc0135961d4de8598666db49a6aa6ab9948934164a517280d641be. 
Jul 10 05:40:40.999513 containerd[1580]: time="2025-07-10T05:40:40.999150426Z" level=info msg="StartContainer for \"b632bb98babc0135961d4de8598666db49a6aa6ab9948934164a517280d641be\" returns successfully" Jul 10 05:40:41.247225 containerd[1580]: time="2025-07-10T05:40:41.247164684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:41.247946 containerd[1580]: time="2025-07-10T05:40:41.247881280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 10 05:40:41.249065 containerd[1580]: time="2025-07-10T05:40:41.249022763Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:41.251227 containerd[1580]: time="2025-07-10T05:40:41.251187390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:41.251737 containerd[1580]: time="2025-07-10T05:40:41.251684624Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.929326301s" Jul 10 05:40:41.251737 containerd[1580]: time="2025-07-10T05:40:41.251734608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 10 05:40:41.252649 containerd[1580]: time="2025-07-10T05:40:41.252618318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 05:40:41.253882 containerd[1580]: time="2025-07-10T05:40:41.253844972Z" level=info msg="CreateContainer within sandbox \"660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 05:40:41.262328 containerd[1580]: time="2025-07-10T05:40:41.262287612Z" level=info msg="Container 8f7cbe47fcfb2f3e6028a15fb3a7890af766674a07ad834f4fc8ceca98a3044f: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:41.274175 containerd[1580]: time="2025-07-10T05:40:41.274132682Z" level=info msg="CreateContainer within sandbox \"660051e1885c77b4846447f3053590cac3e794e3d2c9152d8ee8b6771ef4541c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8f7cbe47fcfb2f3e6028a15fb3a7890af766674a07ad834f4fc8ceca98a3044f\"" Jul 10 05:40:41.274751 containerd[1580]: time="2025-07-10T05:40:41.274689167Z" level=info msg="StartContainer for \"8f7cbe47fcfb2f3e6028a15fb3a7890af766674a07ad834f4fc8ceca98a3044f\"" Jul 10 05:40:41.276238 containerd[1580]: time="2025-07-10T05:40:41.276203933Z" level=info msg="connecting to shim 8f7cbe47fcfb2f3e6028a15fb3a7890af766674a07ad834f4fc8ceca98a3044f" address="unix:///run/containerd/s/848671b6cf20fabe8c1fcebaae1030b746d1bc14e2f55cf0c0ab81a48304759e" protocol=ttrpc version=3 Jul 10 05:40:41.279783 containerd[1580]: time="2025-07-10T05:40:41.279728493Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-r2mz9,Uid:14665b78-b91b-4ca7-a8cb-a324d20217da,Namespace:kube-system,Attempt:0,}" Jul 10 05:40:41.292224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3175268864.mount: Deactivated successfully. Jul 10 05:40:41.314627 systemd[1]: Started cri-containerd-8f7cbe47fcfb2f3e6028a15fb3a7890af766674a07ad834f4fc8ceca98a3044f.scope - libcontainer container 8f7cbe47fcfb2f3e6028a15fb3a7890af766674a07ad834f4fc8ceca98a3044f. Jul 10 05:40:41.424672 systemd-networkd[1480]: cali1c6d4e84b74: Gained IPv6LL Jul 10 05:40:41.427949 containerd[1580]: time="2025-07-10T05:40:41.427905559Z" level=info msg="StartContainer for \"8f7cbe47fcfb2f3e6028a15fb3a7890af766674a07ad834f4fc8ceca98a3044f\" returns successfully" Jul 10 05:40:41.437390 systemd-networkd[1480]: cali47e1c66176c: Link UP Jul 10 05:40:41.437647 systemd-networkd[1480]: cali47e1c66176c: Gained carrier Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.330 [INFO][4687] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--r2mz9-eth0 coredns-668d6bf9bc- kube-system 14665b78-b91b-4ca7-a8cb-a324d20217da 799 0 2025-07-10 05:40:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-r2mz9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali47e1c66176c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" Namespace="kube-system" Pod="coredns-668d6bf9bc-r2mz9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2mz9-" Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.331 [INFO][4687] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" Namespace="kube-system" Pod="coredns-668d6bf9bc-r2mz9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2mz9-eth0" Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.362 [INFO][4715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" HandleID="k8s-pod-network.289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" Workload="localhost-k8s-coredns--668d6bf9bc--r2mz9-eth0" Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.362 [INFO][4715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" HandleID="k8s-pod-network.289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" Workload="localhost-k8s-coredns--668d6bf9bc--r2mz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000184ab0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-r2mz9", "timestamp":"2025-07-10 05:40:41.36258673 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.362 [INFO][4715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.362 [INFO][4715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.363 [INFO][4715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.370 [INFO][4715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" host="localhost" Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.378 [INFO][4715] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.384 [INFO][4715] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.386 [INFO][4715] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.388 [INFO][4715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.388 [INFO][4715] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" host="localhost" Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.390 [INFO][4715] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596 Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.421 [INFO][4715] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" host="localhost" Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.429 [INFO][4715] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" host="localhost" Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.429 [INFO][4715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" host="localhost" Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.429 [INFO][4715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 05:40:41.452007 containerd[1580]: 2025-07-10 05:40:41.429 [INFO][4715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" HandleID="k8s-pod-network.289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" Workload="localhost-k8s-coredns--668d6bf9bc--r2mz9-eth0" Jul 10 05:40:41.452614 containerd[1580]: 2025-07-10 05:40:41.434 [INFO][4687] cni-plugin/k8s.go 418: Populated endpoint ContainerID="289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" Namespace="kube-system" Pod="coredns-668d6bf9bc-r2mz9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2mz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--r2mz9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"14665b78-b91b-4ca7-a8cb-a324d20217da", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-r2mz9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47e1c66176c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:41.452614 containerd[1580]: 2025-07-10 05:40:41.434 [INFO][4687] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" Namespace="kube-system" Pod="coredns-668d6bf9bc-r2mz9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2mz9-eth0" Jul 10 05:40:41.452614 containerd[1580]: 2025-07-10 05:40:41.434 [INFO][4687] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47e1c66176c ContainerID="289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" Namespace="kube-system" Pod="coredns-668d6bf9bc-r2mz9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2mz9-eth0" Jul 10 05:40:41.452614 containerd[1580]: 2025-07-10 05:40:41.437 [INFO][4687] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" Namespace="kube-system" Pod="coredns-668d6bf9bc-r2mz9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2mz9-eth0" Jul 10 05:40:41.452614 
containerd[1580]: 2025-07-10 05:40:41.438 [INFO][4687] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" Namespace="kube-system" Pod="coredns-668d6bf9bc-r2mz9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2mz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--r2mz9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"14665b78-b91b-4ca7-a8cb-a324d20217da", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596", Pod:"coredns-668d6bf9bc-r2mz9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47e1c66176c", MAC:"e2:c8:03:da:27:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:41.452614 containerd[1580]: 2025-07-10 05:40:41.447 [INFO][4687] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" Namespace="kube-system" Pod="coredns-668d6bf9bc-r2mz9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2mz9-eth0" Jul 10 05:40:41.482245 kubelet[2704]: I0710 05:40:41.482186 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-756b6ccd5d-xchpp" podStartSLOduration=1.6078131199999999 podStartE2EDuration="6.482161007s" podCreationTimestamp="2025-07-10 05:40:35 +0000 UTC" firstStartedPulling="2025-07-10 05:40:36.378103708 +0000 UTC m=+38.239919194" lastFinishedPulling="2025-07-10 05:40:41.252451595 +0000 UTC m=+43.114267081" observedRunningTime="2025-07-10 05:40:41.481781013 +0000 UTC m=+43.343596499" watchObservedRunningTime="2025-07-10 05:40:41.482161007 +0000 UTC m=+43.343976493" Jul 10 05:40:41.487689 systemd-networkd[1480]: caliac5392e17a4: Gained IPv6LL Jul 10 05:40:41.517566 kubelet[2704]: I0710 05:40:41.517322 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rw944" podStartSLOduration=38.517304812 podStartE2EDuration="38.517304812s" podCreationTimestamp="2025-07-10 05:40:03 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 05:40:41.51515783 +0000 UTC m=+43.376973316" watchObservedRunningTime="2025-07-10 05:40:41.517304812 +0000 UTC m=+43.379120299" Jul 10 05:40:41.537059 containerd[1580]: time="2025-07-10T05:40:41.536990222Z" level=info msg="connecting to shim 289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596" address="unix:///run/containerd/s/5b810122045deba221002ba324c5319474f2836bc00211d536ad32081722d637" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:40:41.566640 systemd[1]: Started cri-containerd-289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596.scope - libcontainer container 289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596. Jul 10 05:40:41.581259 systemd-resolved[1421]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 05:40:41.613660 containerd[1580]: time="2025-07-10T05:40:41.613606863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r2mz9,Uid:14665b78-b91b-4ca7-a8cb-a324d20217da,Namespace:kube-system,Attempt:0,} returns sandbox id \"289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596\"" Jul 10 05:40:41.616420 containerd[1580]: time="2025-07-10T05:40:41.616354364Z" level=info msg="CreateContainer within sandbox \"289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 05:40:41.625544 containerd[1580]: time="2025-07-10T05:40:41.625488572Z" level=info msg="Container 65e0227cf37c9d37fccf33681b9c54cefd0f6ff4ece4b8fc60c1f9bce5436e55: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:41.631620 containerd[1580]: time="2025-07-10T05:40:41.631567150Z" level=info msg="CreateContainer within sandbox \"289b32fb390106388330641d258ebe1a388019e023968e50dc8e6f27e7a23596\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65e0227cf37c9d37fccf33681b9c54cefd0f6ff4ece4b8fc60c1f9bce5436e55\"" Jul 10 05:40:41.632060 containerd[1580]: time="2025-07-10T05:40:41.632004843Z" level=info msg="StartContainer for \"65e0227cf37c9d37fccf33681b9c54cefd0f6ff4ece4b8fc60c1f9bce5436e55\"" Jul 10 05:40:41.632898 containerd[1580]: time="2025-07-10T05:40:41.632871811Z" level=info msg="connecting to shim 65e0227cf37c9d37fccf33681b9c54cefd0f6ff4ece4b8fc60c1f9bce5436e55" address="unix:///run/containerd/s/5b810122045deba221002ba324c5319474f2836bc00211d536ad32081722d637" protocol=ttrpc version=3 Jul 10 05:40:41.657608 systemd[1]: Started cri-containerd-65e0227cf37c9d37fccf33681b9c54cefd0f6ff4ece4b8fc60c1f9bce5436e55.scope - libcontainer container 65e0227cf37c9d37fccf33681b9c54cefd0f6ff4ece4b8fc60c1f9bce5436e55. 
Jul 10 05:40:41.690377 containerd[1580]: time="2025-07-10T05:40:41.690194601Z" level=info msg="StartContainer for \"65e0227cf37c9d37fccf33681b9c54cefd0f6ff4ece4b8fc60c1f9bce5436e55\" returns successfully" Jul 10 05:40:41.998723 systemd-networkd[1480]: calia6783cf7ea2: Gained IPv6LL Jul 10 05:40:42.279950 containerd[1580]: time="2025-07-10T05:40:42.279769814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdc6c9c96-8wm6d,Uid:728fba3f-2ffd-498a-a284-400bc31893bf,Namespace:calico-apiserver,Attempt:0,}" Jul 10 05:40:42.279950 containerd[1580]: time="2025-07-10T05:40:42.279909416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdc6c9c96-gzmm7,Uid:ff545102-cf9a-4e56-9635-710d478066a0,Namespace:calico-apiserver,Attempt:0,}" Jul 10 05:40:42.401388 systemd-networkd[1480]: cali09ef9a30f69: Link UP Jul 10 05:40:42.402448 systemd-networkd[1480]: cali09ef9a30f69: Gained carrier Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.330 [INFO][4842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cdc6c9c96--gzmm7-eth0 calico-apiserver-7cdc6c9c96- calico-apiserver ff545102-cf9a-4e56-9635-710d478066a0 796 0 2025-07-10 05:40:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cdc6c9c96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cdc6c9c96-gzmm7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali09ef9a30f69 [] [] }} ContainerID="5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-gzmm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--gzmm7-" Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.330 [INFO][4842] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-gzmm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--gzmm7-eth0" Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.360 [INFO][4867] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" HandleID="k8s-pod-network.5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" Workload="localhost-k8s-calico--apiserver--7cdc6c9c96--gzmm7-eth0" Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.360 [INFO][4867] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" HandleID="k8s-pod-network.5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" Workload="localhost-k8s-calico--apiserver--7cdc6c9c96--gzmm7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000e1450), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cdc6c9c96-gzmm7", "timestamp":"2025-07-10 05:40:42.359987735 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 05:40:42.421249 
containerd[1580]: 2025-07-10 05:40:42.360 [INFO][4867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.360 [INFO][4867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.360 [INFO][4867] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.369 [INFO][4867] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" host="localhost" Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.373 [INFO][4867] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.377 [INFO][4867] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.378 [INFO][4867] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.380 [INFO][4867] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.380 [INFO][4867] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" host="localhost" Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.381 [INFO][4867] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.387 [INFO][4867] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" host="localhost" Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.394 [INFO][4867] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" host="localhost" Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.394 [INFO][4867] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" host="localhost" Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.394 [INFO][4867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 05:40:42.421249 containerd[1580]: 2025-07-10 05:40:42.394 [INFO][4867] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" HandleID="k8s-pod-network.5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" Workload="localhost-k8s-calico--apiserver--7cdc6c9c96--gzmm7-eth0" Jul 10 05:40:42.422046 containerd[1580]: 2025-07-10 05:40:42.398 [INFO][4842] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-gzmm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--gzmm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cdc6c9c96--gzmm7-eth0", GenerateName:"calico-apiserver-7cdc6c9c96-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff545102-cf9a-4e56-9635-710d478066a0", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cdc6c9c96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cdc6c9c96-gzmm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09ef9a30f69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:42.422046 containerd[1580]: 2025-07-10 05:40:42.398 [INFO][4842] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-gzmm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--gzmm7-eth0" Jul 10 05:40:42.422046 containerd[1580]: 2025-07-10 05:40:42.398 [INFO][4842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09ef9a30f69 ContainerID="5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-gzmm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--gzmm7-eth0" Jul 10 05:40:42.422046 containerd[1580]: 2025-07-10 05:40:42.404 [INFO][4842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-gzmm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--gzmm7-eth0" Jul 10 05:40:42.422046 containerd[1580]: 2025-07-10 05:40:42.404 [INFO][4842] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-gzmm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--gzmm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cdc6c9c96--gzmm7-eth0", GenerateName:"calico-apiserver-7cdc6c9c96-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff545102-cf9a-4e56-9635-710d478066a0", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cdc6c9c96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab", Pod:"calico-apiserver-7cdc6c9c96-gzmm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09ef9a30f69", MAC:"4e:80:e2:a6:57:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:42.422046 containerd[1580]: 2025-07-10 05:40:42.414 [INFO][4842] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-gzmm7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--gzmm7-eth0" Jul 10 05:40:42.449665 containerd[1580]: time="2025-07-10T05:40:42.449608588Z" level=info msg="connecting to shim 5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab" address="unix:///run/containerd/s/2bfca89c2cd643d7d6b722c5e6d756de3ef2a506c487121a39e4c870cc926baa" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:40:42.483733 systemd[1]: Started cri-containerd-5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab.scope - libcontainer container 5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab. 
Jul 10 05:40:42.504006 systemd-resolved[1421]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 05:40:42.507048 kubelet[2704]: I0710 05:40:42.506982 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-r2mz9" podStartSLOduration=39.506961643 podStartE2EDuration="39.506961643s" podCreationTimestamp="2025-07-10 05:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 05:40:42.492805303 +0000 UTC m=+44.354620779" watchObservedRunningTime="2025-07-10 05:40:42.506961643 +0000 UTC m=+44.368777129" Jul 10 05:40:42.535385 systemd-networkd[1480]: cali6bb45c47866: Link UP Jul 10 05:40:42.537677 systemd-networkd[1480]: cali6bb45c47866: Gained carrier Jul 10 05:40:42.544026 containerd[1580]: time="2025-07-10T05:40:42.543975134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdc6c9c96-gzmm7,Uid:ff545102-cf9a-4e56-9635-710d478066a0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab\"" Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.327 [INFO][4832] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cdc6c9c96--8wm6d-eth0 calico-apiserver-7cdc6c9c96- calico-apiserver 728fba3f-2ffd-498a-a284-400bc31893bf 801 0 2025-07-10 05:40:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cdc6c9c96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cdc6c9c96-8wm6d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6bb45c47866 [] [] }} ContainerID="1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-8wm6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--8wm6d-" Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.327 [INFO][4832] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-8wm6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--8wm6d-eth0" Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.368 [INFO][4861] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" HandleID="k8s-pod-network.1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" Workload="localhost-k8s-calico--apiserver--7cdc6c9c96--8wm6d-eth0" Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.368 [INFO][4861] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" HandleID="k8s-pod-network.1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" Workload="localhost-k8s-calico--apiserver--7cdc6c9c96--8wm6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d61e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cdc6c9c96-8wm6d", "timestamp":"2025-07-10 05:40:42.368285651 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.368 [INFO][4861] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.394 [INFO][4861] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.395 [INFO][4861] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.469 [INFO][4861] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" host="localhost" Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.478 [INFO][4861] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.484 [INFO][4861] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.493 [INFO][4861] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.499 [INFO][4861] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.499 [INFO][4861] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" host="localhost" Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.502 [INFO][4861] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.511 [INFO][4861] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" host="localhost" Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.523 [INFO][4861] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" host="localhost" Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.523 [INFO][4861] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" host="localhost" Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.523 [INFO][4861] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 05:40:42.558386 containerd[1580]: 2025-07-10 05:40:42.523 [INFO][4861] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" HandleID="k8s-pod-network.1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" Workload="localhost-k8s-calico--apiserver--7cdc6c9c96--8wm6d-eth0" Jul 10 05:40:42.559115 containerd[1580]: 2025-07-10 05:40:42.526 [INFO][4832] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-8wm6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--8wm6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cdc6c9c96--8wm6d-eth0", GenerateName:"calico-apiserver-7cdc6c9c96-", Namespace:"calico-apiserver", SelfLink:"", UID:"728fba3f-2ffd-498a-a284-400bc31893bf", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cdc6c9c96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cdc6c9c96-8wm6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6bb45c47866", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:42.559115 containerd[1580]: 2025-07-10 05:40:42.527 [INFO][4832] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-8wm6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--8wm6d-eth0" Jul 10 05:40:42.559115 containerd[1580]: 2025-07-10 05:40:42.527 [INFO][4832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6bb45c47866 ContainerID="1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-8wm6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--8wm6d-eth0" Jul 10 05:40:42.559115 containerd[1580]: 2025-07-10 05:40:42.537 [INFO][4832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-8wm6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--8wm6d-eth0" Jul 10 05:40:42.559115 containerd[1580]: 2025-07-10 05:40:42.538 [INFO][4832] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-8wm6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--8wm6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cdc6c9c96--8wm6d-eth0", GenerateName:"calico-apiserver-7cdc6c9c96-", Namespace:"calico-apiserver", SelfLink:"", UID:"728fba3f-2ffd-498a-a284-400bc31893bf", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 5, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cdc6c9c96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a", Pod:"calico-apiserver-7cdc6c9c96-8wm6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6bb45c47866", MAC:"2a:7b:06:39:04:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 05:40:42.559115 containerd[1580]: 2025-07-10 05:40:42.551 [INFO][4832] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" Namespace="calico-apiserver" Pod="calico-apiserver-7cdc6c9c96-8wm6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdc6c9c96--8wm6d-eth0" Jul 10 05:40:42.596421 containerd[1580]: time="2025-07-10T05:40:42.596358785Z" level=info msg="connecting to shim 1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a" address="unix:///run/containerd/s/69159c596f0cf2bd509b47adf1a06057fd3bb7b7042f71d91f78a81ef237c945" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:40:42.620605 systemd[1]: Started cri-containerd-1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a.scope - libcontainer container 1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a. Jul 10 05:40:42.634686 systemd-resolved[1421]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 05:40:42.638589 systemd-networkd[1480]: cali47e1c66176c: Gained IPv6LL Jul 10 05:40:42.664793 containerd[1580]: time="2025-07-10T05:40:42.664746315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdc6c9c96-8wm6d,Uid:728fba3f-2ffd-498a-a284-400bc31893bf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a\"" Jul 10 05:40:42.760312 systemd[1]: Started sshd@8-10.0.0.74:22-10.0.0.1:37392.service - OpenSSH per-connection server daemon (10.0.0.1:37392). 
Jul 10 05:40:42.832077 sshd[4993]: Accepted publickey for core from 10.0.0.1 port 37392 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:40:42.834087 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:40:42.838777 systemd-logind[1547]: New session 9 of user core. Jul 10 05:40:42.847605 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 05:40:42.983923 sshd[4996]: Connection closed by 10.0.0.1 port 37392 Jul 10 05:40:42.984316 sshd-session[4993]: pam_unix(sshd:session): session closed for user core Jul 10 05:40:42.989092 systemd[1]: sshd@8-10.0.0.74:22-10.0.0.1:37392.service: Deactivated successfully. Jul 10 05:40:42.991361 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 05:40:42.992191 systemd-logind[1547]: Session 9 logged out. Waiting for processes to exit. Jul 10 05:40:42.993287 systemd-logind[1547]: Removed session 9. Jul 10 05:40:43.137783 containerd[1580]: time="2025-07-10T05:40:43.137638010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:43.138514 containerd[1580]: time="2025-07-10T05:40:43.138440677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 10 05:40:43.139639 containerd[1580]: time="2025-07-10T05:40:43.139615673Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:43.141655 containerd[1580]: time="2025-07-10T05:40:43.141622283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:43.142171 containerd[1580]: time="2025-07-10T05:40:43.142129805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.889482102s" Jul 10 05:40:43.142171 containerd[1580]: time="2025-07-10T05:40:43.142167035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 10 05:40:43.143248 containerd[1580]: time="2025-07-10T05:40:43.143209834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 05:40:43.144213 containerd[1580]: time="2025-07-10T05:40:43.144187089Z" level=info msg="CreateContainer within sandbox \"f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 05:40:43.155432 containerd[1580]: time="2025-07-10T05:40:43.155399887Z" level=info msg="Container 3fc3192943044a0cb41b13b25bf27f88687c1790771c2dedbb5ae98817659685: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:43.169983 containerd[1580]: time="2025-07-10T05:40:43.169938072Z" level=info msg="CreateContainer within sandbox \"f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3fc3192943044a0cb41b13b25bf27f88687c1790771c2dedbb5ae98817659685\"" Jul 10 
05:40:43.171157 containerd[1580]: time="2025-07-10T05:40:43.170459261Z" level=info msg="StartContainer for \"3fc3192943044a0cb41b13b25bf27f88687c1790771c2dedbb5ae98817659685\"" Jul 10 05:40:43.171873 containerd[1580]: time="2025-07-10T05:40:43.171850334Z" level=info msg="connecting to shim 3fc3192943044a0cb41b13b25bf27f88687c1790771c2dedbb5ae98817659685" address="unix:///run/containerd/s/056813f5daa0d0afc217e19e5ab2fbf87deababb8a1d5681f4c6641609313ceb" protocol=ttrpc version=3 Jul 10 05:40:43.199607 systemd[1]: Started cri-containerd-3fc3192943044a0cb41b13b25bf27f88687c1790771c2dedbb5ae98817659685.scope - libcontainer container 3fc3192943044a0cb41b13b25bf27f88687c1790771c2dedbb5ae98817659685. Jul 10 05:40:43.242852 containerd[1580]: time="2025-07-10T05:40:43.242808269Z" level=info msg="StartContainer for \"3fc3192943044a0cb41b13b25bf27f88687c1790771c2dedbb5ae98817659685\" returns successfully" Jul 10 05:40:43.854669 systemd-networkd[1480]: cali09ef9a30f69: Gained IPv6LL Jul 10 05:40:43.982665 systemd-networkd[1480]: cali6bb45c47866: Gained IPv6LL Jul 10 05:40:46.555866 containerd[1580]: time="2025-07-10T05:40:46.555805889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:46.556616 containerd[1580]: time="2025-07-10T05:40:46.556572329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 10 05:40:46.557869 containerd[1580]: time="2025-07-10T05:40:46.557818940Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:46.559796 containerd[1580]: time="2025-07-10T05:40:46.559758291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:46.560276 containerd[1580]: time="2025-07-10T05:40:46.560243041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.417006036s" Jul 10 05:40:46.560327 containerd[1580]: time="2025-07-10T05:40:46.560281583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 10 05:40:46.561385 containerd[1580]: time="2025-07-10T05:40:46.561338568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 05:40:46.571175 containerd[1580]: time="2025-07-10T05:40:46.571134882Z" level=info msg="CreateContainer within sandbox \"675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 05:40:46.579804 containerd[1580]: time="2025-07-10T05:40:46.579770265Z" level=info msg="Container 81c48b10a0654edf37c8bdc7804957b05eea2f39056cdcf8e004786d468bf933: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:46.588720 containerd[1580]: time="2025-07-10T05:40:46.588658985Z" level=info msg="CreateContainer within sandbox 
\"675bfa6a2f0e48a6465102cb531e47c8a999b2ae4b0a033951f2744702516711\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"81c48b10a0654edf37c8bdc7804957b05eea2f39056cdcf8e004786d468bf933\"" Jul 10 05:40:46.590748 containerd[1580]: time="2025-07-10T05:40:46.589241669Z" level=info msg="StartContainer for \"81c48b10a0654edf37c8bdc7804957b05eea2f39056cdcf8e004786d468bf933\"" Jul 10 05:40:46.590748 containerd[1580]: time="2025-07-10T05:40:46.590285389Z" level=info msg="connecting to shim 81c48b10a0654edf37c8bdc7804957b05eea2f39056cdcf8e004786d468bf933" address="unix:///run/containerd/s/726aad2ec77846c9c5084dcb46e21ee2f9aae18f7feed861a552fc90148de32c" protocol=ttrpc version=3 Jul 10 05:40:46.637681 systemd[1]: Started cri-containerd-81c48b10a0654edf37c8bdc7804957b05eea2f39056cdcf8e004786d468bf933.scope - libcontainer container 81c48b10a0654edf37c8bdc7804957b05eea2f39056cdcf8e004786d468bf933. Jul 10 05:40:46.686994 containerd[1580]: time="2025-07-10T05:40:46.686941852Z" level=info msg="StartContainer for \"81c48b10a0654edf37c8bdc7804957b05eea2f39056cdcf8e004786d468bf933\" returns successfully" Jul 10 05:40:47.606299 kubelet[2704]: I0710 05:40:47.606207 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-bbf7667d-575nv" podStartSLOduration=25.886682501 podStartE2EDuration="32.606174817s" podCreationTimestamp="2025-07-10 05:40:15 +0000 UTC" firstStartedPulling="2025-07-10 05:40:39.841583569 +0000 UTC m=+41.703399055" lastFinishedPulling="2025-07-10 05:40:46.561075884 +0000 UTC m=+48.422891371" observedRunningTime="2025-07-10 05:40:47.576345873 +0000 UTC m=+49.438161359" watchObservedRunningTime="2025-07-10 05:40:47.606174817 +0000 UTC m=+49.467990303" Jul 10 05:40:48.000869 systemd[1]: Started sshd@9-10.0.0.74:22-10.0.0.1:37400.service - OpenSSH per-connection server daemon (10.0.0.1:37400). Jul 10 05:40:48.090530 sshd[5101]: Accepted publickey for core from 10.0.0.1 port 37400 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:40:48.092427 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:40:48.098209 systemd-logind[1547]: New session 10 of user core. Jul 10 05:40:48.103776 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 05:40:48.254184 sshd[5104]: Connection closed by 10.0.0.1 port 37400 Jul 10 05:40:48.256028 sshd-session[5101]: pam_unix(sshd:session): session closed for user core Jul 10 05:40:48.267481 systemd[1]: sshd@9-10.0.0.74:22-10.0.0.1:37400.service: Deactivated successfully. Jul 10 05:40:48.269546 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 05:40:48.270492 systemd-logind[1547]: Session 10 logged out. Waiting for processes to exit. Jul 10 05:40:48.272433 systemd-logind[1547]: Removed session 10. Jul 10 05:40:48.273870 systemd[1]: Started sshd@10-10.0.0.74:22-10.0.0.1:37404.service - OpenSSH per-connection server daemon (10.0.0.1:37404). Jul 10 05:40:48.330280 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 37404 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:40:48.332049 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:40:48.339138 systemd-logind[1547]: New session 11 of user core. Jul 10 05:40:48.343624 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 10 05:40:48.507311 kubelet[2704]: I0710 05:40:48.507110 2704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 05:40:48.513691 sshd[5121]: Connection closed by 10.0.0.1 port 37404 Jul 10 05:40:48.514457 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Jul 10 05:40:48.526843 systemd[1]: sshd@10-10.0.0.74:22-10.0.0.1:37404.service: Deactivated successfully. Jul 10 05:40:48.532985 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 05:40:48.534886 systemd-logind[1547]: Session 11 logged out. Waiting for processes to exit. Jul 10 05:40:48.546350 systemd[1]: Started sshd@11-10.0.0.74:22-10.0.0.1:37408.service - OpenSSH per-connection server daemon (10.0.0.1:37408). Jul 10 05:40:48.548005 systemd-logind[1547]: Removed session 11. Jul 10 05:40:48.605900 sshd[5136]: Accepted publickey for core from 10.0.0.1 port 37408 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:40:48.608178 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:40:48.613583 systemd-logind[1547]: New session 12 of user core. Jul 10 05:40:48.624816 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 05:40:48.752961 sshd[5141]: Connection closed by 10.0.0.1 port 37408 Jul 10 05:40:48.755535 sshd-session[5136]: pam_unix(sshd:session): session closed for user core Jul 10 05:40:48.761299 systemd[1]: sshd@11-10.0.0.74:22-10.0.0.1:37408.service: Deactivated successfully. Jul 10 05:40:48.764656 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 05:40:48.765782 systemd-logind[1547]: Session 12 logged out. Waiting for processes to exit. Jul 10 05:40:48.768354 systemd-logind[1547]: Removed session 12. Jul 10 05:40:49.877136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479322240.mount: Deactivated successfully. 
Jul 10 05:40:50.123643 kubelet[2704]: I0710 05:40:50.123586 2704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 05:40:50.189524 containerd[1580]: time="2025-07-10T05:40:50.189355500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81c48b10a0654edf37c8bdc7804957b05eea2f39056cdcf8e004786d468bf933\" id:\"3f7c4cb9eba97a8e7a1ecb4a00d7915db9562c69674ecf7d837326f812d10ba6\" pid:5175 exited_at:{seconds:1752126050 nanos:179126249}" Jul 10 05:40:50.231791 containerd[1580]: time="2025-07-10T05:40:50.231736623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81c48b10a0654edf37c8bdc7804957b05eea2f39056cdcf8e004786d468bf933\" id:\"b2f7755b27ff049dfce819bf0adc993164e42131f6a26d1ef7666cd1375d9463\" pid:5199 exited_at:{seconds:1752126050 nanos:231068870}" Jul 10 05:40:50.979893 containerd[1580]: time="2025-07-10T05:40:50.979817915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:50.980527 containerd[1580]: time="2025-07-10T05:40:50.980452867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 10 05:40:50.981710 containerd[1580]: time="2025-07-10T05:40:50.981673848Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:50.983710 containerd[1580]: time="2025-07-10T05:40:50.983670155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:50.984359 containerd[1580]: time="2025-07-10T05:40:50.984325525Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.422953113s" Jul 10 05:40:50.984359 containerd[1580]: time="2025-07-10T05:40:50.984354290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 10 05:40:50.985443 containerd[1580]: time="2025-07-10T05:40:50.985368142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 05:40:50.986623 containerd[1580]: time="2025-07-10T05:40:50.986577803Z" level=info msg="CreateContainer within sandbox \"8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 05:40:50.995929 containerd[1580]: time="2025-07-10T05:40:50.995870207Z" level=info msg="Container 8a6e4b47fbdc25f94b3b0b22c40575f7d2ee0d637372605c339003eb7afdab00: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:51.003506 containerd[1580]: time="2025-07-10T05:40:51.003440326Z" level=info msg="CreateContainer within sandbox \"8ab81bac488aea10e4230f7442e305b7185b2d62f86c3b58ff22de87cea5afd8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8a6e4b47fbdc25f94b3b0b22c40575f7d2ee0d637372605c339003eb7afdab00\"" Jul 10 05:40:51.005363 containerd[1580]: time="2025-07-10T05:40:51.003981641Z" level=info msg="StartContainer for 
\"8a6e4b47fbdc25f94b3b0b22c40575f7d2ee0d637372605c339003eb7afdab00\"" Jul 10 05:40:51.005363 containerd[1580]: time="2025-07-10T05:40:51.005254952Z" level=info msg="connecting to shim 8a6e4b47fbdc25f94b3b0b22c40575f7d2ee0d637372605c339003eb7afdab00" address="unix:///run/containerd/s/5ec1324901a68ef271ab2ba031c2323bbcbee2a2ceb3096eff2c8bebfe12fd20" protocol=ttrpc version=3 Jul 10 05:40:51.037759 systemd[1]: Started cri-containerd-8a6e4b47fbdc25f94b3b0b22c40575f7d2ee0d637372605c339003eb7afdab00.scope - libcontainer container 8a6e4b47fbdc25f94b3b0b22c40575f7d2ee0d637372605c339003eb7afdab00. Jul 10 05:40:51.125605 containerd[1580]: time="2025-07-10T05:40:51.125549042Z" level=info msg="StartContainer for \"8a6e4b47fbdc25f94b3b0b22c40575f7d2ee0d637372605c339003eb7afdab00\" returns successfully" Jul 10 05:40:52.528262 kubelet[2704]: I0710 05:40:52.528229 2704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 05:40:53.769935 systemd[1]: Started sshd@12-10.0.0.74:22-10.0.0.1:49036.service - OpenSSH per-connection server daemon (10.0.0.1:49036). Jul 10 05:40:53.836003 sshd[5257]: Accepted publickey for core from 10.0.0.1 port 49036 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:40:53.842442 sshd-session[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:40:53.847529 systemd-logind[1547]: New session 13 of user core. Jul 10 05:40:53.856582 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 05:40:54.008899 sshd[5264]: Connection closed by 10.0.0.1 port 49036 Jul 10 05:40:54.009265 sshd-session[5257]: pam_unix(sshd:session): session closed for user core Jul 10 05:40:54.014416 systemd[1]: sshd@12-10.0.0.74:22-10.0.0.1:49036.service: Deactivated successfully. Jul 10 05:40:54.016777 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 05:40:54.017681 systemd-logind[1547]: Session 13 logged out. Waiting for processes to exit. Jul 10 05:40:54.019164 systemd-logind[1547]: Removed session 13. 
Jul 10 05:40:54.303900 containerd[1580]: time="2025-07-10T05:40:54.303836273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:54.304890 containerd[1580]: time="2025-07-10T05:40:54.304838253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 10 05:40:54.306695 containerd[1580]: time="2025-07-10T05:40:54.306659291Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:54.309033 containerd[1580]: time="2025-07-10T05:40:54.308983944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:54.309454 containerd[1580]: time="2025-07-10T05:40:54.309421595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.324015902s" Jul 10 05:40:54.309454 containerd[1580]: time="2025-07-10T05:40:54.309451451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 05:40:54.310546 containerd[1580]: time="2025-07-10T05:40:54.310455846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 05:40:54.311820 containerd[1580]: time="2025-07-10T05:40:54.311796763Z" level=info msg="CreateContainer within sandbox \"5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 05:40:54.338526 kubelet[2704]: I0710 05:40:54.338479 2704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 05:40:54.348532 containerd[1580]: time="2025-07-10T05:40:54.347767589Z" level=info msg="Container fe22f8c604dd009127fd963cea16e649367359892d2b78492491c8c4ea5ad6c6: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:54.362310 containerd[1580]: time="2025-07-10T05:40:54.362268072Z" level=info msg="CreateContainer within sandbox \"5b9f5b28e12f6ea731a7cb578266e051d923e453c27edbf6dcdea31095366fab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fe22f8c604dd009127fd963cea16e649367359892d2b78492491c8c4ea5ad6c6\"" Jul 10 05:40:54.363983 containerd[1580]: time="2025-07-10T05:40:54.363938587Z" level=info msg="StartContainer for \"fe22f8c604dd009127fd963cea16e649367359892d2b78492491c8c4ea5ad6c6\"" Jul 10 05:40:54.372675 containerd[1580]: time="2025-07-10T05:40:54.372616503Z" level=info msg="connecting to shim fe22f8c604dd009127fd963cea16e649367359892d2b78492491c8c4ea5ad6c6" address="unix:///run/containerd/s/2bfca89c2cd643d7d6b722c5e6d756de3ef2a506c487121a39e4c870cc926baa" protocol=ttrpc version=3 Jul 10 05:40:54.403610 systemd[1]: Started cri-containerd-fe22f8c604dd009127fd963cea16e649367359892d2b78492491c8c4ea5ad6c6.scope - libcontainer container fe22f8c604dd009127fd963cea16e649367359892d2b78492491c8c4ea5ad6c6. 
Jul 10 05:40:54.575881 containerd[1580]: time="2025-07-10T05:40:54.575734465Z" level=info msg="StartContainer for \"fe22f8c604dd009127fd963cea16e649367359892d2b78492491c8c4ea5ad6c6\" returns successfully" Jul 10 05:40:54.626083 containerd[1580]: time="2025-07-10T05:40:54.626027751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a6e4b47fbdc25f94b3b0b22c40575f7d2ee0d637372605c339003eb7afdab00\" id:\"b3836b8309a143fad7b054577b99f8f9d123ead889fe8a23bc6ee71b54ec78c3\" pid:5293 exit_status:1 exited_at:{seconds:1752126054 nanos:625581873}" Jul 10 05:40:54.717761 containerd[1580]: time="2025-07-10T05:40:54.716983747Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a6e4b47fbdc25f94b3b0b22c40575f7d2ee0d637372605c339003eb7afdab00\" id:\"89f845973eb3fccc7a8e9184b96452ec79ecf44d0dcb81885de016c478741526\" pid:5355 exit_status:1 exited_at:{seconds:1752126054 nanos:716534815}" Jul 10 05:40:54.720385 containerd[1580]: time="2025-07-10T05:40:54.720340577Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:54.721824 containerd[1580]: time="2025-07-10T05:40:54.720922058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 05:40:54.722630 containerd[1580]: time="2025-07-10T05:40:54.722589929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 412.052119ms" Jul 10 05:40:54.722630 containerd[1580]: time="2025-07-10T05:40:54.722622800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 05:40:54.724221 containerd[1580]: time="2025-07-10T05:40:54.723647183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 05:40:54.726219 containerd[1580]: time="2025-07-10T05:40:54.726165109Z" level=info msg="CreateContainer within sandbox \"1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 05:40:54.739235 containerd[1580]: time="2025-07-10T05:40:54.739172629Z" level=info msg="Container a2081a5d1f340a51e6962c31a079720e72e267679ff9d6f2f409a5578f1ccf27: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:54.744830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1281507063.mount: Deactivated successfully. 
Jul 10 05:40:54.750608 containerd[1580]: time="2025-07-10T05:40:54.750554970Z" level=info msg="CreateContainer within sandbox \"1f2d7b55d1663e853434fe16ef2a16c1b0e44e9d34ff64ff043f37b2ce42182a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a2081a5d1f340a51e6962c31a079720e72e267679ff9d6f2f409a5578f1ccf27\"" Jul 10 05:40:54.751423 containerd[1580]: time="2025-07-10T05:40:54.751358809Z" level=info msg="StartContainer for \"a2081a5d1f340a51e6962c31a079720e72e267679ff9d6f2f409a5578f1ccf27\"" Jul 10 05:40:54.753367 containerd[1580]: time="2025-07-10T05:40:54.753336190Z" level=info msg="connecting to shim a2081a5d1f340a51e6962c31a079720e72e267679ff9d6f2f409a5578f1ccf27" address="unix:///run/containerd/s/69159c596f0cf2bd509b47adf1a06057fd3bb7b7042f71d91f78a81ef237c945" protocol=ttrpc version=3 Jul 10 05:40:54.780783 systemd[1]: Started cri-containerd-a2081a5d1f340a51e6962c31a079720e72e267679ff9d6f2f409a5578f1ccf27.scope - libcontainer container a2081a5d1f340a51e6962c31a079720e72e267679ff9d6f2f409a5578f1ccf27. Jul 10 05:40:54.964995 containerd[1580]: time="2025-07-10T05:40:54.964841873Z" level=info msg="StartContainer for \"a2081a5d1f340a51e6962c31a079720e72e267679ff9d6f2f409a5578f1ccf27\" returns successfully" Jul 10 05:40:55.643647 kubelet[2704]: I0710 05:40:55.643581 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cdc6c9c96-8wm6d" podStartSLOduration=31.586616888000002 podStartE2EDuration="43.643562004s" podCreationTimestamp="2025-07-10 05:40:12 +0000 UTC" firstStartedPulling="2025-07-10 05:40:42.66641037 +0000 UTC m=+44.528225856" lastFinishedPulling="2025-07-10 05:40:54.723355486 +0000 UTC m=+56.585170972" observedRunningTime="2025-07-10 05:40:55.641731548 +0000 UTC m=+57.503547034" watchObservedRunningTime="2025-07-10 05:40:55.643562004 +0000 UTC m=+57.505377480" Jul 10 05:40:55.644881 kubelet[2704]: I0710 05:40:55.644329 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-4rvm9" podStartSLOduration=30.604631666 podStartE2EDuration="40.644321129s" podCreationTimestamp="2025-07-10 05:40:15 +0000 UTC" firstStartedPulling="2025-07-10 05:40:40.945498472 +0000 UTC m=+42.807313948" lastFinishedPulling="2025-07-10 05:40:50.985187904 +0000 UTC m=+52.847003411" observedRunningTime="2025-07-10 05:40:51.5325933 +0000 UTC m=+53.394408786" watchObservedRunningTime="2025-07-10 05:40:55.644321129 +0000 UTC m=+57.506136615" Jul 10 05:40:55.656997 kubelet[2704]: I0710 05:40:55.656779 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cdc6c9c96-gzmm7" podStartSLOduration=31.892817477 podStartE2EDuration="43.656765171s" podCreationTimestamp="2025-07-10 05:40:12 +0000 UTC" firstStartedPulling="2025-07-10 05:40:42.546315931 +0000 UTC m=+44.408131427" lastFinishedPulling="2025-07-10 05:40:54.310263615 +0000 UTC m=+56.172079121" observedRunningTime="2025-07-10 05:40:55.656112887 +0000 UTC m=+57.517928383" watchObservedRunningTime="2025-07-10 05:40:55.656765171 +0000 UTC m=+57.518580647" Jul 10 05:40:56.998072 containerd[1580]: time="2025-07-10T05:40:56.997990439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:56.998683 containerd[1580]: time="2025-07-10T05:40:56.998657010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active 
requests=0, bytes read=14703784" Jul 10 05:40:56.999720 containerd[1580]: time="2025-07-10T05:40:56.999684878Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:57.001779 containerd[1580]: time="2025-07-10T05:40:57.001736879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:40:57.002207 containerd[1580]: time="2025-07-10T05:40:57.002178738Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.278495338s" Jul 10 05:40:57.002207 containerd[1580]: time="2025-07-10T05:40:57.002208304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 10 05:40:57.004566 containerd[1580]: time="2025-07-10T05:40:57.004512607Z" level=info msg="CreateContainer within sandbox \"f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 05:40:57.012451 containerd[1580]: time="2025-07-10T05:40:57.012398966Z" level=info msg="Container 4baa3ea581ae0b3abc0aa76e2e944fe3a3001bb3c798640ad0006b3f3228f03d: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:40:57.021849 containerd[1580]: time="2025-07-10T05:40:57.021806599Z" level=info msg="CreateContainer within sandbox \"f9bab718d8c2bfb949aabec6ed8ec00b69b72e7745a8df360deeaf4ee28f7796\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4baa3ea581ae0b3abc0aa76e2e944fe3a3001bb3c798640ad0006b3f3228f03d\"" Jul 10 05:40:57.022961 containerd[1580]: time="2025-07-10T05:40:57.022314742Z" level=info msg="StartContainer for \"4baa3ea581ae0b3abc0aa76e2e944fe3a3001bb3c798640ad0006b3f3228f03d\"" Jul 10 05:40:57.023775 containerd[1580]: time="2025-07-10T05:40:57.023729878Z" level=info msg="connecting to shim 4baa3ea581ae0b3abc0aa76e2e944fe3a3001bb3c798640ad0006b3f3228f03d" address="unix:///run/containerd/s/056813f5daa0d0afc217e19e5ab2fbf87deababb8a1d5681f4c6641609313ceb" protocol=ttrpc version=3 Jul 10 05:40:57.052713 systemd[1]: Started cri-containerd-4baa3ea581ae0b3abc0aa76e2e944fe3a3001bb3c798640ad0006b3f3228f03d.scope - libcontainer container 4baa3ea581ae0b3abc0aa76e2e944fe3a3001bb3c798640ad0006b3f3228f03d. 
Jul 10 05:40:57.097999 containerd[1580]: time="2025-07-10T05:40:57.097952264Z" level=info msg="StartContainer for \"4baa3ea581ae0b3abc0aa76e2e944fe3a3001bb3c798640ad0006b3f3228f03d\" returns successfully" Jul 10 05:40:57.357535 kubelet[2704]: I0710 05:40:57.357354 2704 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 05:40:57.358448 kubelet[2704]: I0710 05:40:57.358391 2704 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 05:40:57.628697 kubelet[2704]: I0710 05:40:57.628530 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rq8bf" podStartSLOduration=25.458977296 podStartE2EDuration="42.628508804s" podCreationTimestamp="2025-07-10 05:40:15 +0000 UTC" firstStartedPulling="2025-07-10 05:40:39.833528855 +0000 UTC m=+41.695344341" lastFinishedPulling="2025-07-10 05:40:57.003060373 +0000 UTC m=+58.864875849" observedRunningTime="2025-07-10 05:40:57.627656455 +0000 UTC m=+59.489472111" watchObservedRunningTime="2025-07-10 05:40:57.628508804 +0000 UTC m=+59.490324280" Jul 10 05:40:59.026627 systemd[1]: Started sshd@13-10.0.0.74:22-10.0.0.1:49038.service - OpenSSH per-connection server daemon (10.0.0.1:49038). Jul 10 05:40:59.091686 sshd[5459]: Accepted publickey for core from 10.0.0.1 port 49038 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:40:59.093232 sshd-session[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:40:59.098347 systemd-logind[1547]: New session 14 of user core. Jul 10 05:40:59.108610 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 05:40:59.232816 sshd[5462]: Connection closed by 10.0.0.1 port 49038 Jul 10 05:40:59.233172 sshd-session[5459]: pam_unix(sshd:session): session closed for user core Jul 10 05:40:59.238381 systemd[1]: sshd@13-10.0.0.74:22-10.0.0.1:49038.service: Deactivated successfully. Jul 10 05:40:59.240685 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 05:40:59.241433 systemd-logind[1547]: Session 14 logged out. Waiting for processes to exit. Jul 10 05:40:59.242762 systemd-logind[1547]: Removed session 14. Jul 10 05:41:04.247559 systemd[1]: Started sshd@14-10.0.0.74:22-10.0.0.1:58354.service - OpenSSH per-connection server daemon (10.0.0.1:58354). Jul 10 05:41:04.317860 sshd[5479]: Accepted publickey for core from 10.0.0.1 port 58354 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:41:04.319920 sshd-session[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:41:04.325613 systemd-logind[1547]: New session 15 of user core. Jul 10 05:41:04.335669 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 05:41:04.464392 sshd[5482]: Connection closed by 10.0.0.1 port 58354 Jul 10 05:41:04.464822 sshd-session[5479]: pam_unix(sshd:session): session closed for user core Jul 10 05:41:04.469047 systemd[1]: sshd@14-10.0.0.74:22-10.0.0.1:58354.service: Deactivated successfully. Jul 10 05:41:04.471698 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 05:41:04.475154 systemd-logind[1547]: Session 15 logged out. Waiting for processes to exit. Jul 10 05:41:04.476254 systemd-logind[1547]: Removed session 15. 
Jul 10 05:41:08.472276 containerd[1580]: time="2025-07-10T05:41:08.472215740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a6e4b47fbdc25f94b3b0b22c40575f7d2ee0d637372605c339003eb7afdab00\" id:\"bd63865387476bf88d30e2c9ddaaaff3aadca96aaeb1e1573f989c3d3ee8621d\" pid:5510 exited_at:{seconds:1752126068 nanos:471724414}" Jul 10 05:41:09.482284 systemd[1]: Started sshd@15-10.0.0.74:22-10.0.0.1:58368.service - OpenSSH per-connection server daemon (10.0.0.1:58368). Jul 10 05:41:09.566115 sshd[5523]: Accepted publickey for core from 10.0.0.1 port 58368 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:41:09.568176 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:41:09.576416 systemd-logind[1547]: New session 16 of user core. Jul 10 05:41:09.582692 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 05:41:09.717632 sshd[5526]: Connection closed by 10.0.0.1 port 58368 Jul 10 05:41:09.718153 sshd-session[5523]: pam_unix(sshd:session): session closed for user core Jul 10 05:41:09.728202 systemd[1]: sshd@15-10.0.0.74:22-10.0.0.1:58368.service: Deactivated successfully. Jul 10 05:41:09.730204 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 05:41:09.731522 systemd-logind[1547]: Session 16 logged out. Waiting for processes to exit. Jul 10 05:41:09.735575 systemd[1]: Started sshd@16-10.0.0.74:22-10.0.0.1:57406.service - OpenSSH per-connection server daemon (10.0.0.1:57406). Jul 10 05:41:09.738694 systemd-logind[1547]: Removed session 16. Jul 10 05:41:09.790014 sshd[5540]: Accepted publickey for core from 10.0.0.1 port 57406 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:41:09.791670 sshd-session[5540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:41:09.796689 systemd-logind[1547]: New session 17 of user core. Jul 10 05:41:09.807739 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 05:41:10.141016 sshd[5544]: Connection closed by 10.0.0.1 port 57406 Jul 10 05:41:10.144330 sshd-session[5540]: pam_unix(sshd:session): session closed for user core Jul 10 05:41:10.155513 systemd[1]: sshd@16-10.0.0.74:22-10.0.0.1:57406.service: Deactivated successfully. Jul 10 05:41:10.157610 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 05:41:10.158607 systemd-logind[1547]: Session 17 logged out. Waiting for processes to exit. Jul 10 05:41:10.161557 systemd[1]: Started sshd@17-10.0.0.74:22-10.0.0.1:57422.service - OpenSSH per-connection server daemon (10.0.0.1:57422). Jul 10 05:41:10.162209 systemd-logind[1547]: Removed session 17. Jul 10 05:41:10.226041 sshd[5555]: Accepted publickey for core from 10.0.0.1 port 57422 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:41:10.227351 sshd-session[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:41:10.232182 systemd-logind[1547]: New session 18 of user core. Jul 10 05:41:10.240608 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 10 05:41:10.520988 containerd[1580]: time="2025-07-10T05:41:10.520923850Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9fb186aaa1b9e3af03e93c8a2cb2d36213fa5a05473d27db03e333982cea7bd\" id:\"06b81f70cde972734cf7cbfdf22bad0ec47c387b5ef9000263df42412649cbde\" pid:5578 exited_at:{seconds:1752126070 nanos:520559711}" Jul 10 05:41:11.182993 sshd[5558]: Connection closed by 10.0.0.1 port 57422 Jul 10 05:41:11.183747 sshd-session[5555]: pam_unix(sshd:session): session closed for user core Jul 10 05:41:11.196281 systemd[1]: sshd@17-10.0.0.74:22-10.0.0.1:57422.service: Deactivated successfully. Jul 10 05:41:11.199642 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 05:41:11.205141 systemd-logind[1547]: Session 18 logged out. Waiting for processes to exit. Jul 10 05:41:11.209882 systemd[1]: Started sshd@18-10.0.0.74:22-10.0.0.1:57426.service - OpenSSH per-connection server daemon (10.0.0.1:57426). Jul 10 05:41:11.212456 systemd-logind[1547]: Removed session 18. Jul 10 05:41:11.274193 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 57426 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:41:11.276182 sshd-session[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:41:11.284642 systemd-logind[1547]: New session 19 of user core. Jul 10 05:41:11.288677 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 05:41:11.569833 sshd[5606]: Connection closed by 10.0.0.1 port 57426 Jul 10 05:41:11.570711 sshd-session[5602]: pam_unix(sshd:session): session closed for user core Jul 10 05:41:11.582640 systemd[1]: sshd@18-10.0.0.74:22-10.0.0.1:57426.service: Deactivated successfully. Jul 10 05:41:11.584804 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 05:41:11.586169 systemd-logind[1547]: Session 19 logged out. Waiting for processes to exit. Jul 10 05:41:11.589284 systemd[1]: Started sshd@19-10.0.0.74:22-10.0.0.1:57436.service - OpenSSH per-connection server daemon (10.0.0.1:57436). Jul 10 05:41:11.590622 systemd-logind[1547]: Removed session 19. Jul 10 05:41:11.648366 sshd[5617]: Accepted publickey for core from 10.0.0.1 port 57436 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:41:11.650635 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:41:11.656636 systemd-logind[1547]: New session 20 of user core. Jul 10 05:41:11.666759 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 05:41:11.793155 sshd[5620]: Connection closed by 10.0.0.1 port 57436 Jul 10 05:41:11.793619 sshd-session[5617]: pam_unix(sshd:session): session closed for user core Jul 10 05:41:11.798900 systemd[1]: sshd@19-10.0.0.74:22-10.0.0.1:57436.service: Deactivated successfully. Jul 10 05:41:11.801239 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 05:41:11.802027 systemd-logind[1547]: Session 20 logged out. Waiting for processes to exit. Jul 10 05:41:11.803341 systemd-logind[1547]: Removed session 20. Jul 10 05:41:12.279814 kubelet[2704]: E0710 05:41:12.279744 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:41:16.807056 systemd[1]: Started sshd@20-10.0.0.74:22-10.0.0.1:57442.service - OpenSSH per-connection server daemon (10.0.0.1:57442). 
Jul 10 05:41:16.862294 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 57442 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:41:16.864060 sshd-session[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:41:16.869047 systemd-logind[1547]: New session 21 of user core. Jul 10 05:41:16.880608 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 05:41:17.002394 sshd[5644]: Connection closed by 10.0.0.1 port 57442 Jul 10 05:41:17.002905 sshd-session[5635]: pam_unix(sshd:session): session closed for user core Jul 10 05:41:17.007289 systemd[1]: sshd@20-10.0.0.74:22-10.0.0.1:57442.service: Deactivated successfully. Jul 10 05:41:17.009972 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 05:41:17.012740 systemd-logind[1547]: Session 21 logged out. Waiting for processes to exit. Jul 10 05:41:17.013862 systemd-logind[1547]: Removed session 21. Jul 10 05:41:20.269744 containerd[1580]: time="2025-07-10T05:41:20.269690896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81c48b10a0654edf37c8bdc7804957b05eea2f39056cdcf8e004786d468bf933\" id:\"b406e870a7c50f8e1ebf01354671904ab257b7e3d6cf55e233abea997e78949c\" pid:5668 exited_at:{seconds:1752126080 nanos:269320168}" Jul 10 05:41:22.019996 systemd[1]: Started sshd@21-10.0.0.74:22-10.0.0.1:42176.service - OpenSSH per-connection server daemon (10.0.0.1:42176). Jul 10 05:41:22.090121 sshd[5679]: Accepted publickey for core from 10.0.0.1 port 42176 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:41:22.091987 sshd-session[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:41:22.096755 systemd-logind[1547]: New session 22 of user core. Jul 10 05:41:22.107610 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 05:41:22.275596 sshd[5682]: Connection closed by 10.0.0.1 port 42176 Jul 10 05:41:22.276737 sshd-session[5679]: pam_unix(sshd:session): session closed for user core Jul 10 05:41:22.283587 systemd[1]: sshd@21-10.0.0.74:22-10.0.0.1:42176.service: Deactivated successfully. Jul 10 05:41:22.286947 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 05:41:22.288270 systemd-logind[1547]: Session 22 logged out. Waiting for processes to exit. Jul 10 05:41:22.290009 systemd-logind[1547]: Removed session 22. Jul 10 05:41:24.707322 containerd[1580]: time="2025-07-10T05:41:24.707263082Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a6e4b47fbdc25f94b3b0b22c40575f7d2ee0d637372605c339003eb7afdab00\" id:\"9b0a13dc4324429a2bd18ec561d503556431b0ce48074ec36e6beb3171db8167\" pid:5709 exited_at:{seconds:1752126084 nanos:706941379}" Jul 10 05:41:27.292874 systemd[1]: Started sshd@22-10.0.0.74:22-10.0.0.1:42182.service - OpenSSH per-connection server daemon (10.0.0.1:42182). Jul 10 05:41:27.349994 sshd[5724]: Accepted publickey for core from 10.0.0.1 port 42182 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:41:27.351630 sshd-session[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:41:27.355936 systemd-logind[1547]: New session 23 of user core. Jul 10 05:41:27.366602 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 10 05:41:27.477849 sshd[5727]: Connection closed by 10.0.0.1 port 42182 Jul 10 05:41:27.478221 sshd-session[5724]: pam_unix(sshd:session): session closed for user core Jul 10 05:41:27.483197 systemd[1]: sshd@22-10.0.0.74:22-10.0.0.1:42182.service: Deactivated successfully. Jul 10 05:41:27.489590 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 05:41:27.491896 systemd-logind[1547]: Session 23 logged out. Waiting for processes to exit. Jul 10 05:41:27.495273 systemd-logind[1547]: Removed session 23. Jul 10 05:41:29.279001 kubelet[2704]: E0710 05:41:29.278955 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:41:32.497706 systemd[1]: Started sshd@23-10.0.0.74:22-10.0.0.1:46078.service - OpenSSH per-connection server daemon (10.0.0.1:46078). Jul 10 05:41:32.551261 sshd[5741]: Accepted publickey for core from 10.0.0.1 port 46078 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:41:32.553252 sshd-session[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:41:32.557801 systemd-logind[1547]: New session 24 of user core. Jul 10 05:41:32.567608 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 10 05:41:32.689973 sshd[5744]: Connection closed by 10.0.0.1 port 46078 Jul 10 05:41:32.690710 sshd-session[5741]: pam_unix(sshd:session): session closed for user core Jul 10 05:41:32.695384 systemd[1]: sshd@23-10.0.0.74:22-10.0.0.1:46078.service: Deactivated successfully. Jul 10 05:41:32.697791 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 05:41:32.698582 systemd-logind[1547]: Session 24 logged out. Waiting for processes to exit. Jul 10 05:41:32.699831 systemd-logind[1547]: Removed session 24. Jul 10 05:41:33.280508 kubelet[2704]: E0710 05:41:33.279628 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:41:33.280508 kubelet[2704]: E0710 05:41:33.280115 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:41:33.726817 containerd[1580]: time="2025-07-10T05:41:33.726684023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81c48b10a0654edf37c8bdc7804957b05eea2f39056cdcf8e004786d468bf933\" id:\"9eb23307d7d423a80f6af0003983d9b0d7930d16c09ae432b366edcf7e66873a\" pid:5769 exited_at:{seconds:1752126093 nanos:726317335}"