Jul 7 06:01:40.262679 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025
Jul 7 06:01:40.262720 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:01:40.262743 kernel: BIOS-provided physical RAM map:
Jul 7 06:01:40.262752 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 7 06:01:40.262761 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 7 06:01:40.262769 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 7 06:01:40.262780 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 7 06:01:40.262789 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 7 06:01:40.262808 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 7 06:01:40.262828 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 7 06:01:40.262838 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jul 7 06:01:40.262847 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 7 06:01:40.262855 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 7 06:01:40.262865 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 7 06:01:40.262879 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 7 06:01:40.262889 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 7 06:01:40.262902 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 7 06:01:40.262912 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 7 06:01:40.262922 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 7 06:01:40.262931 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 7 06:01:40.262941 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 7 06:01:40.262950 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 7 06:01:40.262960 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 7 06:01:40.262970 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 06:01:40.262979 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 7 06:01:40.262992 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 06:01:40.263002 kernel: NX (Execute Disable) protection: active
Jul 7 06:01:40.263012 kernel: APIC: Static calls initialized
Jul 7 06:01:40.263022 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jul 7 06:01:40.263031 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jul 7 06:01:40.263040 kernel: extended physical RAM map:
Jul 7 06:01:40.263050 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 7 06:01:40.263059 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 7 06:01:40.263068 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 7 06:01:40.263078 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 7 06:01:40.263087 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 7 06:01:40.263100 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 7 06:01:40.263110 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 7 06:01:40.263119 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jul 7 06:01:40.263128 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jul 7 06:01:40.263142 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jul 7 06:01:40.263151 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jul 7 06:01:40.263163 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jul 7 06:01:40.263174 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 7 06:01:40.263186 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 7 06:01:40.263197 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 7 06:01:40.263209 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 7 06:01:40.263221 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 7 06:01:40.263233 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 7 06:01:40.263244 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 7 06:01:40.263254 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 7 06:01:40.263268 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 7 06:01:40.263308 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 7 06:01:40.263319 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 7 06:01:40.263329 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 7 06:01:40.263339 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 06:01:40.263350 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 7 06:01:40.263360 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 06:01:40.263374 kernel: efi: EFI v2.7 by EDK II
Jul 7 06:01:40.263384 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jul 7 06:01:40.263394 kernel: random: crng init done
Jul 7 06:01:40.263407 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jul 7 06:01:40.263418 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jul 7 06:01:40.263435 kernel: secureboot: Secure boot disabled
Jul 7 06:01:40.263445 kernel: SMBIOS 2.8 present.
Jul 7 06:01:40.263455 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jul 7 06:01:40.263465 kernel: DMI: Memory slots populated: 1/1
Jul 7 06:01:40.263475 kernel: Hypervisor detected: KVM
Jul 7 06:01:40.263485 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 06:01:40.263495 kernel: kvm-clock: using sched offset of 5652877448 cycles
Jul 7 06:01:40.263506 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 06:01:40.263517 kernel: tsc: Detected 2794.748 MHz processor
Jul 7 06:01:40.263528 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 06:01:40.263541 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 06:01:40.263551 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jul 7 06:01:40.263562 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 7 06:01:40.263572 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 06:01:40.263583 kernel: Using GB pages for direct mapping
Jul 7 06:01:40.263593 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:01:40.263604 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 7 06:01:40.263614 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 7 06:01:40.263625 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:01:40.263639 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:01:40.263649 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 7 06:01:40.263660 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:01:40.263670 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:01:40.263681 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:01:40.263692 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:01:40.263702 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 7 06:01:40.263712 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 7 06:01:40.263723 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 7 06:01:40.263736 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 7 06:01:40.263747 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 7 06:01:40.263757 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 7 06:01:40.263767 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 7 06:01:40.263778 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 7 06:01:40.263801 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 7 06:01:40.263813 kernel: No NUMA configuration found
Jul 7 06:01:40.263833 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jul 7 06:01:40.263843 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jul 7 06:01:40.263858 kernel: Zone ranges:
Jul 7 06:01:40.263869 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 06:01:40.263879 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jul 7 06:01:40.263890 kernel: Normal empty
Jul 7 06:01:40.263900 kernel: Device empty
Jul 7 06:01:40.263910 kernel: Movable zone start for each node
Jul 7 06:01:40.263920 kernel: Early memory node ranges
Jul 7 06:01:40.263931 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 7 06:01:40.263941 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 7 06:01:40.263955 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 7 06:01:40.263969 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jul 7 06:01:40.263979 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jul 7 06:01:40.263990 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jul 7 06:01:40.264000 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jul 7 06:01:40.264010 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jul 7 06:01:40.264021 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jul 7 06:01:40.264034 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 06:01:40.264045 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 7 06:01:40.264067 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 7 06:01:40.264078 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 06:01:40.264089 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jul 7 06:01:40.264102 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jul 7 06:01:40.264116 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 7 06:01:40.264126 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jul 7 06:01:40.264137 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jul 7 06:01:40.264148 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 06:01:40.264159 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 06:01:40.264173 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 06:01:40.264184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 06:01:40.264195 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 06:01:40.264205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 06:01:40.264216 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 06:01:40.264227 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 06:01:40.264238 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 06:01:40.264249 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 06:01:40.264259 kernel: TSC deadline timer available
Jul 7 06:01:40.264273 kernel: CPU topo: Max. logical packages: 1
Jul 7 06:01:40.264422 kernel: CPU topo: Max. logical dies: 1
Jul 7 06:01:40.264434 kernel: CPU topo: Max. dies per package: 1
Jul 7 06:01:40.264444 kernel: CPU topo: Max. threads per core: 1
Jul 7 06:01:40.264455 kernel: CPU topo: Num. cores per package: 4
Jul 7 06:01:40.264465 kernel: CPU topo: Num. threads per package: 4
Jul 7 06:01:40.264475 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 7 06:01:40.264486 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 06:01:40.264496 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 7 06:01:40.264510 kernel: kvm-guest: setup PV sched yield
Jul 7 06:01:40.264521 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jul 7 06:01:40.264531 kernel: Booting paravirtualized kernel on KVM
Jul 7 06:01:40.264541 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 06:01:40.264552 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 7 06:01:40.264562 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 7 06:01:40.264573 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 7 06:01:40.264583 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 7 06:01:40.264593 kernel: kvm-guest: PV spinlocks enabled
Jul 7 06:01:40.264608 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 06:01:40.264621 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:01:40.264636 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:01:40.264647 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 06:01:40.264658 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 06:01:40.264669 kernel: Fallback order for Node 0: 0
Jul 7 06:01:40.264680 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jul 7 06:01:40.264691 kernel: Policy zone: DMA32
Jul 7 06:01:40.264704 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:01:40.264715 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 7 06:01:40.264726 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 06:01:40.264737 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 06:01:40.264748 kernel: Dynamic Preempt: voluntary
Jul 7 06:01:40.264759 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:01:40.264770 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:01:40.264781 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 7 06:01:40.264792 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:01:40.264806 kernel: Rude variant of Tasks RCU enabled.
Jul 7 06:01:40.264841 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:01:40.264853 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:01:40.264868 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 7 06:01:40.264879 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:01:40.264890 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:01:40.264907 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:01:40.264918 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 7 06:01:40.264929 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:01:40.264944 kernel: Console: colour dummy device 80x25
Jul 7 06:01:40.264955 kernel: printk: legacy console [ttyS0] enabled
Jul 7 06:01:40.264966 kernel: ACPI: Core revision 20240827
Jul 7 06:01:40.264977 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 7 06:01:40.264988 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 06:01:40.264999 kernel: x2apic enabled
Jul 7 06:01:40.265010 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 06:01:40.265021 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 7 06:01:40.265032 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 7 06:01:40.265045 kernel: kvm-guest: setup PV IPIs
Jul 7 06:01:40.265056 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 06:01:40.265068 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 7 06:01:40.265079 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 7 06:01:40.265090 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 06:01:40.265101 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 7 06:01:40.265112 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 7 06:01:40.265122 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 06:01:40.265134 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 06:01:40.265148 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 06:01:40.265159 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 7 06:01:40.265169 kernel: RETBleed: Mitigation: untrained return thunk
Jul 7 06:01:40.265180 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 06:01:40.265196 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 06:01:40.265207 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 7 06:01:40.265219 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 7 06:01:40.265230 kernel: x86/bugs: return thunk changed
Jul 7 06:01:40.265240 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 7 06:01:40.265254 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 06:01:40.265265 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 06:01:40.265276 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 06:01:40.265315 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 06:01:40.265326 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 7 06:01:40.265337 kernel: Freeing SMP alternatives memory: 32K
Jul 7 06:01:40.265348 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:01:40.265358 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 06:01:40.265373 kernel: landlock: Up and running.
Jul 7 06:01:40.265384 kernel: SELinux: Initializing.
Jul 7 06:01:40.265395 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:01:40.265406 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:01:40.265417 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 7 06:01:40.265428 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 7 06:01:40.265439 kernel: ... version: 0
Jul 7 06:01:40.265449 kernel: ... bit width: 48
Jul 7 06:01:40.265460 kernel: ... generic registers: 6
Jul 7 06:01:40.265474 kernel: ... value mask: 0000ffffffffffff
Jul 7 06:01:40.265484 kernel: ... max period: 00007fffffffffff
Jul 7 06:01:40.265495 kernel: ... fixed-purpose events: 0
Jul 7 06:01:40.265505 kernel: ... event mask: 000000000000003f
Jul 7 06:01:40.265515 kernel: signal: max sigframe size: 1776
Jul 7 06:01:40.265526 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:01:40.265537 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:01:40.265552 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 06:01:40.265563 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:01:40.265574 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 06:01:40.265588 kernel: .... node #0, CPUs: #1 #2 #3
Jul 7 06:01:40.265599 kernel: smp: Brought up 1 node, 4 CPUs
Jul 7 06:01:40.265610 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 7 06:01:40.265621 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 137196K reserved, 0K cma-reserved)
Jul 7 06:01:40.265632 kernel: devtmpfs: initialized
Jul 7 06:01:40.265643 kernel: x86/mm: Memory block size: 128MB
Jul 7 06:01:40.265654 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 7 06:01:40.265665 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 7 06:01:40.265676 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jul 7 06:01:40.265691 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 7 06:01:40.265702 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jul 7 06:01:40.265713 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 7 06:01:40.265724 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:01:40.265735 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 7 06:01:40.265746 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:01:40.265757 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:01:40.265768 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:01:40.265782 kernel: audit: type=2000 audit(1751868096.398:1): state=initialized audit_enabled=0 res=1
Jul 7 06:01:40.265792 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:01:40.265803 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 06:01:40.265813 kernel: cpuidle: using governor menu
Jul 7 06:01:40.265833 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:01:40.265843 kernel: dca service started, version 1.12.1
Jul 7 06:01:40.265854 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jul 7 06:01:40.265864 kernel: PCI: Using configuration type 1 for base access
Jul 7 06:01:40.265875 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 06:01:40.265888 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:01:40.265899 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:01:40.265909 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:01:40.265921 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:01:40.265931 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:01:40.265942 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:01:40.265953 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:01:40.265963 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:01:40.265974 kernel: ACPI: Interpreter enabled
Jul 7 06:01:40.265988 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 7 06:01:40.265999 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 06:01:40.266010 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 06:01:40.266021 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 06:01:40.266032 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 7 06:01:40.266043 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 06:01:40.266333 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 06:01:40.266502 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 7 06:01:40.266663 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 7 06:01:40.266679 kernel: PCI host bridge to bus 0000:00
Jul 7 06:01:40.266859 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 06:01:40.267003 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 06:01:40.267152 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 06:01:40.267320 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jul 7 06:01:40.267466 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jul 7 06:01:40.267613 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jul 7 06:01:40.267753 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 06:01:40.267956 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 7 06:01:40.268139 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 7 06:01:40.268343 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jul 7 06:01:40.268509 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jul 7 06:01:40.268672 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 7 06:01:40.268842 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 06:01:40.269029 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 7 06:01:40.269192 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jul 7 06:01:40.269401 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jul 7 06:01:40.269562 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jul 7 06:01:40.269786 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 7 06:01:40.269971 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jul 7 06:01:40.270155 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jul 7 06:01:40.270359 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jul 7 06:01:40.270529 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 7 06:01:40.270674 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jul 7 06:01:40.270827 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jul 7 06:01:40.270966 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jul 7 06:01:40.271108 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jul 7 06:01:40.271270 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 7 06:01:40.271432 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 7 06:01:40.271588 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 7 06:01:40.271725 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jul 7 06:01:40.271872 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jul 7 06:01:40.272057 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 7 06:01:40.272228 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jul 7 06:01:40.272245 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 06:01:40.272257 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 06:01:40.272268 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 06:01:40.272279 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 06:01:40.272326 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 7 06:01:40.272337 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 7 06:01:40.272348 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 7 06:01:40.272365 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 7 06:01:40.272376 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 7 06:01:40.272387 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 7 06:01:40.272398 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 7 06:01:40.272409 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 7 06:01:40.272420 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 7 06:01:40.272431 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 7 06:01:40.272442 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 7 06:01:40.272453 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 7 06:01:40.272467 kernel: iommu: Default domain type: Translated
Jul 7 06:01:40.272478 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 06:01:40.272489 kernel: efivars: Registered efivars operations
Jul 7 06:01:40.272500 kernel: PCI: Using ACPI for IRQ routing
Jul 7 06:01:40.272510 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 06:01:40.272522 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 7 06:01:40.272532 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jul 7 06:01:40.272543 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jul 7 06:01:40.272554 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jul 7 06:01:40.272567 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jul 7 06:01:40.272578 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jul 7 06:01:40.272588 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jul 7 06:01:40.272599 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jul 7 06:01:40.272759 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 7 06:01:40.272930 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 7 06:01:40.273085 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 06:01:40.273104 kernel: vgaarb: loaded
Jul 7 06:01:40.273124 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 7 06:01:40.273139 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 7 06:01:40.273153 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 06:01:40.273167 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:01:40.273181 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:01:40.273195 kernel: pnp: PnP ACPI init
Jul 7 06:01:40.273476 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jul 7 06:01:40.273520 kernel: pnp: PnP ACPI: found 6 devices
Jul 7 06:01:40.273537 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 06:01:40.273548 kernel: NET: Registered PF_INET protocol family
Jul 7 06:01:40.273560 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 06:01:40.273572 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 06:01:40.273584 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:01:40.273596 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 06:01:40.273607 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 06:01:40.273618 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 06:01:40.273633 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:01:40.273644 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:01:40.273655 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:01:40.273666 kernel: NET: Registered PF_XDP protocol family
Jul 7 06:01:40.273847 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jul 7 06:01:40.274008 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jul 7 06:01:40.274209 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 06:01:40.274376 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 06:01:40.274522 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 06:01:40.274672 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 7 06:01:40.274812 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 7 06:01:40.274967 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 7 06:01:40.274983 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:01:40.274995 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 7 06:01:40.275007 kernel: Initialise system trusted keyrings
Jul 7 06:01:40.275018 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 06:01:40.275030 kernel: Key type asymmetric registered
Jul 7 06:01:40.275046 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:01:40.275057 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:01:40.275069 kernel: io scheduler mq-deadline registered
Jul 7 06:01:40.275084 kernel: io scheduler kyber registered
Jul 7 06:01:40.275096 kernel: io scheduler bfq registered
Jul 7 06:01:40.275107 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 06:01:40.275123 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 7 06:01:40.275135 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 7 06:01:40.275147 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 7 06:01:40.275158 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:01:40.275170 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 06:01:40.275182 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 06:01:40.275194 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 06:01:40.275206 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 06:01:40.275405 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 7 06:01:40.275428 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 06:01:40.275582 kernel: rtc_cmos 00:04: registered as rtc0
Jul 7 06:01:40.275734 kernel: rtc_cmos 00:04: setting system clock to 2025-07-07T06:01:39 UTC
(1751868099) Jul 7 06:01:40.275889 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jul 7 06:01:40.275905 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 7 06:01:40.275917 kernel: efifb: probing for efifb Jul 7 06:01:40.275929 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jul 7 06:01:40.275940 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jul 7 06:01:40.275957 kernel: efifb: scrolling: redraw Jul 7 06:01:40.275968 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 7 06:01:40.275980 kernel: Console: switching to colour frame buffer device 160x50 Jul 7 06:01:40.275991 kernel: fb0: EFI VGA frame buffer device Jul 7 06:01:40.276003 kernel: pstore: Using crash dump compression: deflate Jul 7 06:01:40.276015 kernel: pstore: Registered efi_pstore as persistent store backend Jul 7 06:01:40.276026 kernel: NET: Registered PF_INET6 protocol family Jul 7 06:01:40.276038 kernel: Segment Routing with IPv6 Jul 7 06:01:40.276049 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 06:01:40.276064 kernel: NET: Registered PF_PACKET protocol family Jul 7 06:01:40.276076 kernel: Key type dns_resolver registered Jul 7 06:01:40.276087 kernel: IPI shorthand broadcast: enabled Jul 7 06:01:40.276098 kernel: sched_clock: Marking stable (4789005074, 288843011)->(5198658029, -120809944) Jul 7 06:01:40.276109 kernel: registered taskstats version 1 Jul 7 06:01:40.276121 kernel: Loading compiled-in X.509 certificates Jul 7 06:01:40.276132 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e' Jul 7 06:01:40.276144 kernel: Demotion targets for Node 0: null Jul 7 06:01:40.276155 kernel: Key type .fscrypt registered Jul 7 06:01:40.276169 kernel: Key type fscrypt-provisioning registered Jul 7 06:01:40.276181 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 7 06:01:40.276193 kernel: ima: Allocated hash algorithm: sha1 Jul 7 06:01:40.276204 kernel: ima: No architecture policies found Jul 7 06:01:40.276215 kernel: clk: Disabling unused clocks Jul 7 06:01:40.276227 kernel: Warning: unable to open an initial console. Jul 7 06:01:40.276239 kernel: Freeing unused kernel image (initmem) memory: 54432K Jul 7 06:01:40.276251 kernel: Write protecting the kernel read-only data: 24576k Jul 7 06:01:40.276265 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 7 06:01:40.276277 kernel: Run /init as init process Jul 7 06:01:40.276310 kernel: with arguments: Jul 7 06:01:40.276321 kernel: /init Jul 7 06:01:40.276333 kernel: with environment: Jul 7 06:01:40.276343 kernel: HOME=/ Jul 7 06:01:40.276354 kernel: TERM=linux Jul 7 06:01:40.276366 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 06:01:40.276383 systemd[1]: Successfully made /usr/ read-only. Jul 7 06:01:40.276404 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 06:01:40.276417 systemd[1]: Detected virtualization kvm. Jul 7 06:01:40.276429 systemd[1]: Detected architecture x86-64. Jul 7 06:01:40.276441 systemd[1]: Running in initrd. Jul 7 06:01:40.276453 systemd[1]: No hostname configured, using default hostname. Jul 7 06:01:40.276465 systemd[1]: Hostname set to . Jul 7 06:01:40.276477 systemd[1]: Initializing machine ID from VM UUID. Jul 7 06:01:40.276489 systemd[1]: Queued start job for default target initrd.target. Jul 7 06:01:40.276505 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 7 06:01:40.276517 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:01:40.276531 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 06:01:40.276543 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 06:01:40.276556 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 06:01:40.276570 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 06:01:40.276587 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 06:01:40.276599 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 06:01:40.276612 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:01:40.276624 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:01:40.276636 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:01:40.276648 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:01:40.276660 systemd[1]: Reached target swap.target - Swaps. Jul 7 06:01:40.276673 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:01:40.276685 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:01:40.276700 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:01:40.276712 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 06:01:40.276724 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 7 06:01:40.276737 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:01:40.276749 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jul 7 06:01:40.276761 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:01:40.276773 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:01:40.276786 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 06:01:40.276801 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:01:40.276827 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 06:01:40.276838 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 7 06:01:40.276849 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 06:01:40.276861 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:01:40.276874 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:01:40.276886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:01:40.276898 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 06:01:40.276916 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:01:40.276967 systemd-journald[221]: Collecting audit messages is disabled. Jul 7 06:01:40.277000 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 06:01:40.277012 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 06:01:40.277024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:01:40.277037 systemd-journald[221]: Journal started Jul 7 06:01:40.277065 systemd-journald[221]: Runtime Journal (/run/log/journal/e23bf62e9a3748b29bcb50968639db05) is 6M, max 48.5M, 42.4M free. Jul 7 06:01:40.260921 systemd-modules-load[222]: Inserted module 'overlay' Jul 7 06:01:40.314325 systemd[1]: Started systemd-journald.service - Journal Service. 
Jul 7 06:01:40.320196 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 06:01:40.325689 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 06:01:40.325716 kernel: Bridge firewalling registered Jul 7 06:01:40.324342 systemd-modules-load[222]: Inserted module 'br_netfilter' Jul 7 06:01:40.325532 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:01:40.328023 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:01:40.328601 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:01:40.332768 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:01:40.334149 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:01:40.350167 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 7 06:01:40.383201 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:01:40.384461 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:01:40.386245 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:01:40.390393 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:01:40.406731 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:01:40.427477 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 7 06:01:40.465162 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 06:01:40.469714 systemd-resolved[252]: Positive Trust Anchors: Jul 7 06:01:40.469738 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:01:40.469777 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:01:40.473530 systemd-resolved[252]: Defaulting to hostname 'linux'. Jul 7 06:01:40.481226 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:01:40.483967 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:01:40.608355 kernel: SCSI subsystem initialized Jul 7 06:01:40.619320 kernel: Loading iSCSI transport class v2.0-870. Jul 7 06:01:40.631321 kernel: iscsi: registered transport (tcp) Jul 7 06:01:40.655369 kernel: iscsi: registered transport (qla4xxx) Jul 7 06:01:40.655505 kernel: QLogic iSCSI HBA Driver Jul 7 06:01:40.683154 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jul 7 06:01:40.703229 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:01:40.705383 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 06:01:40.778300 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 06:01:40.780699 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 06:01:40.881354 kernel: raid6: avx2x4 gen() 27522 MB/s Jul 7 06:01:40.925351 kernel: raid6: avx2x2 gen() 28330 MB/s Jul 7 06:01:40.979496 kernel: raid6: avx2x1 gen() 23447 MB/s Jul 7 06:01:40.979580 kernel: raid6: using algorithm avx2x2 gen() 28330 MB/s Jul 7 06:01:41.016656 kernel: raid6: .... xor() 16549 MB/s, rmw enabled Jul 7 06:01:41.016753 kernel: raid6: using avx2x2 recovery algorithm Jul 7 06:01:41.047358 kernel: xor: automatically using best checksumming function avx Jul 7 06:01:41.249350 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 06:01:41.259607 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:01:41.262965 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:01:41.307577 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jul 7 06:01:41.315151 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:01:41.316815 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 06:01:41.352255 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Jul 7 06:01:41.395084 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:01:41.398263 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:01:41.495827 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:01:41.498395 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jul 7 06:01:41.550332 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 06:01:41.555421 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 7 06:01:41.555464 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 7 06:01:41.561429 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 7 06:01:41.581949 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:01:41.582276 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:01:41.631684 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 06:01:41.631715 kernel: GPT:9289727 != 19775487 Jul 7 06:01:41.631728 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 06:01:41.631740 kernel: GPT:9289727 != 19775487 Jul 7 06:01:41.631751 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 06:01:41.631763 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 06:01:41.631945 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:01:41.678051 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:01:41.682197 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 06:01:41.687038 kernel: libata version 3.00 loaded. Jul 7 06:01:41.687077 kernel: AES CTR mode by8 optimization enabled Jul 7 06:01:41.693507 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:01:41.694553 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:01:41.723878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 7 06:01:41.795551 kernel: ahci 0000:00:1f.2: version 3.0 Jul 7 06:01:41.795825 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 7 06:01:41.799651 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 7 06:01:41.799872 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 7 06:01:41.800033 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 7 06:01:41.819890 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 7 06:01:41.833469 kernel: scsi host0: ahci Jul 7 06:01:41.833727 kernel: scsi host1: ahci Jul 7 06:01:41.833927 kernel: scsi host2: ahci Jul 7 06:01:41.835347 kernel: scsi host3: ahci Jul 7 06:01:41.836476 kernel: scsi host4: ahci Jul 7 06:01:41.837317 kernel: scsi host5: ahci Jul 7 06:01:41.837645 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jul 7 06:01:41.839554 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jul 7 06:01:41.839582 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jul 7 06:01:41.840550 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jul 7 06:01:41.841551 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jul 7 06:01:41.843502 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jul 7 06:01:41.846219 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 7 06:01:41.857047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:01:41.873623 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 7 06:01:41.874190 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jul 7 06:01:41.884532 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 06:01:41.888332 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 06:01:41.925392 disk-uuid[638]: Primary Header is updated. Jul 7 06:01:41.925392 disk-uuid[638]: Secondary Entries is updated. Jul 7 06:01:41.925392 disk-uuid[638]: Secondary Header is updated. Jul 7 06:01:41.930326 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 06:01:41.936391 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 06:01:42.151335 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 7 06:01:42.151430 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 7 06:01:42.152311 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 7 06:01:42.153335 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 7 06:01:42.154423 kernel: ata3.00: applying bridge limits Jul 7 06:01:42.155315 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 7 06:01:42.155353 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 7 06:01:42.156327 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 7 06:01:42.157330 kernel: ata3.00: configured for UDMA/100 Jul 7 06:01:42.158317 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 7 06:01:42.221336 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 7 06:01:42.221703 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 7 06:01:42.247606 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 7 06:01:42.625567 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 06:01:42.628859 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:01:42.631269 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:01:42.633529 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jul 7 06:01:42.636945 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 06:01:42.671968 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:01:42.953362 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 06:01:42.953470 disk-uuid[639]: The operation has completed successfully. Jul 7 06:01:42.993262 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 06:01:42.993457 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 06:01:43.025809 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 06:01:43.056324 sh[668]: Success Jul 7 06:01:43.079390 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 06:01:43.079470 kernel: device-mapper: uevent: version 1.0.3 Jul 7 06:01:43.080763 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 06:01:43.090317 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 7 06:01:43.127005 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 06:01:43.130384 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 06:01:43.145680 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 06:01:43.156474 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 06:01:43.156570 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (680) Jul 7 06:01:43.158098 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac Jul 7 06:01:43.158135 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:01:43.159139 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 06:01:43.166361 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jul 7 06:01:43.168646 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 06:01:43.170927 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 06:01:43.173602 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 06:01:43.176174 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 06:01:43.238307 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (713) Jul 7 06:01:43.240537 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:01:43.240567 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:01:43.240580 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 06:01:43.255352 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:01:43.256565 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 06:01:43.259825 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 06:01:43.375083 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:01:43.395851 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 7 06:01:43.443681 ignition[766]: Ignition 2.21.0 Jul 7 06:01:43.443700 ignition[766]: Stage: fetch-offline Jul 7 06:01:43.443763 ignition[766]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:01:43.443775 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:01:43.443892 ignition[766]: parsed url from cmdline: "" Jul 7 06:01:43.443897 ignition[766]: no config URL provided Jul 7 06:01:43.443903 ignition[766]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:01:43.443914 ignition[766]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:01:43.443948 ignition[766]: op(1): [started] loading QEMU firmware config module Jul 7 06:01:43.443955 ignition[766]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 7 06:01:43.455767 ignition[766]: op(1): [finished] loading QEMU firmware config module Jul 7 06:01:43.459231 systemd-networkd[855]: lo: Link UP Jul 7 06:01:43.459244 systemd-networkd[855]: lo: Gained carrier Jul 7 06:01:43.461081 systemd-networkd[855]: Enumeration completed Jul 7 06:01:43.461404 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:01:43.462665 systemd[1]: Reached target network.target - Network. Jul 7 06:01:43.463364 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:01:43.463368 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:01:43.463967 systemd-networkd[855]: eth0: Link UP Jul 7 06:01:43.463972 systemd-networkd[855]: eth0: Gained carrier Jul 7 06:01:43.463983 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 7 06:01:43.488369 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 06:01:43.511308 ignition[766]: parsing config with SHA512: 423c64bfac4cc97572fb987f95edde6dd65d4c4e75126cc290c37be9e64ed1c2ee09e90f444060fa8ca2fd0e5d8c99e9ba90d20caedf76a3b84f4b38c7beb472 Jul 7 06:01:43.515732 unknown[766]: fetched base config from "system" Jul 7 06:01:43.515758 unknown[766]: fetched user config from "qemu" Jul 7 06:01:43.516131 ignition[766]: fetch-offline: fetch-offline passed Jul 7 06:01:43.516219 ignition[766]: Ignition finished successfully Jul 7 06:01:43.523917 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:01:43.526747 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 7 06:01:43.529268 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 06:01:43.582758 ignition[863]: Ignition 2.21.0 Jul 7 06:01:43.582773 ignition[863]: Stage: kargs Jul 7 06:01:43.582901 ignition[863]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:01:43.582911 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:01:43.583618 ignition[863]: kargs: kargs passed Jul 7 06:01:43.583666 ignition[863]: Ignition finished successfully Jul 7 06:01:43.592187 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 06:01:43.594674 systemd-resolved[252]: Detected conflict on linux IN A 10.0.0.25 Jul 7 06:01:43.594690 systemd-resolved[252]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Jul 7 06:01:43.597555 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 7 06:01:43.649054 ignition[872]: Ignition 2.21.0 Jul 7 06:01:43.649068 ignition[872]: Stage: disks Jul 7 06:01:43.649257 ignition[872]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:01:43.649268 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:01:43.652581 ignition[872]: disks: disks passed Jul 7 06:01:43.652660 ignition[872]: Ignition finished successfully Jul 7 06:01:43.702267 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 06:01:43.702776 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 06:01:43.704581 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 06:01:43.704920 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:01:43.705234 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:01:43.711224 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:01:43.713030 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 06:01:43.749890 systemd-fsck[882]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 7 06:01:43.783956 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 06:01:43.785827 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 06:01:43.898312 kernel: EXT4-fs (vda9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none. Jul 7 06:01:43.898797 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 06:01:43.899960 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 06:01:43.903159 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:01:43.904749 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 06:01:43.906001 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jul 7 06:01:43.906060 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 06:01:43.906090 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:01:43.925907 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 06:01:43.929968 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 06:01:43.935316 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (890) Jul 7 06:01:43.937657 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:01:43.937681 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:01:43.937692 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 06:01:43.943853 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:01:43.974838 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 06:01:43.980459 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory Jul 7 06:01:43.985696 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 06:01:43.990225 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 06:01:44.092580 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 06:01:44.094520 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 06:01:44.096187 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 06:01:44.137358 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:01:44.161705 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 06:01:44.165482 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jul 7 06:01:44.187251 ignition[1004]: INFO : Ignition 2.21.0 Jul 7 06:01:44.187251 ignition[1004]: INFO : Stage: mount Jul 7 06:01:44.189471 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:01:44.189471 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:01:44.193302 ignition[1004]: INFO : mount: mount passed Jul 7 06:01:44.193302 ignition[1004]: INFO : Ignition finished successfully Jul 7 06:01:44.197393 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 06:01:44.202113 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 06:01:44.239604 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:01:44.284892 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1016) Jul 7 06:01:44.284953 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:01:44.284964 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:01:44.286355 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 06:01:44.290752 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 06:01:44.341108 ignition[1033]: INFO : Ignition 2.21.0
Jul 7 06:01:44.341108 ignition[1033]: INFO : Stage: files
Jul 7 06:01:44.343478 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:01:44.343478 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:01:44.347762 ignition[1033]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 06:01:44.349652 ignition[1033]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 06:01:44.349652 ignition[1033]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 06:01:44.353430 ignition[1033]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 06:01:44.353430 ignition[1033]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 06:01:44.353430 ignition[1033]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 06:01:44.352079 unknown[1033]: wrote ssh authorized keys file for user: core
Jul 7 06:01:44.359983 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 7 06:01:44.359983 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 7 06:01:44.398375 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 06:01:44.616538 systemd-networkd[855]: eth0: Gained IPv6LL
Jul 7 06:01:44.621699 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 7 06:01:44.624053 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 06:01:44.624053 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:01:44.624053 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:01:44.624053 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:01:44.624053 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:01:44.624053 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:01:44.624053 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:01:44.624053 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:01:44.640101 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:01:44.640101 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:01:44.640101 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 06:01:44.640101 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 06:01:44.640101 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 06:01:44.640101 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 7 06:01:45.205446 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 06:01:45.962219 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 06:01:45.962219 ignition[1033]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 06:01:45.966901 ignition[1033]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:01:45.970295 ignition[1033]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:01:45.970295 ignition[1033]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 06:01:45.970295 ignition[1033]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 7 06:01:45.975839 ignition[1033]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:01:45.975839 ignition[1033]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:01:45.975839 ignition[1033]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 7 06:01:45.975839 ignition[1033]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:01:45.993607 ignition[1033]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:01:46.000258 ignition[1033]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:01:46.002041 ignition[1033]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:01:46.002041 ignition[1033]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 06:01:46.002041 ignition[1033]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 06:01:46.002041 ignition[1033]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:01:46.002041 ignition[1033]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:01:46.002041 ignition[1033]: INFO : files: files passed
Jul 7 06:01:46.002041 ignition[1033]: INFO : Ignition finished successfully
Jul 7 06:01:46.006359 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 06:01:46.011122 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 06:01:46.013725 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 06:01:46.027582 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 06:01:46.027944 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 06:01:46.030706 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 7 06:01:46.033197 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:01:46.033197 initrd-setup-root-after-ignition[1064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:01:46.036479 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:01:46.039608 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:01:46.040245 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 06:01:46.041438 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 06:01:46.107786 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:01:46.107951 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:01:46.109301 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:01:46.111814 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:01:46.112165 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 06:01:46.115302 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 06:01:46.135448 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:01:46.140064 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:01:46.175382 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:01:46.175848 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:01:46.176376 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 06:01:46.176883 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 06:01:46.177029 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:01:46.182228 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 06:01:46.182783 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 06:01:46.183148 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 06:01:46.183710 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:01:46.184075 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 06:01:46.184617 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:01:46.185006 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 06:01:46.185392 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:01:46.185945 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 06:01:46.186333 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 06:01:46.186878 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 06:01:46.187219 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 06:01:46.187417 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:01:46.188206 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:01:46.188790 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:01:46.189108 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 06:01:46.189234 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:01:46.214206 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 06:01:46.214474 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:01:46.219035 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 06:01:46.219198 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:01:46.219669 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 06:01:46.222955 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 06:01:46.230449 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:01:46.231001 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 06:01:46.233689 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 06:01:46.235359 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 06:01:46.235474 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:01:46.235866 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 06:01:46.235980 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:01:46.238751 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 06:01:46.238897 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:01:46.240820 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 06:01:46.240955 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 06:01:46.244076 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 06:01:46.244827 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 06:01:46.244978 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:01:46.247999 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 06:01:46.257022 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 06:01:46.257182 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:01:46.259039 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 06:01:46.259145 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:01:46.265573 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 06:01:46.266880 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 06:01:46.288115 ignition[1088]: INFO : Ignition 2.21.0
Jul 7 06:01:46.288115 ignition[1088]: INFO : Stage: umount
Jul 7 06:01:46.290795 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:01:46.290795 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:01:46.295475 ignition[1088]: INFO : umount: umount passed
Jul 7 06:01:46.296539 ignition[1088]: INFO : Ignition finished successfully
Jul 7 06:01:46.299049 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 06:01:46.301000 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 06:01:46.301169 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 06:01:46.302750 systemd[1]: Stopped target network.target - Network.
Jul 7 06:01:46.304605 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 06:01:46.304664 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 06:01:46.306228 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 06:01:46.306305 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 06:01:46.307366 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 06:01:46.307444 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 06:01:46.311751 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 06:01:46.311839 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 06:01:46.314532 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 06:01:46.314891 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 06:01:46.328488 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 06:01:46.328648 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 06:01:46.333549 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 06:01:46.333914 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 06:01:46.333976 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:01:46.340320 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:01:46.340680 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 06:01:46.340862 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 06:01:46.346139 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 06:01:46.346822 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 06:01:46.347349 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 06:01:46.347427 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:01:46.349175 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 06:01:46.353862 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 06:01:46.353971 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:01:46.354816 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:01:46.354881 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:01:46.361049 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 06:01:46.361112 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:01:46.361758 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:01:46.364010 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 06:01:46.379541 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 06:01:46.390617 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:01:46.395161 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 06:01:46.395321 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 06:01:46.396184 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 06:01:46.396243 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:01:46.399091 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 06:01:46.399132 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:01:46.399848 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 06:01:46.399908 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:01:46.400783 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 06:01:46.400837 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:01:46.408917 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:01:46.408998 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:01:46.411148 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 06:01:46.413882 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 06:01:46.413947 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:01:46.418168 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 06:01:46.418242 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:01:46.422302 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:01:46.422370 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:01:46.444952 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 06:01:46.445187 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 06:01:46.866706 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 06:01:46.866886 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 06:01:46.868708 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 06:01:46.869159 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 06:01:46.869235 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 06:01:46.870975 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 06:01:46.899952 systemd[1]: Switching root.
Jul 7 06:01:46.933928 systemd-journald[221]: Journal stopped
Jul 7 06:01:48.885067 systemd-journald[221]: Received SIGTERM from PID 1 (systemd).
Jul 7 06:01:48.885137 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 06:01:48.885152 kernel: SELinux: policy capability open_perms=1
Jul 7 06:01:48.885163 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 06:01:48.885175 kernel: SELinux: policy capability always_check_network=0
Jul 7 06:01:48.885191 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 06:01:48.885205 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 06:01:48.885223 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 06:01:48.885234 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 06:01:48.885246 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 06:01:48.885257 kernel: audit: type=1403 audit(1751868107.929:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 06:01:48.885275 systemd[1]: Successfully loaded SELinux policy in 52.195ms.
Jul 7 06:01:48.885316 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 19.326ms.
Jul 7 06:01:48.885330 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:01:48.885343 systemd[1]: Detected virtualization kvm.
Jul 7 06:01:48.885359 systemd[1]: Detected architecture x86-64.
Jul 7 06:01:48.885374 systemd[1]: Detected first boot.
Jul 7 06:01:48.885390 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:01:48.885406 zram_generator::config[1135]: No configuration found.
Jul 7 06:01:48.885419 kernel: Guest personality initialized and is inactive
Jul 7 06:01:48.885545 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 7 06:01:48.885568 kernel: Initialized host personality
Jul 7 06:01:48.885579 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 06:01:48.885595 systemd[1]: Populated /etc with preset unit settings.
Jul 7 06:01:48.885619 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 06:01:48.885642 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 06:01:48.885662 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 06:01:48.885677 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:01:48.885697 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 06:01:48.885717 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 06:01:48.885729 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 06:01:48.885745 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 06:01:48.885758 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 06:01:48.885781 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 06:01:48.885793 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 06:01:48.885808 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 06:01:48.885821 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:01:48.885837 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:01:48.885852 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 06:01:48.885865 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 06:01:48.885881 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 06:01:48.885975 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:01:48.885991 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 06:01:48.886003 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:01:48.886015 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:01:48.886027 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 06:01:48.886039 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 06:01:48.886051 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:01:48.886063 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 06:01:48.886078 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:01:48.886093 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:01:48.886108 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:01:48.886120 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:01:48.886136 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 06:01:48.886148 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 06:01:48.886163 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 06:01:48.886176 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:01:48.886199 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:01:48.886211 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:01:48.886230 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 06:01:48.886242 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 06:01:48.886258 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 06:01:48.886277 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 06:01:48.886663 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:01:48.886681 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 06:01:48.886697 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 06:01:48.886709 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 06:01:48.886722 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 06:01:48.886739 systemd[1]: Reached target machines.target - Containers.
Jul 7 06:01:48.886751 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 06:01:48.886764 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:01:48.886776 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:01:48.886789 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 06:01:48.886806 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:01:48.886818 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:01:48.886830 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:01:48.886845 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 06:01:48.886857 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:01:48.886870 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 06:01:48.886882 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 06:01:48.886894 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 06:01:48.886906 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 06:01:48.886918 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 06:01:48.886931 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:01:48.886973 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:01:48.886985 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:01:48.886997 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:01:48.887009 kernel: fuse: init (API version 7.41)
Jul 7 06:01:48.887020 kernel: loop: module loaded
Jul 7 06:01:48.887033 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 06:01:48.887045 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 06:01:48.887081 systemd-journald[1208]: Collecting audit messages is disabled.
Jul 7 06:01:48.887113 systemd-journald[1208]: Journal started
Jul 7 06:01:48.887136 systemd-journald[1208]: Runtime Journal (/run/log/journal/e23bf62e9a3748b29bcb50968639db05) is 6M, max 48.5M, 42.4M free.
Jul 7 06:01:48.616737 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 06:01:48.638122 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 7 06:01:48.638856 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 06:01:48.890153 kernel: ACPI: bus type drm_connector registered
Jul 7 06:01:48.890190 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:01:48.893565 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 06:01:48.893603 systemd[1]: Stopped verity-setup.service.
Jul 7 06:01:48.896857 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:01:48.903336 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:01:48.904876 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 06:01:48.906446 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 06:01:48.908132 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 06:01:48.909504 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 06:01:48.911037 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 06:01:48.912481 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 06:01:48.914129 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 06:01:48.915931 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:01:48.917925 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 06:01:48.918213 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 06:01:48.919976 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:01:48.920276 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:01:48.921981 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:01:48.922267 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:01:48.923915 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:01:48.924563 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:01:48.926421 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 06:01:48.926721 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 06:01:48.928486 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:01:48.928879 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:01:48.930693 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:01:48.933008 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:01:48.934910 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 06:01:48.936745 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 06:01:48.953643 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:01:48.959443 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 06:01:48.961842 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 06:01:48.963125 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 06:01:48.963230 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:01:48.965390 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 06:01:48.975692 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 06:01:48.977098 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:01:48.979892 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 06:01:48.985101 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 06:01:48.986768 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:01:48.989364 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 06:01:48.991544 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:01:48.993983 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:01:48.997245 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 06:01:49.008521 systemd-journald[1208]: Time spent on flushing to /var/log/journal/e23bf62e9a3748b29bcb50968639db05 is 37.758ms for 1063 entries.
Jul 7 06:01:49.008521 systemd-journald[1208]: System Journal (/var/log/journal/e23bf62e9a3748b29bcb50968639db05) is 8M, max 195.6M, 187.6M free.
Jul 7 06:01:49.111855 systemd-journald[1208]: Received client request to flush runtime journal.
Jul 7 06:01:49.112183 kernel: loop0: detected capacity change from 0 to 229808
Jul 7 06:01:49.008639 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 06:01:49.016003 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:01:49.017874 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 06:01:49.019335 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 06:01:49.051108 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 06:01:49.053442 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 06:01:49.057472 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 06:01:49.070819 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:01:49.114449 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 06:01:49.115319 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 06:01:49.122306 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 7 06:01:49.131834 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 06:01:49.135949 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:01:49.141335 kernel: loop1: detected capacity change from 0 to 113872
Jul 7 06:01:49.170330 kernel: loop2: detected capacity change from 0 to 146240
Jul 7 06:01:49.180543 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Jul 7 06:01:49.181591 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Jul 7 06:01:49.218169 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:01:49.252320 kernel: loop3: detected capacity change from 0 to 229808
Jul 7 06:01:49.265342 kernel: loop4: detected capacity change from 0 to 113872
Jul 7 06:01:49.283464 kernel: loop5: detected capacity change from 0 to 146240
Jul 7 06:01:49.329691 (sd-merge)[1277]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 7 06:01:49.330625 (sd-merge)[1277]: Merged extensions into '/usr'.
Jul 7 06:01:49.339767 systemd[1]: Reload requested from client PID 1254 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 06:01:49.339799 systemd[1]: Reloading...
Jul 7 06:01:49.407354 zram_generator::config[1300]: No configuration found.
Jul 7 06:01:49.553826 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:01:49.661310 ldconfig[1249]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 06:01:49.677961 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 06:01:49.678322 systemd[1]: Reloading finished in 337 ms.
Jul 7 06:01:49.698351 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 06:01:49.700046 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 06:01:49.719941 systemd[1]: Starting ensure-sysext.service...
Jul 7 06:01:49.721985 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:01:49.741177 systemd[1]: Reload requested from client PID 1340 ('systemctl') (unit ensure-sysext.service)...
Jul 7 06:01:49.741197 systemd[1]: Reloading...
Jul 7 06:01:49.756087 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 7 06:01:49.756130 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 7 06:01:49.756655 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 06:01:49.756923 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 06:01:49.758364 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 06:01:49.758729 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
Jul 7 06:01:49.758859 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
Jul 7 06:01:49.764727 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:01:49.764869 systemd-tmpfiles[1341]: Skipping /boot
Jul 7 06:01:49.789448 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:01:49.789621 systemd-tmpfiles[1341]: Skipping /boot
Jul 7 06:01:49.853335 zram_generator::config[1371]: No configuration found.
Jul 7 06:01:49.973144 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:01:50.075866 systemd[1]: Reloading finished in 334 ms.
Jul 7 06:01:50.107415 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:01:50.115972 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:01:50.120183 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 06:01:50.123447 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 06:01:50.136321 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:01:50.143066 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 06:01:50.145842 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 06:01:50.156414 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:01:50.156617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:01:50.159511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:01:50.162505 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:01:50.165537 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:01:50.167507 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:01:50.167691 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:01:50.174915 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:01:50.179318 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 06:01:50.181379 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:01:50.184440 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 06:01:50.186971 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:01:50.187997 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:01:50.190076 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:01:50.190729 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:01:50.193168 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:01:50.193960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:01:50.198759 augenrules[1436]: No rules
Jul 7 06:01:50.202066 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:01:50.202728 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:01:50.212166 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 06:01:50.222025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:01:50.224163 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:01:50.226530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:01:50.230523 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:01:50.237856 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:01:50.238253 systemd-udevd[1431]: Using default interface naming scheme 'v255'.
Jul 7 06:01:50.242570 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:01:50.255032 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:01:50.256763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:01:50.257173 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:01:50.260279 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 06:01:50.262159 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:01:50.266738 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:01:50.267068 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:01:50.269527 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:01:50.270417 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:01:50.272856 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:01:50.280817 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:01:50.283275 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 06:01:50.284067 augenrules[1447]: /sbin/augenrules: No change
Jul 7 06:01:50.286220 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:01:50.287061 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:01:50.289259 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 06:01:50.294900 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 06:01:50.301632 augenrules[1475]: No rules
Jul 7 06:01:50.303336 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:01:50.303869 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:01:50.314427 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:01:50.316612 systemd[1]: Finished ensure-sysext.service.
Jul 7 06:01:50.328328 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:01:50.331209 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:01:50.331352 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:01:50.336719 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 06:01:50.338363 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 06:01:50.422774 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 06:01:50.513681 systemd-resolved[1415]: Positive Trust Anchors:
Jul 7 06:01:50.513715 systemd-resolved[1415]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:01:50.513770 systemd-resolved[1415]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:01:50.523107 systemd-resolved[1415]: Defaulting to hostname 'linux'.
Jul 7 06:01:50.533339 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:01:50.535121 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:01:50.537438 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:01:50.540634 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 06:01:50.595069 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 06:01:50.626525 systemd-networkd[1503]: lo: Link UP
Jul 7 06:01:50.626983 systemd-networkd[1503]: lo: Gained carrier
Jul 7 06:01:50.632332 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 06:01:50.633738 systemd-networkd[1503]: Enumeration completed
Jul 7 06:01:50.633947 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:01:50.635362 systemd-networkd[1503]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:01:50.635376 systemd-networkd[1503]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:01:50.636735 systemd-networkd[1503]: eth0: Link UP
Jul 7 06:01:50.636950 systemd[1]: Reached target network.target - Network.
Jul 7 06:01:50.637023 systemd-networkd[1503]: eth0: Gained carrier
Jul 7 06:01:50.637042 systemd-networkd[1503]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:01:50.641315 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 7 06:01:50.644930 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 06:01:50.648409 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 7 06:01:50.650347 systemd-networkd[1503]: eth0: DHCPv4 address 10.0.0.25/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 06:01:50.652344 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 7 06:01:50.653039 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:01:50.653348 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection.
Jul 7 06:01:50.655321 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 06:01:50.656327 kernel: ACPI: button: Power Button [PWRF]
Jul 7 06:01:52.138299 systemd-timesyncd[1510]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 7 06:01:52.138372 systemd-timesyncd[1510]: Initial clock synchronization to Mon 2025-07-07 06:01:52.138182 UTC.
Jul 7 06:01:52.138461 systemd-resolved[1415]: Clock change detected. Flushing caches.
Jul 7 06:01:52.139192 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 06:01:52.140625 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 7 06:01:52.142093 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 06:01:52.159148 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 06:01:52.159470 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:01:52.160662 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 06:01:52.162164 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 06:01:52.165016 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 06:01:52.166706 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:01:52.172546 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 06:01:52.176689 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 06:01:52.195356 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 7 06:01:52.197276 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 7 06:01:52.199889 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 7 06:01:52.216172 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 06:01:52.223135 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jul 7 06:01:52.223582 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 7 06:01:52.225829 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 7 06:01:52.226046 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 7 06:01:52.229082 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 06:01:52.232039 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:01:52.235075 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:01:52.236422 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:01:52.236467 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:01:52.243134 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 06:01:52.268317 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 06:01:52.273731 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 06:01:52.277729 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 06:01:52.281854 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 06:01:52.282466 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 06:01:52.286344 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 7 06:01:52.292473 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 06:01:52.302900 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 06:01:52.309226 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 06:01:52.314837 jq[1553]: false
Jul 7 06:01:52.314869 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 06:01:52.331162 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 06:01:52.333980 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 06:01:52.335927 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 06:01:52.336981 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 06:01:52.340614 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 06:01:52.343465 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 7 06:01:52.354403 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 06:01:52.363532 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing passwd entry cache
Jul 7 06:01:52.365217 oslogin_cache_refresh[1555]: Refreshing passwd entry cache
Jul 7 06:01:52.366184 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 06:01:52.366708 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 06:01:52.367263 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 06:01:52.367663 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 06:01:52.373267 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 06:01:52.378154 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 06:01:52.379456 update_engine[1566]: I20250707 06:01:52.379298 1566 main.cc:92] Flatcar Update Engine starting
Jul 7 06:01:52.394161 jq[1567]: true
Jul 7 06:01:52.394925 extend-filesystems[1554]: Found /dev/vda6
Jul 7 06:01:52.398409 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting users, quitting
Jul 7 06:01:52.398409 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:01:52.398409 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing group entry cache
Jul 7 06:01:52.396826 oslogin_cache_refresh[1555]: Failure getting users, quitting
Jul 7 06:01:52.396859 oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:01:52.396945 oslogin_cache_refresh[1555]: Refreshing group entry cache
Jul 7 06:01:52.406383 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting groups, quitting
Jul 7 06:01:52.406383 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:01:52.405324 oslogin_cache_refresh[1555]: Failure getting groups, quitting
Jul 7 06:01:52.405343 oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:01:52.411327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:01:52.418394 extend-filesystems[1554]: Found /dev/vda9
Jul 7 06:01:52.414049 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 7 06:01:52.414430 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 7 06:01:52.421545 extend-filesystems[1554]: Checking size of /dev/vda9
Jul 7 06:01:52.427342 (ntainerd)[1584]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 06:01:52.434641 jq[1588]: true
Jul 7 06:01:52.453309 dbus-daemon[1551]: [system] SELinux support is enabled
Jul 7 06:01:52.453573 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 06:01:52.484142 tar[1575]: linux-amd64/LICENSE
Jul 7 06:01:52.484142 tar[1575]: linux-amd64/helm
Jul 7 06:01:52.484576 extend-filesystems[1554]: Resized partition /dev/vda9
Jul 7 06:01:52.459704 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 06:01:52.485759 update_engine[1566]: I20250707 06:01:52.458331 1566 update_check_scheduler.cc:74] Next update check in 5m22s
Jul 7 06:01:52.460144 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 06:01:52.498951 extend-filesystems[1601]: resize2fs 1.47.2 (1-Jan-2025)
Jul 7 06:01:52.462507 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 06:01:52.462747 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 06:01:52.489587 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 06:01:52.535520 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 7 06:01:52.580838 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 7 06:01:52.609427 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 06:01:52.638454 bash[1615]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 06:01:52.638640 extend-filesystems[1601]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 7 06:01:52.638640 extend-filesystems[1601]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 7 06:01:52.638640 extend-filesystems[1601]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 7 06:01:52.618322 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 06:01:52.655513 extend-filesystems[1554]: Resized filesystem in /dev/vda9
Jul 7 06:01:52.619014 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 06:01:52.631043 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 06:01:52.658521 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 7 06:01:52.673315 kernel: kvm_amd: TSC scaling supported
Jul 7 06:01:52.673399 kernel: kvm_amd: Nested Virtualization enabled
Jul 7 06:01:52.673416 kernel: kvm_amd: Nested Paging enabled
Jul 7 06:01:52.673470 kernel: kvm_amd: LBR virtualization supported
Jul 7 06:01:52.682832 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 7 06:01:52.682967 kernel: kvm_amd: Virtual GIF supported
Jul 7 06:01:52.739727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:01:52.741068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:01:52.756028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:01:52.855823 systemd-logind[1565]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 7 06:01:52.855867 systemd-logind[1565]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 7 06:01:52.907150 systemd-logind[1565]: New seat seat0.
Jul 7 06:01:52.908165 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 06:01:52.915709 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 06:01:52.933446 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:01:52.966844 kernel: EDAC MC: Ver: 3.0.0
Jul 7 06:01:53.031608 sshd_keygen[1583]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 7 06:01:53.064197 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 7 06:01:53.067578 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 7 06:01:53.100319 systemd[1]: issuegen.service: Deactivated successfully.
Jul 7 06:01:53.100733 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 7 06:01:53.104818 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 7 06:01:53.144992 containerd[1584]: time="2025-07-07T06:01:53Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 7 06:01:53.148765 containerd[1584]: time="2025-07-07T06:01:53.148713606Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 7 06:01:53.154344 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 7 06:01:53.179109 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 7 06:01:53.182593 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 7 06:01:53.184421 systemd[1]: Reached target getty.target - Login Prompts.
Jul 7 06:01:53.193112 containerd[1584]: time="2025-07-07T06:01:53.193023493Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.569µs"
Jul 7 06:01:53.193112 containerd[1584]: time="2025-07-07T06:01:53.193099155Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 7 06:01:53.193222 containerd[1584]: time="2025-07-07T06:01:53.193126897Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 7 06:01:53.193497 containerd[1584]: time="2025-07-07T06:01:53.193456866Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 7 06:01:53.193497 containerd[1584]: time="2025-07-07T06:01:53.193487303Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 7 06:01:53.193558 containerd[1584]: time="2025-07-07T06:01:53.193534381Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:01:53.193689 containerd[1584]: time="2025-07-07T06:01:53.193662501Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:01:53.193689 containerd[1584]: time="2025-07-07T06:01:53.193688741Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:01:53.194234 containerd[1584]: time="2025-07-07T06:01:53.194144816Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:01:53.194234 containerd[1584]: time="2025-07-07T06:01:53.194222512Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:01:53.194508 containerd[1584]: time="2025-07-07T06:01:53.194243741Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:01:53.194508 containerd[1584]: time="2025-07-07T06:01:53.194256736Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 7 06:01:53.194508 containerd[1584]: time="2025-07-07T06:01:53.194412829Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 7 06:01:53.194761 containerd[1584]: time="2025-07-07T06:01:53.194733430Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:01:53.194786 containerd[1584]: time="2025-07-07T06:01:53.194770710Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:01:53.194786 containerd[1584]: time="2025-07-07T06:01:53.194780949Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 7 06:01:53.194852 containerd[1584]: time="2025-07-07T06:01:53.194837204Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 7 06:01:53.195128 containerd[1584]: time="2025-07-07T06:01:53.195093755Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 7 06:01:53.195228 containerd[1584]: time="2025-07-07T06:01:53.195188203Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 06:01:53.294905 containerd[1584]: time="2025-07-07T06:01:53.294755968Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 7 06:01:53.295076 containerd[1584]: time="2025-07-07T06:01:53.294928281Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 7 06:01:53.295076 containerd[1584]: time="2025-07-07T06:01:53.294954199Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 7 06:01:53.295076 containerd[1584]: time="2025-07-07T06:01:53.294974367Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 7 06:01:53.295076 containerd[1584]: time="2025-07-07T06:01:53.294994806Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 7 06:01:53.295076 containerd[1584]: time="2025-07-07T06:01:53.295011337Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 7 06:01:53.295076 containerd[1584]: time="2025-07-07T06:01:53.295025804Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 7 06:01:53.295076 containerd[1584]: time="2025-07-07T06:01:53.295040381Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 7 06:01:53.295076 containerd[1584]: time="2025-07-07T06:01:53.295067071Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 7 06:01:53.295076 containerd[1584]: time="2025-07-07T06:01:53.295081238Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 7 06:01:53.295317 containerd[1584]: time="2025-07-07T06:01:53.295095525Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 7 06:01:53.295317 containerd[1584]: time="2025-07-07T06:01:53.295114781Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 7 06:01:53.295753 containerd[1584]: time="2025-07-07T06:01:53.295699026Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 7 06:01:53.295753 containerd[1584]: time="2025-07-07T06:01:53.295744962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 7 06:01:53.295858 containerd[1584]: time="2025-07-07T06:01:53.295769669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 7 06:01:53.295858 containerd[1584]: time="2025-07-07T06:01:53.295817779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 7 06:01:53.295858 containerd[1584]: time="2025-07-07T06:01:53.295837245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 7 06:01:53.295858 containerd[1584]: time="2025-07-07T06:01:53.295854879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 7 06:01:53.295971 containerd[1584]: time="2025-07-07T06:01:53.295873313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 7 06:01:53.295971 containerd[1584]: time="2025-07-07T06:01:53.295889734Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 7 06:01:53.295971 containerd[1584]: time="2025-07-07T06:01:53.295907016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 7 06:01:53.295971 containerd[1584]: time="2025-07-07T06:01:53.295924549Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 7 06:01:53.295971 containerd[1584]: time="2025-07-07T06:01:53.295941541Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 7 06:01:53.296113 containerd[1584]: time="2025-07-07T06:01:53.296078528Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 7 06:01:53.296113 containerd[1584]: time="2025-07-07T06:01:53.296104427Z" level=info msg="Start snapshots syncer"
Jul 7 06:01:53.296164 containerd[1584]: time="2025-07-07T06:01:53.296151785Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 7 06:01:53.296603 containerd[1584]: time="2025-07-07T06:01:53.296537479Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 7 06:01:53.296835 containerd[1584]: time="2025-07-07T06:01:53.296653777Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 7 06:01:53.296835 containerd[1584]: time="2025-07-07T06:01:53.296774163Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 7 06:01:53.296977 containerd[1584]: time="2025-07-07T06:01:53.296947728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 7 06:01:53.297009 containerd[1584]: time="2025-07-07T06:01:53.296983465Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 7 06:01:53.297009 containerd[1584]: time="2025-07-07T06:01:53.297000016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 7 06:01:53.297088 containerd[1584]: time="2025-07-07T06:01:53.297017870Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 7 06:01:53.297088 containerd[1584]: time="2025-07-07T06:01:53.297035583Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 7 06:01:53.297088 containerd[1584]: time="2025-07-07T06:01:53.297062764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 7 06:01:53.297088 containerd[1584]: time="2025-07-07T06:01:53.297078664Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 7 06:01:53.297183 containerd[1584]: time="2025-07-07T06:01:53.297113178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 7 06:01:53.297183 containerd[1584]: time="2025-07-07T06:01:53.297129319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 7 06:01:53.297183 containerd[1584]: time="2025-07-07T06:01:53.297144938Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 7 06:01:53.299352 containerd[1584]: time="2025-07-07T06:01:53.299306401Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:01:53.299412 containerd[1584]: time="2025-07-07T06:01:53.299359371Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:01:53.299412 containerd[1584]: time="2025-07-07T06:01:53.299372966Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:01:53.299412 containerd[1584]: time="2025-07-07T06:01:53.299386902Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:01:53.299412 containerd[1584]: time="2025-07-07T06:01:53.299398705Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 7 06:01:53.299412 containerd[1584]: time="2025-07-07T06:01:53.299412921Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 7 06:01:53.299556 containerd[1584]: time="2025-07-07T06:01:53.299429232Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 7 06:01:53.299556 containerd[1584]: time="2025-07-07T06:01:53.299464979Z" level=info msg="runtime interface created"
Jul 7 06:01:53.299556 containerd[1584]: time="2025-07-07T06:01:53.299514662Z" level=info msg="created NRI interface"
Jul 7 06:01:53.299556 containerd[1584]: time="2025-07-07T06:01:53.299544538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 7 06:01:53.299656 containerd[1584]: time="2025-07-07T06:01:53.299567561Z" level=info msg="Connect containerd service"
Jul 7 06:01:53.299656 containerd[1584]: time="2025-07-07T06:01:53.299617765Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 7 06:01:53.303145 containerd[1584]: time="2025-07-07T06:01:53.303084877Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 06:01:53.373387 tar[1575]: linux-amd64/README.md
Jul 7 06:01:53.394925 systemd-networkd[1503]: eth0: Gained IPv6LL
Jul 7 06:01:53.396655 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 7 06:01:53.398764 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 06:01:53.402147 systemd[1]: Reached target network-online.target - Network is Online.
Jul 7 06:01:53.405109 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 7 06:01:53.407711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:01:53.411964 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 7 06:01:53.442140 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 7 06:01:53.442431 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 7 06:01:53.444606 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 7 06:01:53.447232 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 06:01:53.488521 containerd[1584]: time="2025-07-07T06:01:53.488449482Z" level=info msg="Start subscribing containerd event"
Jul 7 06:01:53.488521 containerd[1584]: time="2025-07-07T06:01:53.488525144Z" level=info msg="Start recovering state"
Jul 7 06:01:53.488727 containerd[1584]: time="2025-07-07T06:01:53.488668753Z" level=info msg="Start event monitor"
Jul 7 06:01:53.488727 containerd[1584]: time="2025-07-07T06:01:53.488689382Z" level=info msg="Start cni network conf syncer for default"
Jul 7 06:01:53.488727 containerd[1584]: time="2025-07-07T06:01:53.488698379Z" level=info msg="Start streaming server"
Jul 7 06:01:53.488727 containerd[1584]: time="2025-07-07T06:01:53.488716703Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 7 06:01:53.488727 containerd[1584]: time="2025-07-07T06:01:53.488727083Z" level=info msg="runtime interface starting up..."
Jul 7 06:01:53.488891 containerd[1584]: time="2025-07-07T06:01:53.488734366Z" level=info msg="starting plugins..."
Jul 7 06:01:53.488891 containerd[1584]: time="2025-07-07T06:01:53.488754935Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 7 06:01:53.488891 containerd[1584]: time="2025-07-07T06:01:53.488848821Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 7 06:01:53.488969 containerd[1584]: time="2025-07-07T06:01:53.488923962Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 7 06:01:53.489159 systemd[1]: Started containerd.service - containerd container runtime.
Jul 7 06:01:53.490169 containerd[1584]: time="2025-07-07T06:01:53.490140403Z" level=info msg="containerd successfully booted in 0.348642s"
Jul 7 06:01:55.181677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:01:55.184229 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 7 06:01:55.186028 systemd[1]: Startup finished in 4.900s (kernel) + 8.027s (initrd) + 5.825s (userspace) = 18.753s.
Jul 7 06:01:55.199420 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:01:55.727582 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 7 06:01:55.729216 systemd[1]: Started sshd@0-10.0.0.25:22-10.0.0.1:48424.service - OpenSSH per-connection server daemon (10.0.0.1:48424).
Jul 7 06:01:55.806932 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 48424 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:01:55.809369 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:01:55.818426 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 7 06:01:55.819964 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 7 06:01:55.829445 systemd-logind[1565]: New session 1 of user core.
Jul 7 06:01:55.856572 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 7 06:01:55.861098 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 7 06:01:55.888890 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 7 06:01:55.892220 systemd-logind[1565]: New session c1 of user core.
Jul 7 06:01:56.129578 systemd[1715]: Queued start job for default target default.target.
Jul 7 06:01:56.141213 systemd[1715]: Created slice app.slice - User Application Slice.
Jul 7 06:01:56.141243 systemd[1715]: Reached target paths.target - Paths.
Jul 7 06:01:56.141290 systemd[1715]: Reached target timers.target - Timers.
Jul 7 06:01:56.143107 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 7 06:01:56.155871 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 7 06:01:56.156014 systemd[1715]: Reached target sockets.target - Sockets.
Jul 7 06:01:56.156055 systemd[1715]: Reached target basic.target - Basic System.
Jul 7 06:01:56.156097 systemd[1715]: Reached target default.target - Main User Target.
Jul 7 06:01:56.156140 systemd[1715]: Startup finished in 256ms.
Jul 7 06:01:56.157099 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 7 06:01:56.158852 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 7 06:01:56.230019 systemd[1]: Started sshd@1-10.0.0.25:22-10.0.0.1:48426.service - OpenSSH per-connection server daemon (10.0.0.1:48426).
Jul 7 06:01:56.287929 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 48426 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:01:56.290409 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:01:56.299728 systemd-logind[1565]: New session 2 of user core.
Jul 7 06:01:56.309050 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 7 06:01:56.356630 kubelet[1700]: E0707 06:01:56.356529 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:01:56.360938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:01:56.361197 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:01:56.361614 systemd[1]: kubelet.service: Consumed 2.179s CPU time, 272.1M memory peak.
Jul 7 06:01:56.380297 sshd[1729]: Connection closed by 10.0.0.1 port 48426
Jul 7 06:01:56.380618 sshd-session[1727]: pam_unix(sshd:session): session closed for user core
Jul 7 06:01:56.393197 systemd[1]: sshd@1-10.0.0.25:22-10.0.0.1:48426.service: Deactivated successfully.
Jul 7 06:01:56.395212 systemd[1]: session-2.scope: Deactivated successfully.
Jul 7 06:01:56.396068 systemd-logind[1565]: Session 2 logged out. Waiting for processes to exit.
Jul 7 06:01:56.399271 systemd[1]: Started sshd@2-10.0.0.25:22-10.0.0.1:48432.service - OpenSSH per-connection server daemon (10.0.0.1:48432).
Jul 7 06:01:56.400080 systemd-logind[1565]: Removed session 2.
Jul 7 06:01:56.468401 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 48432 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:01:56.469816 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:01:56.474373 systemd-logind[1565]: New session 3 of user core.
Jul 7 06:01:56.487933 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 7 06:01:56.538189 sshd[1739]: Connection closed by 10.0.0.1 port 48432
Jul 7 06:01:56.538575 sshd-session[1736]: pam_unix(sshd:session): session closed for user core
Jul 7 06:01:56.552862 systemd[1]: sshd@2-10.0.0.25:22-10.0.0.1:48432.service: Deactivated successfully.
Jul 7 06:01:56.554898 systemd[1]: session-3.scope: Deactivated successfully.
Jul 7 06:01:56.555768 systemd-logind[1565]: Session 3 logged out. Waiting for processes to exit.
Jul 7 06:01:56.559276 systemd[1]: Started sshd@3-10.0.0.25:22-10.0.0.1:48438.service - OpenSSH per-connection server daemon (10.0.0.1:48438).
Jul 7 06:01:56.559957 systemd-logind[1565]: Removed session 3.
Jul 7 06:01:56.617529 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 48438 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:01:56.619198 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:01:56.624228 systemd-logind[1565]: New session 4 of user core.
Jul 7 06:01:56.634054 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 7 06:01:56.688010 sshd[1747]: Connection closed by 10.0.0.1 port 48438
Jul 7 06:01:56.688384 sshd-session[1745]: pam_unix(sshd:session): session closed for user core
Jul 7 06:01:56.697309 systemd[1]: sshd@3-10.0.0.25:22-10.0.0.1:48438.service: Deactivated successfully.
Jul 7 06:01:56.699203 systemd[1]: session-4.scope: Deactivated successfully.
Jul 7 06:01:56.700089 systemd-logind[1565]: Session 4 logged out. Waiting for processes to exit.
Jul 7 06:01:56.703363 systemd[1]: Started sshd@4-10.0.0.25:22-10.0.0.1:48444.service - OpenSSH per-connection server daemon (10.0.0.1:48444).
Jul 7 06:01:56.703970 systemd-logind[1565]: Removed session 4.
Jul 7 06:01:56.766772 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 48444 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:01:56.768613 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:01:56.773703 systemd-logind[1565]: New session 5 of user core.
Jul 7 06:01:56.784004 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 06:01:56.843703 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 06:01:56.844046 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:01:56.861674 sudo[1756]: pam_unix(sudo:session): session closed for user root
Jul 7 06:01:56.863629 sshd[1755]: Connection closed by 10.0.0.1 port 48444
Jul 7 06:01:56.864119 sshd-session[1753]: pam_unix(sshd:session): session closed for user core
Jul 7 06:01:56.879125 systemd[1]: sshd@4-10.0.0.25:22-10.0.0.1:48444.service: Deactivated successfully.
Jul 7 06:01:56.881003 systemd[1]: session-5.scope: Deactivated successfully.
Jul 7 06:01:56.881826 systemd-logind[1565]: Session 5 logged out. Waiting for processes to exit.
Jul 7 06:01:56.884628 systemd[1]: Started sshd@5-10.0.0.25:22-10.0.0.1:48446.service - OpenSSH per-connection server daemon (10.0.0.1:48446).
Jul 7 06:01:56.885503 systemd-logind[1565]: Removed session 5.
Jul 7 06:01:56.957704 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 48446 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:01:56.960347 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:01:56.965777 systemd-logind[1565]: New session 6 of user core.
Jul 7 06:01:56.980109 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 06:01:57.036639 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 06:01:57.037097 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:01:57.049077 sudo[1766]: pam_unix(sudo:session): session closed for user root
Jul 7 06:01:57.056757 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 7 06:01:57.057129 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:01:57.068720 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:01:57.128787 augenrules[1788]: No rules
Jul 7 06:01:57.129959 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:01:57.130338 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:01:57.131617 sudo[1765]: pam_unix(sudo:session): session closed for user root
Jul 7 06:01:57.133627 sshd[1764]: Connection closed by 10.0.0.1 port 48446
Jul 7 06:01:57.134019 sshd-session[1762]: pam_unix(sshd:session): session closed for user core
Jul 7 06:01:57.144765 systemd[1]: sshd@5-10.0.0.25:22-10.0.0.1:48446.service: Deactivated successfully.
Jul 7 06:01:57.147239 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 06:01:57.148099 systemd-logind[1565]: Session 6 logged out. Waiting for processes to exit.
Jul 7 06:01:57.150997 systemd[1]: Started sshd@6-10.0.0.25:22-10.0.0.1:48450.service - OpenSSH per-connection server daemon (10.0.0.1:48450).
Jul 7 06:01:57.151601 systemd-logind[1565]: Removed session 6.
Jul 7 06:01:57.201115 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 48450 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:01:57.202591 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:01:57.207442 systemd-logind[1565]: New session 7 of user core.
Jul 7 06:01:57.220906 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 06:01:57.275107 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 06:01:57.275427 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:01:57.943014 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 06:01:57.973425 (dockerd)[1821]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 06:01:58.466521 dockerd[1821]: time="2025-07-07T06:01:58.466435884Z" level=info msg="Starting up"
Jul 7 06:01:58.467524 dockerd[1821]: time="2025-07-07T06:01:58.467491353Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 7 06:02:00.826413 dockerd[1821]: time="2025-07-07T06:02:00.826326361Z" level=info msg="Loading containers: start."
Jul 7 06:02:00.857828 kernel: Initializing XFRM netlink socket
Jul 7 06:02:01.454019 systemd-networkd[1503]: docker0: Link UP
Jul 7 06:02:01.560287 dockerd[1821]: time="2025-07-07T06:02:01.560199024Z" level=info msg="Loading containers: done."
Jul 7 06:02:01.583526 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2458023937-merged.mount: Deactivated successfully.
Jul 7 06:02:01.609613 dockerd[1821]: time="2025-07-07T06:02:01.609527572Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 06:02:01.609834 dockerd[1821]: time="2025-07-07T06:02:01.609676522Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 7 06:02:01.609968 dockerd[1821]: time="2025-07-07T06:02:01.609923385Z" level=info msg="Initializing buildkit"
Jul 7 06:02:02.044820 dockerd[1821]: time="2025-07-07T06:02:02.044736925Z" level=info msg="Completed buildkit initialization"
Jul 7 06:02:02.053988 dockerd[1821]: time="2025-07-07T06:02:02.053889460Z" level=info msg="Daemon has completed initialization"
Jul 7 06:02:02.054167 dockerd[1821]: time="2025-07-07T06:02:02.054020345Z" level=info msg="API listen on /run/docker.sock"
Jul 7 06:02:02.054267 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 06:02:03.707148 containerd[1584]: time="2025-07-07T06:02:03.707056979Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 7 06:02:06.191608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1369446365.mount: Deactivated successfully.
Jul 7 06:02:06.611601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:02:06.613615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:02:07.041136 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:02:07.046091 (kubelet)[2047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:02:07.662066 kubelet[2047]: E0707 06:02:07.661926 2047 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:02:07.670778 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:02:07.671020 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:02:07.671425 systemd[1]: kubelet.service: Consumed 314ms CPU time, 109.2M memory peak.
Jul 7 06:02:11.868276 containerd[1584]: time="2025-07-07T06:02:11.868184676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:11.869438 containerd[1584]: time="2025-07-07T06:02:11.869394896Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099"
Jul 7 06:02:11.870929 containerd[1584]: time="2025-07-07T06:02:11.870883397Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:11.873912 containerd[1584]: time="2025-07-07T06:02:11.873846705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:11.874989 containerd[1584]: time="2025-07-07T06:02:11.874951006Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 8.167835587s"
Jul 7 06:02:11.875058 containerd[1584]: time="2025-07-07T06:02:11.874990199Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 7 06:02:11.876111 containerd[1584]: time="2025-07-07T06:02:11.876043164Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 7 06:02:14.914837 containerd[1584]: time="2025-07-07T06:02:14.914717393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:14.916069 containerd[1584]: time="2025-07-07T06:02:14.915967487Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946"
Jul 7 06:02:14.917941 containerd[1584]: time="2025-07-07T06:02:14.917873612Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:14.922810 containerd[1584]: time="2025-07-07T06:02:14.922692128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:14.923922 containerd[1584]: time="2025-07-07T06:02:14.923868394Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 3.047786678s"
Jul 7 06:02:14.923922 containerd[1584]: time="2025-07-07T06:02:14.923914350Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 7 06:02:14.925033 containerd[1584]: time="2025-07-07T06:02:14.924986481Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 7 06:02:16.776372 containerd[1584]: time="2025-07-07T06:02:16.776292150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:16.873802 containerd[1584]: time="2025-07-07T06:02:16.873668115Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055"
Jul 7 06:02:16.918505 containerd[1584]: time="2025-07-07T06:02:16.918421413Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:16.925871 containerd[1584]: time="2025-07-07T06:02:16.925816521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:16.926929 containerd[1584]: time="2025-07-07T06:02:16.926870568Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 2.001847057s"
Jul 7 06:02:16.926929 containerd[1584]: time="2025-07-07T06:02:16.926909050Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 7 06:02:16.927605 containerd[1584]: time="2025-07-07T06:02:16.927534924Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 7 06:02:17.922118 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 7 06:02:17.924950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:02:18.219831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:02:18.225079 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:02:18.574013 kubelet[2121]: E0707 06:02:18.573821 2121 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:02:18.578021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:02:18.578228 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:02:18.578656 systemd[1]: kubelet.service: Consumed 253ms CPU time, 111.2M memory peak.
Jul 7 06:02:19.281728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1726884585.mount: Deactivated successfully.
Jul 7 06:02:21.220009 containerd[1584]: time="2025-07-07T06:02:21.219827144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:21.233808 containerd[1584]: time="2025-07-07T06:02:21.233684682Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 7 06:02:21.247320 containerd[1584]: time="2025-07-07T06:02:21.247232529Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:21.255728 containerd[1584]: time="2025-07-07T06:02:21.255558453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:21.256464 containerd[1584]: time="2025-07-07T06:02:21.256379743Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 4.328771111s" Jul 7 06:02:21.256464 containerd[1584]: time="2025-07-07T06:02:21.256447170Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 7 06:02:21.257051 containerd[1584]: time="2025-07-07T06:02:21.257003784Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 7 06:02:22.827851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3383900350.mount: Deactivated successfully. 
Jul 7 06:02:24.776628 containerd[1584]: time="2025-07-07T06:02:24.776535061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:24.778065 containerd[1584]: time="2025-07-07T06:02:24.778000329Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 7 06:02:24.779839 containerd[1584]: time="2025-07-07T06:02:24.779747846Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:24.783729 containerd[1584]: time="2025-07-07T06:02:24.783647229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:24.784980 containerd[1584]: time="2025-07-07T06:02:24.784890290Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.527831924s" Jul 7 06:02:24.784980 containerd[1584]: time="2025-07-07T06:02:24.784951274Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 7 06:02:24.785506 containerd[1584]: time="2025-07-07T06:02:24.785473013Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:02:26.031933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2714047962.mount: Deactivated successfully. 
Jul 7 06:02:26.358772 containerd[1584]: time="2025-07-07T06:02:26.358575329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:02:26.404879 containerd[1584]: time="2025-07-07T06:02:26.404817744Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 06:02:26.435268 containerd[1584]: time="2025-07-07T06:02:26.435188246Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:02:26.529390 containerd[1584]: time="2025-07-07T06:02:26.529272656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:02:26.530000 containerd[1584]: time="2025-07-07T06:02:26.529962335Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.7444611s" Jul 7 06:02:26.530000 containerd[1584]: time="2025-07-07T06:02:26.529995681Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 06:02:26.530723 containerd[1584]: time="2025-07-07T06:02:26.530667486Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 7 06:02:27.144514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3124703990.mount: Deactivated 
successfully. Jul 7 06:02:28.675720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 7 06:02:28.677592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:02:29.249077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:02:29.273387 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:02:29.313730 kubelet[2221]: E0707 06:02:29.313607 2221 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:02:29.317757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:02:29.318003 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:02:29.318398 systemd[1]: kubelet.service: Consumed 217ms CPU time, 110.6M memory peak. 
Jul 7 06:02:32.569120 containerd[1584]: time="2025-07-07T06:02:32.568959684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:32.698319 containerd[1584]: time="2025-07-07T06:02:32.698229605Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 7 06:02:32.814241 containerd[1584]: time="2025-07-07T06:02:32.814170577Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:32.869725 containerd[1584]: time="2025-07-07T06:02:32.869452571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:32.870649 containerd[1584]: time="2025-07-07T06:02:32.870593774Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 6.339871742s" Jul 7 06:02:32.870649 containerd[1584]: time="2025-07-07T06:02:32.870654069Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 7 06:02:36.383984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:02:36.384218 systemd[1]: kubelet.service: Consumed 217ms CPU time, 110.6M memory peak. Jul 7 06:02:36.386970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:02:36.414896 systemd[1]: Reload requested from client PID 2296 ('systemctl') (unit session-7.scope)... 
Jul 7 06:02:36.414927 systemd[1]: Reloading... Jul 7 06:02:36.523905 zram_generator::config[2341]: No configuration found. Jul 7 06:02:36.683387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:02:36.834575 systemd[1]: Reloading finished in 419 ms. Jul 7 06:02:36.911043 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 06:02:36.911189 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 06:02:36.911589 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:02:36.911677 systemd[1]: kubelet.service: Consumed 187ms CPU time, 98.1M memory peak. Jul 7 06:02:36.913662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:02:37.128456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:02:37.139138 (kubelet)[2386]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:02:37.206741 kubelet[2386]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:02:37.206741 kubelet[2386]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:02:37.206741 kubelet[2386]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 06:02:37.206741 kubelet[2386]: I0707 06:02:37.206584 2386 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:02:37.469137 update_engine[1566]: I20250707 06:02:37.468981 1566 update_attempter.cc:509] Updating boot flags... Jul 7 06:02:38.660154 kubelet[2386]: I0707 06:02:38.659971 2386 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 06:02:38.660154 kubelet[2386]: I0707 06:02:38.660043 2386 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:02:38.660709 kubelet[2386]: I0707 06:02:38.660379 2386 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 06:02:38.725829 kubelet[2386]: E0707 06:02:38.719338 2386 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 7 06:02:38.725829 kubelet[2386]: I0707 06:02:38.720001 2386 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:02:38.733829 kubelet[2386]: I0707 06:02:38.728052 2386 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:02:38.738663 kubelet[2386]: I0707 06:02:38.738600 2386 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:02:38.739094 kubelet[2386]: I0707 06:02:38.739047 2386 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:02:38.739324 kubelet[2386]: I0707 06:02:38.739088 2386 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:02:38.739512 kubelet[2386]: I0707 06:02:38.739333 2386 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:02:38.739512 
kubelet[2386]: I0707 06:02:38.739346 2386 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 06:02:38.739576 kubelet[2386]: I0707 06:02:38.739553 2386 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:02:38.742234 kubelet[2386]: I0707 06:02:38.742198 2386 kubelet.go:480] "Attempting to sync node with API server" Jul 7 06:02:38.744029 kubelet[2386]: I0707 06:02:38.743990 2386 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:02:38.744089 kubelet[2386]: I0707 06:02:38.744052 2386 kubelet.go:386] "Adding apiserver pod source" Jul 7 06:02:38.744130 kubelet[2386]: I0707 06:02:38.744098 2386 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:02:38.752716 kubelet[2386]: E0707 06:02:38.752205 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 06:02:38.752716 kubelet[2386]: I0707 06:02:38.752338 2386 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:02:38.753223 kubelet[2386]: I0707 06:02:38.753198 2386 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 06:02:38.754094 kubelet[2386]: W0707 06:02:38.754077 2386 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 7 06:02:38.754212 kubelet[2386]: E0707 06:02:38.754167 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 06:02:38.759352 kubelet[2386]: I0707 06:02:38.759020 2386 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:02:38.759352 kubelet[2386]: I0707 06:02:38.759095 2386 server.go:1289] "Started kubelet" Jul 7 06:02:38.761408 kubelet[2386]: I0707 06:02:38.761332 2386 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:02:38.766947 kubelet[2386]: I0707 06:02:38.766881 2386 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:02:38.767195 kubelet[2386]: I0707 06:02:38.767112 2386 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:02:38.808362 kubelet[2386]: I0707 06:02:38.771690 2386 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:02:38.808362 kubelet[2386]: I0707 06:02:38.781254 2386 server.go:317] "Adding debug handlers to kubelet server" Jul 7 06:02:38.810657 kubelet[2386]: I0707 06:02:38.810602 2386 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:02:38.815640 kubelet[2386]: E0707 06:02:38.815578 2386 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:02:38.815867 kubelet[2386]: E0707 06:02:38.815816 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:02:38.815926 kubelet[2386]: I0707 06:02:38.815892 2386 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:02:38.816246 kubelet[2386]: E0707 06:02:38.812440 2386 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.25:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.25:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe2cf00269c9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:02:38.759050394 +0000 UTC m=+1.612729198,LastTimestamp:2025-07-07 06:02:38.759050394 +0000 UTC m=+1.612729198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:02:38.816382 kubelet[2386]: I0707 06:02:38.816298 2386 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:02:38.816890 kubelet[2386]: I0707 06:02:38.816862 2386 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:02:38.817142 kubelet[2386]: E0707 06:02:38.817096 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="200ms" Jul 7 06:02:38.817192 kubelet[2386]: E0707 06:02:38.817120 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 06:02:38.819524 kubelet[2386]: I0707 06:02:38.819485 2386 factory.go:223] Registration of the containerd container factory successfully Jul 7 06:02:38.819524 kubelet[2386]: I0707 06:02:38.819513 2386 factory.go:223] Registration of the systemd container factory successfully Jul 7 06:02:38.819700 kubelet[2386]: I0707 06:02:38.819639 2386 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:02:38.841854 kubelet[2386]: I0707 06:02:38.841767 2386 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 06:02:38.843874 kubelet[2386]: I0707 06:02:38.843852 2386 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 06:02:38.843982 kubelet[2386]: I0707 06:02:38.843970 2386 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 06:02:38.844063 kubelet[2386]: I0707 06:02:38.844048 2386 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 06:02:38.844150 kubelet[2386]: I0707 06:02:38.844136 2386 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 06:02:38.844294 kubelet[2386]: E0707 06:02:38.844253 2386 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:02:38.845565 kubelet[2386]: E0707 06:02:38.845533 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 06:02:38.855740 kubelet[2386]: I0707 06:02:38.855688 2386 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:02:38.855740 kubelet[2386]: I0707 06:02:38.855725 2386 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:02:38.856003 kubelet[2386]: I0707 06:02:38.855759 2386 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:02:38.861120 kubelet[2386]: I0707 06:02:38.861073 2386 policy_none.go:49] "None policy: Start" Jul 7 06:02:38.861120 kubelet[2386]: I0707 06:02:38.861112 2386 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:02:38.861238 kubelet[2386]: I0707 06:02:38.861136 2386 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:02:38.907385 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:02:38.918448 kubelet[2386]: E0707 06:02:38.916730 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:02:38.924996 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:02:38.929040 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 7 06:02:38.937961 kubelet[2386]: E0707 06:02:38.937920 2386 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 06:02:38.938299 kubelet[2386]: I0707 06:02:38.938254 2386 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:02:38.938422 kubelet[2386]: I0707 06:02:38.938299 2386 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:02:38.938648 kubelet[2386]: I0707 06:02:38.938603 2386 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:02:38.940249 kubelet[2386]: E0707 06:02:38.940210 2386 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 06:02:38.940314 kubelet[2386]: E0707 06:02:38.940289 2386 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 06:02:39.040266 kubelet[2386]: E0707 06:02:39.017900 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="400ms" Jul 7 06:02:39.040266 kubelet[2386]: I0707 06:02:39.018053 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cda50b2a981b1e04af7f0d6aff43b7c3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cda50b2a981b1e04af7f0d6aff43b7c3\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:02:39.040266 kubelet[2386]: I0707 06:02:39.018100 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cda50b2a981b1e04af7f0d6aff43b7c3-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"cda50b2a981b1e04af7f0d6aff43b7c3\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:02:39.040525 kubelet[2386]: I0707 06:02:39.040344 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:02:39.043376 kubelet[2386]: I0707 06:02:39.040384 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:02:39.043503 kubelet[2386]: I0707 06:02:39.043486 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:02:39.043587 kubelet[2386]: I0707 06:02:39.043574 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:02:39.043679 kubelet[2386]: I0707 06:02:39.043663 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cda50b2a981b1e04af7f0d6aff43b7c3-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"cda50b2a981b1e04af7f0d6aff43b7c3\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:02:39.043765 kubelet[2386]: I0707 06:02:39.043750 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:02:39.043866 kubelet[2386]: I0707 06:02:39.043852 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:02:39.046429 kubelet[2386]: I0707 06:02:39.046385 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:02:39.047104 kubelet[2386]: E0707 06:02:39.047079 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Jul 7 06:02:39.049485 systemd[1]: Created slice kubepods-burstable-podcda50b2a981b1e04af7f0d6aff43b7c3.slice - libcontainer container kubepods-burstable-podcda50b2a981b1e04af7f0d6aff43b7c3.slice. Jul 7 06:02:39.063940 kubelet[2386]: E0707 06:02:39.063870 2386 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:02:39.068395 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
Jul 7 06:02:39.082880 kubelet[2386]: E0707 06:02:39.082830 2386 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:02:39.086956 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 7 06:02:39.089255 kubelet[2386]: E0707 06:02:39.089214 2386 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:02:39.249651 kubelet[2386]: I0707 06:02:39.249569 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:02:39.250229 kubelet[2386]: E0707 06:02:39.250169 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Jul 7 06:02:39.364778 kubelet[2386]: E0707 06:02:39.364707 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:39.365664 containerd[1584]: time="2025-07-07T06:02:39.365589480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cda50b2a981b1e04af7f0d6aff43b7c3,Namespace:kube-system,Attempt:0,}" Jul 7 06:02:39.384448 kubelet[2386]: E0707 06:02:39.384330 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:39.385138 containerd[1584]: time="2025-07-07T06:02:39.385078287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 7 06:02:39.390635 kubelet[2386]: 
E0707 06:02:39.390591 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:39.391317 containerd[1584]: time="2025-07-07T06:02:39.391269998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 7 06:02:39.419516 kubelet[2386]: E0707 06:02:39.419450 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="800ms" Jul 7 06:02:39.652078 kubelet[2386]: I0707 06:02:39.651915 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:02:39.652526 kubelet[2386]: E0707 06:02:39.652449 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.25:6443/api/v1/nodes\": dial tcp 10.0.0.25:6443: connect: connection refused" node="localhost" Jul 7 06:02:39.843057 containerd[1584]: time="2025-07-07T06:02:39.843010015Z" level=info msg="connecting to shim bd22a92e63e0a5f290924c0f82493d8aecb8c7418781481204a9ed15d438da95" address="unix:///run/containerd/s/d9403a9e0414831a225fbda68f15daec27eac19a2ca40c5109bb0fbbeb43e26a" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:02:39.852066 containerd[1584]: time="2025-07-07T06:02:39.851999705Z" level=info msg="connecting to shim e61830cbf226dc7638b9be227e42deafa5f6c93e02c75830ff307ba5712a0616" address="unix:///run/containerd/s/c85f258f1098954bf31960309e1b586a632890e836508f1b0dd6014ce1667751" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:02:39.862054 containerd[1584]: time="2025-07-07T06:02:39.861921140Z" level=info msg="connecting to shim 115ce22ddcbb5ec61075ee3527203e8fdffe27f3d2aa7cbc25905e4be2805d5d" 
address="unix:///run/containerd/s/6286d7952e6b13e26a28908dfc250b40a39bc5b8929c861324e16d4fb38858b6" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:02:39.943162 systemd[1]: Started cri-containerd-e61830cbf226dc7638b9be227e42deafa5f6c93e02c75830ff307ba5712a0616.scope - libcontainer container e61830cbf226dc7638b9be227e42deafa5f6c93e02c75830ff307ba5712a0616. Jul 7 06:02:39.948691 systemd[1]: Started cri-containerd-bd22a92e63e0a5f290924c0f82493d8aecb8c7418781481204a9ed15d438da95.scope - libcontainer container bd22a92e63e0a5f290924c0f82493d8aecb8c7418781481204a9ed15d438da95. Jul 7 06:02:39.953451 systemd[1]: Started cri-containerd-115ce22ddcbb5ec61075ee3527203e8fdffe27f3d2aa7cbc25905e4be2805d5d.scope - libcontainer container 115ce22ddcbb5ec61075ee3527203e8fdffe27f3d2aa7cbc25905e4be2805d5d. Jul 7 06:02:40.036187 kubelet[2386]: E0707 06:02:40.036134 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 06:02:40.066910 containerd[1584]: time="2025-07-07T06:02:40.066731017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"e61830cbf226dc7638b9be227e42deafa5f6c93e02c75830ff307ba5712a0616\"" Jul 7 06:02:40.068400 kubelet[2386]: E0707 06:02:40.068375 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:40.076626 containerd[1584]: time="2025-07-07T06:02:40.076017686Z" level=info msg="CreateContainer within sandbox \"e61830cbf226dc7638b9be227e42deafa5f6c93e02c75830ff307ba5712a0616\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" 
Jul 7 06:02:40.076626 containerd[1584]: time="2025-07-07T06:02:40.076406166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"115ce22ddcbb5ec61075ee3527203e8fdffe27f3d2aa7cbc25905e4be2805d5d\"" Jul 7 06:02:40.078131 kubelet[2386]: E0707 06:02:40.078108 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:40.087319 containerd[1584]: time="2025-07-07T06:02:40.087254896Z" level=info msg="CreateContainer within sandbox \"115ce22ddcbb5ec61075ee3527203e8fdffe27f3d2aa7cbc25905e4be2805d5d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:02:40.090784 containerd[1584]: time="2025-07-07T06:02:40.090726582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cda50b2a981b1e04af7f0d6aff43b7c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd22a92e63e0a5f290924c0f82493d8aecb8c7418781481204a9ed15d438da95\"" Jul 7 06:02:40.091703 kubelet[2386]: E0707 06:02:40.091676 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:40.096397 containerd[1584]: time="2025-07-07T06:02:40.096342367Z" level=info msg="CreateContainer within sandbox \"bd22a92e63e0a5f290924c0f82493d8aecb8c7418781481204a9ed15d438da95\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:02:40.097493 containerd[1584]: time="2025-07-07T06:02:40.097448712Z" level=info msg="Container 0404e45dc43e495fe78ef56fb214a6c89be88605494a2d5f8c5e55804914087d: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:02:40.104221 containerd[1584]: time="2025-07-07T06:02:40.104165672Z" level=info msg="Container 
92e1d113ed3008d106ef5ba16081d35864faeda638729576ecc91d1f6dabd03f: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:02:40.113420 containerd[1584]: time="2025-07-07T06:02:40.112998838Z" level=info msg="Container db55849c924b34b1466afc35f8ee673ce64f0cdeab6bc250cab6ccbfe9778cfb: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:02:40.113420 containerd[1584]: time="2025-07-07T06:02:40.113245007Z" level=info msg="CreateContainer within sandbox \"e61830cbf226dc7638b9be227e42deafa5f6c93e02c75830ff307ba5712a0616\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0404e45dc43e495fe78ef56fb214a6c89be88605494a2d5f8c5e55804914087d\"" Jul 7 06:02:40.114179 containerd[1584]: time="2025-07-07T06:02:40.114117346Z" level=info msg="StartContainer for \"0404e45dc43e495fe78ef56fb214a6c89be88605494a2d5f8c5e55804914087d\"" Jul 7 06:02:40.115430 containerd[1584]: time="2025-07-07T06:02:40.115393024Z" level=info msg="connecting to shim 0404e45dc43e495fe78ef56fb214a6c89be88605494a2d5f8c5e55804914087d" address="unix:///run/containerd/s/c85f258f1098954bf31960309e1b586a632890e836508f1b0dd6014ce1667751" protocol=ttrpc version=3 Jul 7 06:02:40.122277 containerd[1584]: time="2025-07-07T06:02:40.122236384Z" level=info msg="CreateContainer within sandbox \"115ce22ddcbb5ec61075ee3527203e8fdffe27f3d2aa7cbc25905e4be2805d5d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"92e1d113ed3008d106ef5ba16081d35864faeda638729576ecc91d1f6dabd03f\"" Jul 7 06:02:40.123332 containerd[1584]: time="2025-07-07T06:02:40.123282024Z" level=info msg="StartContainer for \"92e1d113ed3008d106ef5ba16081d35864faeda638729576ecc91d1f6dabd03f\"" Jul 7 06:02:40.123988 containerd[1584]: time="2025-07-07T06:02:40.123911141Z" level=info msg="CreateContainer within sandbox \"bd22a92e63e0a5f290924c0f82493d8aecb8c7418781481204a9ed15d438da95\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"db55849c924b34b1466afc35f8ee673ce64f0cdeab6bc250cab6ccbfe9778cfb\"" Jul 7 06:02:40.124482 containerd[1584]: time="2025-07-07T06:02:40.124457661Z" level=info msg="StartContainer for \"db55849c924b34b1466afc35f8ee673ce64f0cdeab6bc250cab6ccbfe9778cfb\"" Jul 7 06:02:40.125366 containerd[1584]: time="2025-07-07T06:02:40.125322997Z" level=info msg="connecting to shim 92e1d113ed3008d106ef5ba16081d35864faeda638729576ecc91d1f6dabd03f" address="unix:///run/containerd/s/6286d7952e6b13e26a28908dfc250b40a39bc5b8929c861324e16d4fb38858b6" protocol=ttrpc version=3 Jul 7 06:02:40.126882 containerd[1584]: time="2025-07-07T06:02:40.126851886Z" level=info msg="connecting to shim db55849c924b34b1466afc35f8ee673ce64f0cdeab6bc250cab6ccbfe9778cfb" address="unix:///run/containerd/s/d9403a9e0414831a225fbda68f15daec27eac19a2ca40c5109bb0fbbeb43e26a" protocol=ttrpc version=3 Jul 7 06:02:40.128829 kubelet[2386]: E0707 06:02:40.128357 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 06:02:40.142137 systemd[1]: Started cri-containerd-0404e45dc43e495fe78ef56fb214a6c89be88605494a2d5f8c5e55804914087d.scope - libcontainer container 0404e45dc43e495fe78ef56fb214a6c89be88605494a2d5f8c5e55804914087d. Jul 7 06:02:40.155268 systemd[1]: Started cri-containerd-db55849c924b34b1466afc35f8ee673ce64f0cdeab6bc250cab6ccbfe9778cfb.scope - libcontainer container db55849c924b34b1466afc35f8ee673ce64f0cdeab6bc250cab6ccbfe9778cfb. Jul 7 06:02:40.159602 systemd[1]: Started cri-containerd-92e1d113ed3008d106ef5ba16081d35864faeda638729576ecc91d1f6dabd03f.scope - libcontainer container 92e1d113ed3008d106ef5ba16081d35864faeda638729576ecc91d1f6dabd03f. 
Jul 7 06:02:40.220690 kubelet[2386]: E0707 06:02:40.220529 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.25:6443: connect: connection refused" interval="1.6s" Jul 7 06:02:40.226263 containerd[1584]: time="2025-07-07T06:02:40.226182919Z" level=info msg="StartContainer for \"0404e45dc43e495fe78ef56fb214a6c89be88605494a2d5f8c5e55804914087d\" returns successfully" Jul 7 06:02:40.236776 containerd[1584]: time="2025-07-07T06:02:40.236717683Z" level=info msg="StartContainer for \"db55849c924b34b1466afc35f8ee673ce64f0cdeab6bc250cab6ccbfe9778cfb\" returns successfully" Jul 7 06:02:40.243448 containerd[1584]: time="2025-07-07T06:02:40.243261413Z" level=info msg="StartContainer for \"92e1d113ed3008d106ef5ba16081d35864faeda638729576ecc91d1f6dabd03f\" returns successfully" Jul 7 06:02:40.249339 kubelet[2386]: E0707 06:02:40.249003 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 06:02:40.457056 kubelet[2386]: I0707 06:02:40.456992 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:02:40.861996 kubelet[2386]: E0707 06:02:40.861876 2386 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:02:40.863806 kubelet[2386]: E0707 06:02:40.863090 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:40.863806 kubelet[2386]: E0707 06:02:40.863431 2386 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:02:40.863806 kubelet[2386]: E0707 06:02:40.863513 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:40.864197 kubelet[2386]: E0707 06:02:40.864170 2386 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:02:40.864338 kubelet[2386]: E0707 06:02:40.864315 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:41.842419 kubelet[2386]: E0707 06:02:41.842357 2386 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 06:02:41.868094 kubelet[2386]: E0707 06:02:41.867457 2386 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:02:41.868094 kubelet[2386]: E0707 06:02:41.867657 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:41.868094 kubelet[2386]: E0707 06:02:41.868070 2386 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:02:41.868331 kubelet[2386]: E0707 06:02:41.868193 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:41.919579 kubelet[2386]: I0707 06:02:41.919537 2386 kubelet_node_status.go:78] 
"Successfully registered node" node="localhost" Jul 7 06:02:41.919848 kubelet[2386]: E0707 06:02:41.919822 2386 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 7 06:02:42.017493 kubelet[2386]: I0707 06:02:42.017431 2386 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 06:02:42.079411 kubelet[2386]: E0707 06:02:42.079099 2386 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 7 06:02:42.079411 kubelet[2386]: I0707 06:02:42.079157 2386 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 06:02:42.082571 kubelet[2386]: E0707 06:02:42.082430 2386 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 7 06:02:42.082571 kubelet[2386]: I0707 06:02:42.082584 2386 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:02:42.085264 kubelet[2386]: E0707 06:02:42.085209 2386 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:02:42.751727 kubelet[2386]: I0707 06:02:42.751690 2386 apiserver.go:52] "Watching apiserver" Jul 7 06:02:42.817479 kubelet[2386]: I0707 06:02:42.817393 2386 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:02:43.338685 kubelet[2386]: I0707 06:02:43.338614 2386 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" 
Jul 7 06:02:43.345373 kubelet[2386]: E0707 06:02:43.345273 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:43.872324 kubelet[2386]: E0707 06:02:43.872280 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:44.092549 systemd[1]: Reload requested from client PID 2689 ('systemctl') (unit session-7.scope)... Jul 7 06:02:44.092572 systemd[1]: Reloading... Jul 7 06:02:44.223837 zram_generator::config[2732]: No configuration found. Jul 7 06:02:44.471856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:02:44.670387 systemd[1]: Reloading finished in 577 ms. Jul 7 06:02:44.704912 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:02:44.727352 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:02:44.727805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:02:44.727871 systemd[1]: kubelet.service: Consumed 1.710s CPU time, 133.4M memory peak. Jul 7 06:02:44.730176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:02:44.956277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:02:44.966354 (kubelet)[2777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:02:45.011737 kubelet[2777]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 06:02:45.011737 kubelet[2777]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:02:45.011737 kubelet[2777]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:02:45.011737 kubelet[2777]: I0707 06:02:45.011408 2777 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:02:45.022846 kubelet[2777]: I0707 06:02:45.022776 2777 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 06:02:45.022846 kubelet[2777]: I0707 06:02:45.022829 2777 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:02:45.023069 kubelet[2777]: I0707 06:02:45.023053 2777 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 06:02:45.024540 kubelet[2777]: I0707 06:02:45.024474 2777 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 7 06:02:45.027364 kubelet[2777]: I0707 06:02:45.027319 2777 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:02:45.031837 kubelet[2777]: I0707 06:02:45.031811 2777 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:02:45.039821 kubelet[2777]: I0707 06:02:45.039747 2777 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:02:45.040249 kubelet[2777]: I0707 06:02:45.040195 2777 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:02:45.040496 kubelet[2777]: I0707 06:02:45.040224 2777 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:02:45.040687 kubelet[2777]: I0707 06:02:45.040630 2777 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:02:45.040687 
kubelet[2777]: I0707 06:02:45.040682 2777 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 06:02:45.043461 kubelet[2777]: I0707 06:02:45.043370 2777 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:02:45.043908 kubelet[2777]: I0707 06:02:45.043887 2777 kubelet.go:480] "Attempting to sync node with API server" Jul 7 06:02:45.043908 kubelet[2777]: I0707 06:02:45.043910 2777 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:02:45.044043 kubelet[2777]: I0707 06:02:45.043937 2777 kubelet.go:386] "Adding apiserver pod source" Jul 7 06:02:45.044043 kubelet[2777]: I0707 06:02:45.043956 2777 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:02:45.045906 kubelet[2777]: I0707 06:02:45.045773 2777 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:02:45.046754 kubelet[2777]: I0707 06:02:45.046720 2777 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 06:02:45.054289 kubelet[2777]: I0707 06:02:45.054154 2777 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:02:45.054289 kubelet[2777]: I0707 06:02:45.054265 2777 server.go:1289] "Started kubelet" Jul 7 06:02:45.055742 kubelet[2777]: I0707 06:02:45.055647 2777 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:02:45.056095 kubelet[2777]: I0707 06:02:45.056083 2777 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:02:45.057024 kubelet[2777]: I0707 06:02:45.056148 2777 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:02:45.058364 kubelet[2777]: I0707 06:02:45.058335 2777 server.go:317] "Adding debug handlers to kubelet server" Jul 7 06:02:45.060105 kubelet[2777]: I0707 
06:02:45.060078 2777 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:02:45.061584 kubelet[2777]: E0707 06:02:45.061019 2777 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:02:45.061584 kubelet[2777]: I0707 06:02:45.061547 2777 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:02:45.062375 kubelet[2777]: I0707 06:02:45.062354 2777 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:02:45.063301 kubelet[2777]: I0707 06:02:45.063277 2777 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:02:45.063679 kubelet[2777]: I0707 06:02:45.063663 2777 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:02:45.065366 kubelet[2777]: I0707 06:02:45.065340 2777 factory.go:223] Registration of the systemd container factory successfully Jul 7 06:02:45.065608 kubelet[2777]: I0707 06:02:45.065579 2777 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:02:45.068873 kubelet[2777]: I0707 06:02:45.067727 2777 factory.go:223] Registration of the containerd container factory successfully Jul 7 06:02:45.094331 kubelet[2777]: I0707 06:02:45.094250 2777 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 06:02:45.096353 kubelet[2777]: I0707 06:02:45.096307 2777 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 7 06:02:45.096460 kubelet[2777]: I0707 06:02:45.096362 2777 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 06:02:45.096460 kubelet[2777]: I0707 06:02:45.096424 2777 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 06:02:45.096460 kubelet[2777]: I0707 06:02:45.096434 2777 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 06:02:45.096576 kubelet[2777]: E0707 06:02:45.096516 2777 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:02:45.134102 kubelet[2777]: I0707 06:02:45.134062 2777 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:02:45.134102 kubelet[2777]: I0707 06:02:45.134083 2777 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:02:45.134102 kubelet[2777]: I0707 06:02:45.134102 2777 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:02:45.134427 kubelet[2777]: I0707 06:02:45.134252 2777 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:02:45.134427 kubelet[2777]: I0707 06:02:45.134278 2777 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:02:45.134427 kubelet[2777]: I0707 06:02:45.134309 2777 policy_none.go:49] "None policy: Start" Jul 7 06:02:45.134427 kubelet[2777]: I0707 06:02:45.134320 2777 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:02:45.134427 kubelet[2777]: I0707 06:02:45.134335 2777 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:02:45.134627 kubelet[2777]: I0707 06:02:45.134437 2777 state_mem.go:75] "Updated machine memory state" Jul 7 06:02:45.139767 kubelet[2777]: E0707 06:02:45.139732 2777 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 06:02:45.140103 kubelet[2777]: I0707 06:02:45.139958 
2777 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 06:02:45.140103 kubelet[2777]: I0707 06:02:45.140067 2777 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 06:02:45.140312 kubelet[2777]: I0707 06:02:45.140293 2777 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 06:02:45.142236 kubelet[2777]: E0707 06:02:45.142207 2777 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 7 06:02:45.198369 kubelet[2777]: I0707 06:02:45.198271 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:02:45.198369 kubelet[2777]: I0707 06:02:45.198337 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 7 06:02:45.198672 kubelet[2777]: I0707 06:02:45.198285 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:02:45.205259 kubelet[2777]: E0707 06:02:45.205217 2777 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:02:45.250442 kubelet[2777]: I0707 06:02:45.250284 2777 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 7 06:02:45.264281 kubelet[2777]: I0707 06:02:45.264209 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cda50b2a981b1e04af7f0d6aff43b7c3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cda50b2a981b1e04af7f0d6aff43b7c3\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:02:45.264281 kubelet[2777]: I0707 06:02:45.264258 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:02:45.264281 kubelet[2777]: I0707 06:02:45.264276 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:02:45.264281 kubelet[2777]: I0707 06:02:45.264293 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:02:45.264585 kubelet[2777]: I0707 06:02:45.264343 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 7 06:02:45.264585 kubelet[2777]: I0707 06:02:45.264378 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:02:45.264585 kubelet[2777]: I0707 06:02:45.264418 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:02:45.264585 kubelet[2777]: I0707 06:02:45.264479 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cda50b2a981b1e04af7f0d6aff43b7c3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cda50b2a981b1e04af7f0d6aff43b7c3\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:02:45.264585 kubelet[2777]: I0707 06:02:45.264497 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cda50b2a981b1e04af7f0d6aff43b7c3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cda50b2a981b1e04af7f0d6aff43b7c3\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:02:45.506629 kubelet[2777]: E0707 06:02:45.506442 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:45.506629 kubelet[2777]: E0707 06:02:45.506502 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:45.506629 kubelet[2777]: E0707 06:02:45.506636 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:45.551505 kubelet[2777]: I0707 06:02:45.551448 2777 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 7 06:02:45.552336 kubelet[2777]: I0707 06:02:45.551748 2777 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 7 06:02:46.045104 kubelet[2777]: I0707 06:02:46.044896 2777 apiserver.go:52] "Watching apiserver"
Jul 7 06:02:46.063742 kubelet[2777]: I0707 06:02:46.063672 2777 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 7 06:02:46.114055 kubelet[2777]: I0707 06:02:46.114005 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 7 06:02:46.114238 kubelet[2777]: I0707 06:02:46.114079 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:02:46.114276 kubelet[2777]: E0707 06:02:46.114249 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:46.124628 kubelet[2777]: E0707 06:02:46.124564 2777 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 7 06:02:46.125446 kubelet[2777]: E0707 06:02:46.125419 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:46.125533 kubelet[2777]: E0707 06:02:46.125483 2777 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:02:46.126344 kubelet[2777]: E0707 06:02:46.126315 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:46.146208 kubelet[2777]: I0707 06:02:46.146140 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.146118346 podStartE2EDuration="3.146118346s" podCreationTimestamp="2025-07-07 06:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:02:46.135391661 +0000 UTC m=+1.163242733" watchObservedRunningTime="2025-07-07 06:02:46.146118346 +0000 UTC m=+1.173969418"
Jul 7 06:02:46.155962 kubelet[2777]: I0707 06:02:46.155890 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.155865857 podStartE2EDuration="1.155865857s" podCreationTimestamp="2025-07-07 06:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:02:46.146361697 +0000 UTC m=+1.174212769" watchObservedRunningTime="2025-07-07 06:02:46.155865857 +0000 UTC m=+1.183716939"
Jul 7 06:02:46.156146 kubelet[2777]: I0707 06:02:46.156051 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.1560442850000001 podStartE2EDuration="1.156044285s" podCreationTimestamp="2025-07-07 06:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:02:46.155997506 +0000 UTC m=+1.183848578" watchObservedRunningTime="2025-07-07 06:02:46.156044285 +0000 UTC m=+1.183895357"
Jul 7 06:02:47.116535 kubelet[2777]: E0707 06:02:47.116477 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:47.117076 kubelet[2777]: E0707 06:02:47.116715 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:48.929967 kubelet[2777]: I0707 06:02:48.929919 2777 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 7 06:02:48.930466 containerd[1584]: time="2025-07-07T06:02:48.930291025Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 7 06:02:48.930754 kubelet[2777]: I0707 06:02:48.930488 2777 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 7 06:02:48.931900 kubelet[2777]: E0707 06:02:48.931780 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:50.050508 systemd[1]: Created slice kubepods-besteffort-podd53bca13_bf51_4fcd_bf9b_987e8c862181.slice - libcontainer container kubepods-besteffort-podd53bca13_bf51_4fcd_bf9b_987e8c862181.slice.
Jul 7 06:02:50.092274 systemd[1]: Created slice kubepods-besteffort-podae0c524e_56ba_41e9_8bba_98e22de70f06.slice - libcontainer container kubepods-besteffort-podae0c524e_56ba_41e9_8bba_98e22de70f06.slice.
Jul 7 06:02:50.095652 kubelet[2777]: I0707 06:02:50.095505 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d53bca13-bf51-4fcd-bf9b-987e8c862181-kube-proxy\") pod \"kube-proxy-vsh42\" (UID: \"d53bca13-bf51-4fcd-bf9b-987e8c862181\") " pod="kube-system/kube-proxy-vsh42"
Jul 7 06:02:50.096127 kubelet[2777]: I0707 06:02:50.095849 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rcp2\" (UniqueName: \"kubernetes.io/projected/ae0c524e-56ba-41e9-8bba-98e22de70f06-kube-api-access-9rcp2\") pod \"tigera-operator-747864d56d-pcrfw\" (UID: \"ae0c524e-56ba-41e9-8bba-98e22de70f06\") " pod="tigera-operator/tigera-operator-747864d56d-pcrfw"
Jul 7 06:02:50.096127 kubelet[2777]: I0707 06:02:50.095908 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d53bca13-bf51-4fcd-bf9b-987e8c862181-xtables-lock\") pod \"kube-proxy-vsh42\" (UID: \"d53bca13-bf51-4fcd-bf9b-987e8c862181\") " pod="kube-system/kube-proxy-vsh42"
Jul 7 06:02:50.096127 kubelet[2777]: I0707 06:02:50.096010 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ae0c524e-56ba-41e9-8bba-98e22de70f06-var-lib-calico\") pod \"tigera-operator-747864d56d-pcrfw\" (UID: \"ae0c524e-56ba-41e9-8bba-98e22de70f06\") " pod="tigera-operator/tigera-operator-747864d56d-pcrfw"
Jul 7 06:02:50.096127 kubelet[2777]: I0707 06:02:50.096056 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d53bca13-bf51-4fcd-bf9b-987e8c862181-lib-modules\") pod \"kube-proxy-vsh42\" (UID: \"d53bca13-bf51-4fcd-bf9b-987e8c862181\") " pod="kube-system/kube-proxy-vsh42"
Jul 7 06:02:50.096127 kubelet[2777]: I0707 06:02:50.096086 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zr2v\" (UniqueName: \"kubernetes.io/projected/d53bca13-bf51-4fcd-bf9b-987e8c862181-kube-api-access-5zr2v\") pod \"kube-proxy-vsh42\" (UID: \"d53bca13-bf51-4fcd-bf9b-987e8c862181\") " pod="kube-system/kube-proxy-vsh42"
Jul 7 06:02:50.364825 kubelet[2777]: E0707 06:02:50.364219 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:50.365465 containerd[1584]: time="2025-07-07T06:02:50.365412628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vsh42,Uid:d53bca13-bf51-4fcd-bf9b-987e8c862181,Namespace:kube-system,Attempt:0,}"
Jul 7 06:02:50.396931 containerd[1584]: time="2025-07-07T06:02:50.396873519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-pcrfw,Uid:ae0c524e-56ba-41e9-8bba-98e22de70f06,Namespace:tigera-operator,Attempt:0,}"
Jul 7 06:02:50.466553 containerd[1584]: time="2025-07-07T06:02:50.465909033Z" level=info msg="connecting to shim 928a85249d15e2848e72fc0ebac6415791684bcb7ed645cd727482be7ea288f5" address="unix:///run/containerd/s/1229bbb4accc9f648e7b22724caf59599b359bfb15c40d649e9cb7ec86606918" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:02:50.474542 containerd[1584]: time="2025-07-07T06:02:50.474474189Z" level=info msg="connecting to shim 1e866ecbd9151895598dc5f32d8985ef57493b2fc2748855548be7355a3ffdf3" address="unix:///run/containerd/s/97c245e40695059faa6e62095f69d98d381e27eb989850f4b3d47816b60d3272" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:02:50.499050 systemd[1]: Started cri-containerd-928a85249d15e2848e72fc0ebac6415791684bcb7ed645cd727482be7ea288f5.scope - libcontainer container 928a85249d15e2848e72fc0ebac6415791684bcb7ed645cd727482be7ea288f5.
Jul 7 06:02:50.503492 systemd[1]: Started cri-containerd-1e866ecbd9151895598dc5f32d8985ef57493b2fc2748855548be7355a3ffdf3.scope - libcontainer container 1e866ecbd9151895598dc5f32d8985ef57493b2fc2748855548be7355a3ffdf3.
Jul 7 06:02:50.581252 containerd[1584]: time="2025-07-07T06:02:50.581191980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vsh42,Uid:d53bca13-bf51-4fcd-bf9b-987e8c862181,Namespace:kube-system,Attempt:0,} returns sandbox id \"928a85249d15e2848e72fc0ebac6415791684bcb7ed645cd727482be7ea288f5\""
Jul 7 06:02:50.582484 kubelet[2777]: E0707 06:02:50.582443 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:50.593420 containerd[1584]: time="2025-07-07T06:02:50.593153954Z" level=info msg="CreateContainer within sandbox \"928a85249d15e2848e72fc0ebac6415791684bcb7ed645cd727482be7ea288f5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 7 06:02:50.630316 containerd[1584]: time="2025-07-07T06:02:50.630112103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-pcrfw,Uid:ae0c524e-56ba-41e9-8bba-98e22de70f06,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1e866ecbd9151895598dc5f32d8985ef57493b2fc2748855548be7355a3ffdf3\""
Jul 7 06:02:50.632041 containerd[1584]: time="2025-07-07T06:02:50.631996764Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 7 06:02:50.641011 containerd[1584]: time="2025-07-07T06:02:50.640951075Z" level=info msg="Container e9b7446e7ee2d39720d7614fa87ba795f509627ae605ab320b5322af8840d7a3: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:02:50.652833 containerd[1584]: time="2025-07-07T06:02:50.652722309Z" level=info msg="CreateContainer within sandbox \"928a85249d15e2848e72fc0ebac6415791684bcb7ed645cd727482be7ea288f5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e9b7446e7ee2d39720d7614fa87ba795f509627ae605ab320b5322af8840d7a3\""
Jul 7 06:02:50.653896 containerd[1584]: time="2025-07-07T06:02:50.653865870Z" level=info msg="StartContainer for \"e9b7446e7ee2d39720d7614fa87ba795f509627ae605ab320b5322af8840d7a3\""
Jul 7 06:02:50.655741 containerd[1584]: time="2025-07-07T06:02:50.655709814Z" level=info msg="connecting to shim e9b7446e7ee2d39720d7614fa87ba795f509627ae605ab320b5322af8840d7a3" address="unix:///run/containerd/s/1229bbb4accc9f648e7b22724caf59599b359bfb15c40d649e9cb7ec86606918" protocol=ttrpc version=3
Jul 7 06:02:50.683044 systemd[1]: Started cri-containerd-e9b7446e7ee2d39720d7614fa87ba795f509627ae605ab320b5322af8840d7a3.scope - libcontainer container e9b7446e7ee2d39720d7614fa87ba795f509627ae605ab320b5322af8840d7a3.
Jul 7 06:02:50.746594 containerd[1584]: time="2025-07-07T06:02:50.746510147Z" level=info msg="StartContainer for \"e9b7446e7ee2d39720d7614fa87ba795f509627ae605ab320b5322af8840d7a3\" returns successfully"
Jul 7 06:02:50.917527 kubelet[2777]: E0707 06:02:50.917357 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:51.127971 kubelet[2777]: E0707 06:02:51.127612 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:51.127971 kubelet[2777]: E0707 06:02:51.127652 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:51.139697 kubelet[2777]: I0707 06:02:51.139328 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vsh42" podStartSLOduration=2.139307011 podStartE2EDuration="2.139307011s" podCreationTimestamp="2025-07-07 06:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:02:51.139045847 +0000 UTC m=+6.166896919" watchObservedRunningTime="2025-07-07 06:02:51.139307011 +0000 UTC m=+6.167158073"
Jul 7 06:02:52.066010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount298175693.mount: Deactivated successfully.
Jul 7 06:02:52.131436 kubelet[2777]: E0707 06:02:52.131391 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:52.606702 containerd[1584]: time="2025-07-07T06:02:52.606601516Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:52.607561 containerd[1584]: time="2025-07-07T06:02:52.607528265Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 7 06:02:52.608752 containerd[1584]: time="2025-07-07T06:02:52.608722900Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:52.611221 containerd[1584]: time="2025-07-07T06:02:52.611077456Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:52.611936 containerd[1584]: time="2025-07-07T06:02:52.611887465Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.979836347s"
Jul 7 06:02:52.612000 containerd[1584]: time="2025-07-07T06:02:52.611938551Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 7 06:02:52.617230 containerd[1584]: time="2025-07-07T06:02:52.617165580Z" level=info msg="CreateContainer within sandbox \"1e866ecbd9151895598dc5f32d8985ef57493b2fc2748855548be7355a3ffdf3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 7 06:02:52.627270 containerd[1584]: time="2025-07-07T06:02:52.627192179Z" level=info msg="Container 1f5482c568046d913d649e0952b30e6570d5e4c9049c62ad87c6e3acbcb99e6a: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:02:52.631068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2302839085.mount: Deactivated successfully.
Jul 7 06:02:52.634769 containerd[1584]: time="2025-07-07T06:02:52.634722967Z" level=info msg="CreateContainer within sandbox \"1e866ecbd9151895598dc5f32d8985ef57493b2fc2748855548be7355a3ffdf3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1f5482c568046d913d649e0952b30e6570d5e4c9049c62ad87c6e3acbcb99e6a\""
Jul 7 06:02:52.635477 containerd[1584]: time="2025-07-07T06:02:52.635423890Z" level=info msg="StartContainer for \"1f5482c568046d913d649e0952b30e6570d5e4c9049c62ad87c6e3acbcb99e6a\""
Jul 7 06:02:52.636774 containerd[1584]: time="2025-07-07T06:02:52.636726490Z" level=info msg="connecting to shim 1f5482c568046d913d649e0952b30e6570d5e4c9049c62ad87c6e3acbcb99e6a" address="unix:///run/containerd/s/97c245e40695059faa6e62095f69d98d381e27eb989850f4b3d47816b60d3272" protocol=ttrpc version=3
Jul 7 06:02:52.695964 systemd[1]: Started cri-containerd-1f5482c568046d913d649e0952b30e6570d5e4c9049c62ad87c6e3acbcb99e6a.scope - libcontainer container 1f5482c568046d913d649e0952b30e6570d5e4c9049c62ad87c6e3acbcb99e6a.
Jul 7 06:02:52.730040 containerd[1584]: time="2025-07-07T06:02:52.729920053Z" level=info msg="StartContainer for \"1f5482c568046d913d649e0952b30e6570d5e4c9049c62ad87c6e3acbcb99e6a\" returns successfully"
Jul 7 06:02:53.143559 kubelet[2777]: I0707 06:02:53.143374 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-pcrfw" podStartSLOduration=1.16195528 podStartE2EDuration="3.143353669s" podCreationTimestamp="2025-07-07 06:02:50 +0000 UTC" firstStartedPulling="2025-07-07 06:02:50.63146009 +0000 UTC m=+5.659311182" lastFinishedPulling="2025-07-07 06:02:52.612858499 +0000 UTC m=+7.640709571" observedRunningTime="2025-07-07 06:02:53.142995684 +0000 UTC m=+8.170846756" watchObservedRunningTime="2025-07-07 06:02:53.143353669 +0000 UTC m=+8.171204741"
Jul 7 06:02:54.579640 kubelet[2777]: E0707 06:02:54.579176 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:55.138571 kubelet[2777]: E0707 06:02:55.138456 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:58.940841 kubelet[2777]: E0707 06:02:58.939843 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:59.353011 sudo[1800]: pam_unix(sudo:session): session closed for user root
Jul 7 06:02:59.355431 sshd[1799]: Connection closed by 10.0.0.1 port 48450
Jul 7 06:02:59.356996 sshd-session[1797]: pam_unix(sshd:session): session closed for user core
Jul 7 06:02:59.363732 systemd[1]: sshd@6-10.0.0.25:22-10.0.0.1:48450.service: Deactivated successfully.
Jul 7 06:02:59.369691 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 06:02:59.370098 systemd[1]: session-7.scope: Consumed 6.831s CPU time, 221.7M memory peak. Jul 7 06:02:59.372254 systemd-logind[1565]: Session 7 logged out. Waiting for processes to exit. Jul 7 06:02:59.378031 systemd-logind[1565]: Removed session 7. Jul 7 06:03:03.157520 systemd[1]: Created slice kubepods-besteffort-pode44987c6_a9d7_42e8_9d51_960faa1f423d.slice - libcontainer container kubepods-besteffort-pode44987c6_a9d7_42e8_9d51_960faa1f423d.slice. Jul 7 06:03:03.178400 kubelet[2777]: I0707 06:03:03.178320 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e44987c6-a9d7-42e8-9d51-960faa1f423d-typha-certs\") pod \"calico-typha-66577b6f85-zfvbr\" (UID: \"e44987c6-a9d7-42e8-9d51-960faa1f423d\") " pod="calico-system/calico-typha-66577b6f85-zfvbr" Jul 7 06:03:03.178400 kubelet[2777]: I0707 06:03:03.178387 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj5gt\" (UniqueName: \"kubernetes.io/projected/e44987c6-a9d7-42e8-9d51-960faa1f423d-kube-api-access-wj5gt\") pod \"calico-typha-66577b6f85-zfvbr\" (UID: \"e44987c6-a9d7-42e8-9d51-960faa1f423d\") " pod="calico-system/calico-typha-66577b6f85-zfvbr" Jul 7 06:03:03.178400 kubelet[2777]: I0707 06:03:03.178424 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e44987c6-a9d7-42e8-9d51-960faa1f423d-tigera-ca-bundle\") pod \"calico-typha-66577b6f85-zfvbr\" (UID: \"e44987c6-a9d7-42e8-9d51-960faa1f423d\") " pod="calico-system/calico-typha-66577b6f85-zfvbr" Jul 7 06:03:03.464593 kubelet[2777]: E0707 06:03:03.464540 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:03:03.465399 containerd[1584]: 
time="2025-07-07T06:03:03.465333357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66577b6f85-zfvbr,Uid:e44987c6-a9d7-42e8-9d51-960faa1f423d,Namespace:calico-system,Attempt:0,}" Jul 7 06:03:03.521524 containerd[1584]: time="2025-07-07T06:03:03.521448620Z" level=info msg="connecting to shim 6dbbca98d758604a5976cdecb66feff68e25185e441bde82e5448585eee7edee" address="unix:///run/containerd/s/c378c2a8640f44cc59e4b3b99397ebdd510b20e013669e3c8c4dade098f7136f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:03:03.550699 systemd[1]: Created slice kubepods-besteffort-pod83c5f7ef_1f9f_4b87_80ac_7a74edd53d64.slice - libcontainer container kubepods-besteffort-pod83c5f7ef_1f9f_4b87_80ac_7a74edd53d64.slice. Jul 7 06:03:03.568145 systemd[1]: Started cri-containerd-6dbbca98d758604a5976cdecb66feff68e25185e441bde82e5448585eee7edee.scope - libcontainer container 6dbbca98d758604a5976cdecb66feff68e25185e441bde82e5448585eee7edee. Jul 7 06:03:03.626128 containerd[1584]: time="2025-07-07T06:03:03.626068407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66577b6f85-zfvbr,Uid:e44987c6-a9d7-42e8-9d51-960faa1f423d,Namespace:calico-system,Attempt:0,} returns sandbox id \"6dbbca98d758604a5976cdecb66feff68e25185e441bde82e5448585eee7edee\"" Jul 7 06:03:03.627096 kubelet[2777]: E0707 06:03:03.627062 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:03:03.627944 containerd[1584]: time="2025-07-07T06:03:03.627863024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 06:03:03.682351 kubelet[2777]: I0707 06:03:03.682271 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/83c5f7ef-1f9f-4b87-80ac-7a74edd53d64-cni-log-dir\") pod \"calico-node-vxh67\" (UID: 
\"83c5f7ef-1f9f-4b87-80ac-7a74edd53d64\") " pod="calico-system/calico-node-vxh67" Jul 7 06:03:03.682351 kubelet[2777]: I0707 06:03:03.682332 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/83c5f7ef-1f9f-4b87-80ac-7a74edd53d64-node-certs\") pod \"calico-node-vxh67\" (UID: \"83c5f7ef-1f9f-4b87-80ac-7a74edd53d64\") " pod="calico-system/calico-node-vxh67" Jul 7 06:03:03.682351 kubelet[2777]: I0707 06:03:03.682353 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/83c5f7ef-1f9f-4b87-80ac-7a74edd53d64-var-lib-calico\") pod \"calico-node-vxh67\" (UID: \"83c5f7ef-1f9f-4b87-80ac-7a74edd53d64\") " pod="calico-system/calico-node-vxh67" Jul 7 06:03:03.682665 kubelet[2777]: I0707 06:03:03.682376 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/83c5f7ef-1f9f-4b87-80ac-7a74edd53d64-cni-bin-dir\") pod \"calico-node-vxh67\" (UID: \"83c5f7ef-1f9f-4b87-80ac-7a74edd53d64\") " pod="calico-system/calico-node-vxh67" Jul 7 06:03:03.682665 kubelet[2777]: I0707 06:03:03.682440 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/83c5f7ef-1f9f-4b87-80ac-7a74edd53d64-flexvol-driver-host\") pod \"calico-node-vxh67\" (UID: \"83c5f7ef-1f9f-4b87-80ac-7a74edd53d64\") " pod="calico-system/calico-node-vxh67" Jul 7 06:03:03.682665 kubelet[2777]: I0707 06:03:03.682469 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83c5f7ef-1f9f-4b87-80ac-7a74edd53d64-lib-modules\") pod \"calico-node-vxh67\" (UID: \"83c5f7ef-1f9f-4b87-80ac-7a74edd53d64\") " pod="calico-system/calico-node-vxh67" 
Jul 7 06:03:03.682665 kubelet[2777]: I0707 06:03:03.682542 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c5f7ef-1f9f-4b87-80ac-7a74edd53d64-tigera-ca-bundle\") pod \"calico-node-vxh67\" (UID: \"83c5f7ef-1f9f-4b87-80ac-7a74edd53d64\") " pod="calico-system/calico-node-vxh67" Jul 7 06:03:03.682665 kubelet[2777]: I0707 06:03:03.682585 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/83c5f7ef-1f9f-4b87-80ac-7a74edd53d64-var-run-calico\") pod \"calico-node-vxh67\" (UID: \"83c5f7ef-1f9f-4b87-80ac-7a74edd53d64\") " pod="calico-system/calico-node-vxh67" Jul 7 06:03:03.682892 kubelet[2777]: I0707 06:03:03.682684 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83c5f7ef-1f9f-4b87-80ac-7a74edd53d64-xtables-lock\") pod \"calico-node-vxh67\" (UID: \"83c5f7ef-1f9f-4b87-80ac-7a74edd53d64\") " pod="calico-system/calico-node-vxh67" Jul 7 06:03:03.682892 kubelet[2777]: I0707 06:03:03.682741 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/83c5f7ef-1f9f-4b87-80ac-7a74edd53d64-cni-net-dir\") pod \"calico-node-vxh67\" (UID: \"83c5f7ef-1f9f-4b87-80ac-7a74edd53d64\") " pod="calico-system/calico-node-vxh67" Jul 7 06:03:03.682892 kubelet[2777]: I0707 06:03:03.682766 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/83c5f7ef-1f9f-4b87-80ac-7a74edd53d64-policysync\") pod \"calico-node-vxh67\" (UID: \"83c5f7ef-1f9f-4b87-80ac-7a74edd53d64\") " pod="calico-system/calico-node-vxh67" Jul 7 06:03:03.682892 kubelet[2777]: I0707 06:03:03.682837 2777 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj79b\" (UniqueName: \"kubernetes.io/projected/83c5f7ef-1f9f-4b87-80ac-7a74edd53d64-kube-api-access-gj79b\") pod \"calico-node-vxh67\" (UID: \"83c5f7ef-1f9f-4b87-80ac-7a74edd53d64\") " pod="calico-system/calico-node-vxh67" Jul 7 06:03:03.792322 kubelet[2777]: E0707 06:03:03.791351 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrhmt" podUID="dde14f30-8111-4778-8695-ad893871cc92" Jul 7 06:03:03.792322 kubelet[2777]: E0707 06:03:03.792128 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:03.792322 kubelet[2777]: W0707 06:03:03.792162 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:03.793576 kubelet[2777]: E0707 06:03:03.793479 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:03.807760 kubelet[2777]: E0707 06:03:03.807706 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:03.807760 kubelet[2777]: W0707 06:03:03.807740 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:03.807760 kubelet[2777]: E0707 06:03:03.807766 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:03.855081 containerd[1584]: time="2025-07-07T06:03:03.855011073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vxh67,Uid:83c5f7ef-1f9f-4b87-80ac-7a74edd53d64,Namespace:calico-system,Attempt:0,}" Jul 7 06:03:03.884126 kubelet[2777]: E0707 06:03:03.884065 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:03.884126 kubelet[2777]: W0707 06:03:03.884094 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:03.884126 kubelet[2777]: E0707 06:03:03.884120 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:03.894714 containerd[1584]: time="2025-07-07T06:03:03.894659442Z" level=info msg="connecting to shim 14896a5a6e4ff3c5aef74c660e9bb68766de86104b603a64290006e1ecda9f5e" address="unix:///run/containerd/s/d956b0f50c33b0ee1631407241365b7b055001e4e74b2cf1422572e96f2cd644" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:03:03.895012 kubelet[2777]: E0707 06:03:03.894994 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:03.895012 kubelet[2777]: W0707 06:03:03.895009 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:03.895012 kubelet[2777]: E0707 06:03:03.895021 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:03.895764 kubelet[2777]: E0707 06:03:03.895733 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:03.895764 kubelet[2777]: W0707 06:03:03.895749 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:03.895924 kubelet[2777]: E0707 06:03:03.895819 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:03.895924 kubelet[2777]: I0707 06:03:03.895890 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dde14f30-8111-4778-8695-ad893871cc92-socket-dir\") pod \"csi-node-driver-hrhmt\" (UID: \"dde14f30-8111-4778-8695-ad893871cc92\") " pod="calico-system/csi-node-driver-hrhmt" Jul 7 06:03:03.896269 kubelet[2777]: E0707 06:03:03.896250 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:03.896313 kubelet[2777]: W0707 06:03:03.896267 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:03.896350 kubelet[2777]: E0707 06:03:03.896313 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:03.896350 kubelet[2777]: I0707 06:03:03.896338 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dde14f30-8111-4778-8695-ad893871cc92-registration-dir\") pod \"csi-node-driver-hrhmt\" (UID: \"dde14f30-8111-4778-8695-ad893871cc92\") " pod="calico-system/csi-node-driver-hrhmt" Jul 7 06:03:03.896892 kubelet[2777]: E0707 06:03:03.896779 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:03.896892 kubelet[2777]: W0707 06:03:03.896850 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:03.896892 kubelet[2777]: E0707 06:03:03.896864 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:03.897232 kubelet[2777]: E0707 06:03:03.897180 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:03.897232 kubelet[2777]: W0707 06:03:03.897198 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:03.897232 kubelet[2777]: E0707 06:03:03.897210 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:03.897631 kubelet[2777]: E0707 06:03:03.897611 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:03.897631 kubelet[2777]: W0707 06:03:03.897629 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:03.897723 kubelet[2777]: E0707 06:03:03.897656 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:03.897723 kubelet[2777]: I0707 06:03:03.897699 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dde14f30-8111-4778-8695-ad893871cc92-kubelet-dir\") pod \"csi-node-driver-hrhmt\" (UID: \"dde14f30-8111-4778-8695-ad893871cc92\") " pod="calico-system/csi-node-driver-hrhmt" Jul 7 06:03:03.898124 kubelet[2777]: E0707 06:03:03.898099 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:03.898124 kubelet[2777]: W0707 06:03:03.898119 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:03.898207 kubelet[2777]: E0707 06:03:03.898135 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:03.898303 kubelet[2777]: I0707 06:03:03.898271 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dde14f30-8111-4778-8695-ad893871cc92-varrun\") pod \"csi-node-driver-hrhmt\" (UID: \"dde14f30-8111-4778-8695-ad893871cc92\") " pod="calico-system/csi-node-driver-hrhmt" Jul 7 06:03:03.898409 kubelet[2777]: E0707 06:03:03.898387 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:03.898409 kubelet[2777]: W0707 06:03:03.898401 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:03.898506 kubelet[2777]: E0707 06:03:03.898413 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:03.898809 kubelet[2777]: E0707 06:03:03.898771 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:03.898901 kubelet[2777]: W0707 06:03:03.898810 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:03.898901 kubelet[2777]: E0707 06:03:03.898824 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:03.899449 kubelet[2777]: E0707 06:03:03.899429 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:03.899449 kubelet[2777]: W0707 06:03:03.899445 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:03.899574 kubelet[2777]: E0707 06:03:03.899460 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:03.899574 kubelet[2777]: I0707 06:03:03.899498 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kcvs\" (UniqueName: \"kubernetes.io/projected/dde14f30-8111-4778-8695-ad893871cc92-kube-api-access-7kcvs\") pod \"csi-node-driver-hrhmt\" (UID: \"dde14f30-8111-4778-8695-ad893871cc92\") " pod="calico-system/csi-node-driver-hrhmt" Jul 7 06:03:03.899831 kubelet[2777]: E0707 06:03:03.899764 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:03.899831 kubelet[2777]: W0707 06:03:03.899779 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:03.899831 kubelet[2777]: E0707 06:03:03.899826 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:03.901014 kubelet[2777]: E0707 06:03:03.900948 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:03.901014 kubelet[2777]: W0707 06:03:03.900958 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:03.901014 kubelet[2777]: E0707 06:03:03.900967 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:03.932135 systemd[1]: Started cri-containerd-14896a5a6e4ff3c5aef74c660e9bb68766de86104b603a64290006e1ecda9f5e.scope - libcontainer container 14896a5a6e4ff3c5aef74c660e9bb68766de86104b603a64290006e1ecda9f5e. Jul 7 06:03:03.968168 containerd[1584]: time="2025-07-07T06:03:03.968032998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vxh67,Uid:83c5f7ef-1f9f-4b87-80ac-7a74edd53d64,Namespace:calico-system,Attempt:0,} returns sandbox id \"14896a5a6e4ff3c5aef74c660e9bb68766de86104b603a64290006e1ecda9f5e\"" Jul 7 06:03:04.002335 kubelet[2777]: E0707 06:03:04.002275 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:04.002335 kubelet[2777]: W0707 06:03:04.002312 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:04.002335 kubelet[2777]: E0707 06:03:04.002340 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:04.007827 kubelet[2777]: E0707 06:03:04.007783 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:04.007827 kubelet[2777]: W0707 06:03:04.007817 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:04.007827 kubelet[2777]: E0707 06:03:04.007825 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:04.008020 kubelet[2777]: E0707 06:03:04.008003 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:04.008020 kubelet[2777]: W0707 06:03:04.008014 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:04.008087 kubelet[2777]: E0707 06:03:04.008022 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:04.008212 kubelet[2777]: E0707 06:03:04.008194 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:04.008212 kubelet[2777]: W0707 06:03:04.008206 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:04.008261 kubelet[2777]: E0707 06:03:04.008215 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:04.008468 kubelet[2777]: E0707 06:03:04.008450 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:04.008468 kubelet[2777]: W0707 06:03:04.008463 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:04.008526 kubelet[2777]: E0707 06:03:04.008472 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:04.008885 kubelet[2777]: E0707 06:03:04.008732 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:04.008885 kubelet[2777]: W0707 06:03:04.008748 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:04.008885 kubelet[2777]: E0707 06:03:04.008759 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:04.009018 kubelet[2777]: E0707 06:03:04.008986 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:04.009018 kubelet[2777]: W0707 06:03:04.009001 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:04.009018 kubelet[2777]: E0707 06:03:04.009011 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:04.009301 kubelet[2777]: E0707 06:03:04.009284 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:04.009301 kubelet[2777]: W0707 06:03:04.009298 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:04.009385 kubelet[2777]: E0707 06:03:04.009310 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:04.009551 kubelet[2777]: E0707 06:03:04.009536 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:04.009551 kubelet[2777]: W0707 06:03:04.009550 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:04.009659 kubelet[2777]: E0707 06:03:04.009560 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:04.019400 kubelet[2777]: E0707 06:03:04.019334 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:04.019400 kubelet[2777]: W0707 06:03:04.019366 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:04.019400 kubelet[2777]: E0707 06:03:04.019404 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:05.097927 kubelet[2777]: E0707 06:03:05.097840 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrhmt" podUID="dde14f30-8111-4778-8695-ad893871cc92" Jul 7 06:03:06.139446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3392380863.mount: Deactivated successfully. 
Jul 7 06:03:06.953779 containerd[1584]: time="2025-07-07T06:03:06.953679046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:06.955568 containerd[1584]: time="2025-07-07T06:03:06.955509529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 06:03:06.957846 containerd[1584]: time="2025-07-07T06:03:06.957741105Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:06.960310 containerd[1584]: time="2025-07-07T06:03:06.960240716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:06.960764 containerd[1584]: time="2025-07-07T06:03:06.960701262Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.332811237s" Jul 7 06:03:06.960764 containerd[1584]: time="2025-07-07T06:03:06.960746267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 06:03:06.962040 containerd[1584]: time="2025-07-07T06:03:06.961986318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 06:03:06.984868 containerd[1584]: time="2025-07-07T06:03:06.984779409Z" level=info msg="CreateContainer within sandbox \"6dbbca98d758604a5976cdecb66feff68e25185e441bde82e5448585eee7edee\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 06:03:06.997349 containerd[1584]: time="2025-07-07T06:03:06.997276660Z" level=info msg="Container d11a7b14b7fa32ae3af08dcf4b6c5341fd8a942c11de09f5a15af38ce61d8024: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:03:07.008308 containerd[1584]: time="2025-07-07T06:03:07.008244212Z" level=info msg="CreateContainer within sandbox \"6dbbca98d758604a5976cdecb66feff68e25185e441bde82e5448585eee7edee\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d11a7b14b7fa32ae3af08dcf4b6c5341fd8a942c11de09f5a15af38ce61d8024\"" Jul 7 06:03:07.008896 containerd[1584]: time="2025-07-07T06:03:07.008842467Z" level=info msg="StartContainer for \"d11a7b14b7fa32ae3af08dcf4b6c5341fd8a942c11de09f5a15af38ce61d8024\"" Jul 7 06:03:07.010151 containerd[1584]: time="2025-07-07T06:03:07.010124507Z" level=info msg="connecting to shim d11a7b14b7fa32ae3af08dcf4b6c5341fd8a942c11de09f5a15af38ce61d8024" address="unix:///run/containerd/s/c378c2a8640f44cc59e4b3b99397ebdd510b20e013669e3c8c4dade098f7136f" protocol=ttrpc version=3 Jul 7 06:03:07.044082 systemd[1]: Started cri-containerd-d11a7b14b7fa32ae3af08dcf4b6c5341fd8a942c11de09f5a15af38ce61d8024.scope - libcontainer container d11a7b14b7fa32ae3af08dcf4b6c5341fd8a942c11de09f5a15af38ce61d8024. 
Jul 7 06:03:07.097785 kubelet[2777]: E0707 06:03:07.097652 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrhmt" podUID="dde14f30-8111-4778-8695-ad893871cc92" Jul 7 06:03:07.110415 containerd[1584]: time="2025-07-07T06:03:07.110332733Z" level=info msg="StartContainer for \"d11a7b14b7fa32ae3af08dcf4b6c5341fd8a942c11de09f5a15af38ce61d8024\" returns successfully" Jul 7 06:03:07.177170 kubelet[2777]: E0707 06:03:07.176845 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:03:07.194660 kubelet[2777]: I0707 06:03:07.194416 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66577b6f85-zfvbr" podStartSLOduration=0.860139707 podStartE2EDuration="4.194398557s" podCreationTimestamp="2025-07-07 06:03:03 +0000 UTC" firstStartedPulling="2025-07-07 06:03:03.627553421 +0000 UTC m=+18.655404493" lastFinishedPulling="2025-07-07 06:03:06.961812241 +0000 UTC m=+21.989663343" observedRunningTime="2025-07-07 06:03:07.194256902 +0000 UTC m=+22.222107964" watchObservedRunningTime="2025-07-07 06:03:07.194398557 +0000 UTC m=+22.222249629" Jul 7 06:03:07.217516 kubelet[2777]: E0707 06:03:07.217353 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.217516 kubelet[2777]: W0707 06:03:07.217384 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.217516 kubelet[2777]: E0707 06:03:07.217407 2777 plugins.go:703] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:07.219390 kubelet[2777]: E0707 06:03:07.218847 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.219390 kubelet[2777]: W0707 06:03:07.218869 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.219390 kubelet[2777]: E0707 06:03:07.218882 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:07.219390 kubelet[2777]: E0707 06:03:07.219165 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.219390 kubelet[2777]: W0707 06:03:07.219173 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.219390 kubelet[2777]: E0707 06:03:07.219183 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:07.219630 kubelet[2777]: E0707 06:03:07.219485 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.219630 kubelet[2777]: W0707 06:03:07.219494 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.219630 kubelet[2777]: E0707 06:03:07.219504 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:07.220260 kubelet[2777]: E0707 06:03:07.220229 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.220260 kubelet[2777]: W0707 06:03:07.220246 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.220260 kubelet[2777]: E0707 06:03:07.220258 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:07.222634 kubelet[2777]: E0707 06:03:07.222598 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.222634 kubelet[2777]: W0707 06:03:07.222623 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.222725 kubelet[2777]: E0707 06:03:07.222638 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:07.223046 kubelet[2777]: E0707 06:03:07.223003 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.223046 kubelet[2777]: W0707 06:03:07.223026 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.223118 kubelet[2777]: E0707 06:03:07.223041 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:07.223382 kubelet[2777]: E0707 06:03:07.223342 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.223382 kubelet[2777]: W0707 06:03:07.223379 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.223464 kubelet[2777]: E0707 06:03:07.223390 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:07.225148 kubelet[2777]: E0707 06:03:07.225108 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.225148 kubelet[2777]: W0707 06:03:07.225139 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.225244 kubelet[2777]: E0707 06:03:07.225153 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:07.225425 kubelet[2777]: E0707 06:03:07.225373 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.225425 kubelet[2777]: W0707 06:03:07.225411 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.225425 kubelet[2777]: E0707 06:03:07.225422 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:07.225677 kubelet[2777]: E0707 06:03:07.225652 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.225677 kubelet[2777]: W0707 06:03:07.225670 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.225732 kubelet[2777]: E0707 06:03:07.225681 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:07.226468 kubelet[2777]: E0707 06:03:07.226433 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.226468 kubelet[2777]: W0707 06:03:07.226454 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.226468 kubelet[2777]: E0707 06:03:07.226466 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:07.229834 kubelet[2777]: E0707 06:03:07.228875 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.229834 kubelet[2777]: W0707 06:03:07.228906 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.229834 kubelet[2777]: E0707 06:03:07.228926 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:07.230043 kubelet[2777]: E0707 06:03:07.229933 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.230043 kubelet[2777]: W0707 06:03:07.229950 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.230043 kubelet[2777]: E0707 06:03:07.229964 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:07.230266 kubelet[2777]: E0707 06:03:07.230228 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.230266 kubelet[2777]: W0707 06:03:07.230256 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.230379 kubelet[2777]: E0707 06:03:07.230272 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:07.234940 kubelet[2777]: E0707 06:03:07.234888 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.234940 kubelet[2777]: W0707 06:03:07.234920 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.234940 kubelet[2777]: E0707 06:03:07.234943 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:07.235273 kubelet[2777]: E0707 06:03:07.235245 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.235273 kubelet[2777]: W0707 06:03:07.235265 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.235384 kubelet[2777]: E0707 06:03:07.235279 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:07.235690 kubelet[2777]: E0707 06:03:07.235662 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.235690 kubelet[2777]: W0707 06:03:07.235680 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.235690 kubelet[2777]: E0707 06:03:07.235691 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:07.237263 kubelet[2777]: E0707 06:03:07.237230 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.237263 kubelet[2777]: W0707 06:03:07.237249 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.237263 kubelet[2777]: E0707 06:03:07.237259 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:07.237918 kubelet[2777]: E0707 06:03:07.237889 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.237918 kubelet[2777]: W0707 06:03:07.237905 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.237918 kubelet[2777]: E0707 06:03:07.237915 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:07.240169 kubelet[2777]: E0707 06:03:07.240098 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.240169 kubelet[2777]: W0707 06:03:07.240120 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.240169 kubelet[2777]: E0707 06:03:07.240133 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:07.240629 kubelet[2777]: E0707 06:03:07.240599 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.240629 kubelet[2777]: W0707 06:03:07.240618 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.240629 kubelet[2777]: E0707 06:03:07.240630 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:07.240989 kubelet[2777]: E0707 06:03:07.240962 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.240989 kubelet[2777]: W0707 06:03:07.240977 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.240989 kubelet[2777]: E0707 06:03:07.240986 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:07.242814 kubelet[2777]: E0707 06:03:07.241173 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.242814 kubelet[2777]: W0707 06:03:07.241185 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.242814 kubelet[2777]: E0707 06:03:07.241194 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:07.242998 kubelet[2777]: E0707 06:03:07.242970 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:07.242998 kubelet[2777]: W0707 06:03:07.242993 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:07.243052 kubelet[2777]: E0707 06:03:07.243008 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:07.247951 kubelet[2777]: E0707 06:03:07.247953 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:08.177410 kubelet[2777]: I0707 06:03:08.177341 2777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:03:08.177922 kubelet[2777]: E0707 06:03:08.177816 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:03:08.237071 kubelet[2777]: E0707 06:03:08.237001 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:08.237071 kubelet[2777]: W0707 06:03:08.237032 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:08.237071 kubelet[2777]: E0707 06:03:08.237059 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:08.237359 kubelet[2777]: E0707 06:03:08.237279 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:08.237359 kubelet[2777]: W0707 06:03:08.237316 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:08.237359 kubelet[2777]: E0707 06:03:08.237329 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:08.248352 kubelet[2777]: E0707 06:03:08.248257 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:03:08.248733 kubelet[2777]: E0707 06:03:08.248710 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:03:08.248733 kubelet[2777]: W0707 06:03:08.248727 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:03:08.248825 kubelet[2777]: E0707 06:03:08.248739 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:03:09.097874 kubelet[2777]: E0707 06:03:09.097721 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrhmt" podUID="dde14f30-8111-4778-8695-ad893871cc92" Jul 7 06:03:09.500400 containerd[1584]: time="2025-07-07T06:03:09.500317621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:09.501585 containerd[1584]: time="2025-07-07T06:03:09.501496246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 06:03:09.503837 containerd[1584]: time="2025-07-07T06:03:09.503477871Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:09.508479 containerd[1584]: time="2025-07-07T06:03:09.508418126Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:09.509678 containerd[1584]: time="2025-07-07T06:03:09.509629663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.547605844s" Jul 7 06:03:09.509773 containerd[1584]: time="2025-07-07T06:03:09.509679106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 06:03:09.515190 containerd[1584]: time="2025-07-07T06:03:09.515135432Z" level=info msg="CreateContainer within sandbox \"14896a5a6e4ff3c5aef74c660e9bb68766de86104b603a64290006e1ecda9f5e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 06:03:09.528063 containerd[1584]: time="2025-07-07T06:03:09.527987287Z" level=info msg="Container 9875311659dac9df2183b6ff653aac891b319717962dc03557488e51af028bff: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:03:09.541367 containerd[1584]: time="2025-07-07T06:03:09.541119099Z" level=info msg="CreateContainer within sandbox \"14896a5a6e4ff3c5aef74c660e9bb68766de86104b603a64290006e1ecda9f5e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9875311659dac9df2183b6ff653aac891b319717962dc03557488e51af028bff\"" Jul 7 06:03:09.542147 containerd[1584]: time="2025-07-07T06:03:09.542095214Z" level=info msg="StartContainer for \"9875311659dac9df2183b6ff653aac891b319717962dc03557488e51af028bff\"" Jul 7 06:03:09.544688 containerd[1584]: time="2025-07-07T06:03:09.544577580Z" 
level=info msg="connecting to shim 9875311659dac9df2183b6ff653aac891b319717962dc03557488e51af028bff" address="unix:///run/containerd/s/d956b0f50c33b0ee1631407241365b7b055001e4e74b2cf1422572e96f2cd644" protocol=ttrpc version=3 Jul 7 06:03:09.574086 systemd[1]: Started cri-containerd-9875311659dac9df2183b6ff653aac891b319717962dc03557488e51af028bff.scope - libcontainer container 9875311659dac9df2183b6ff653aac891b319717962dc03557488e51af028bff. Jul 7 06:03:09.635524 containerd[1584]: time="2025-07-07T06:03:09.635456602Z" level=info msg="StartContainer for \"9875311659dac9df2183b6ff653aac891b319717962dc03557488e51af028bff\" returns successfully" Jul 7 06:03:09.647004 systemd[1]: cri-containerd-9875311659dac9df2183b6ff653aac891b319717962dc03557488e51af028bff.scope: Deactivated successfully. Jul 7 06:03:09.647483 systemd[1]: cri-containerd-9875311659dac9df2183b6ff653aac891b319717962dc03557488e51af028bff.scope: Consumed 52ms CPU time, 6.4M memory peak, 4.2M written to disk. Jul 7 06:03:09.648718 containerd[1584]: time="2025-07-07T06:03:09.648681078Z" level=info msg="received exit event container_id:\"9875311659dac9df2183b6ff653aac891b319717962dc03557488e51af028bff\" id:\"9875311659dac9df2183b6ff653aac891b319717962dc03557488e51af028bff\" pid:3499 exited_at:{seconds:1751868189 nanos:648200305}" Jul 7 06:03:09.648850 containerd[1584]: time="2025-07-07T06:03:09.648817624Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9875311659dac9df2183b6ff653aac891b319717962dc03557488e51af028bff\" id:\"9875311659dac9df2183b6ff653aac891b319717962dc03557488e51af028bff\" pid:3499 exited_at:{seconds:1751868189 nanos:648200305}" Jul 7 06:03:09.679245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9875311659dac9df2183b6ff653aac891b319717962dc03557488e51af028bff-rootfs.mount: Deactivated successfully. 
Jul 7 06:03:11.097050 kubelet[2777]: E0707 06:03:11.096952 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrhmt" podUID="dde14f30-8111-4778-8695-ad893871cc92"
Jul 7 06:03:11.189036 containerd[1584]: time="2025-07-07T06:03:11.188972086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 7 06:03:13.097438 kubelet[2777]: E0707 06:03:13.097355 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrhmt" podUID="dde14f30-8111-4778-8695-ad893871cc92"
Jul 7 06:03:15.097406 kubelet[2777]: E0707 06:03:15.097335 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrhmt" podUID="dde14f30-8111-4778-8695-ad893871cc92"
Jul 7 06:03:16.609858 containerd[1584]: time="2025-07-07T06:03:16.609778276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:03:16.648266 containerd[1584]: time="2025-07-07T06:03:16.648186899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 7 06:03:16.689481 containerd[1584]: time="2025-07-07T06:03:16.689409336Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:03:16.776919 containerd[1584]: time="2025-07-07T06:03:16.776709498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:03:16.783825 containerd[1584]: time="2025-07-07T06:03:16.781965078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 5.592282578s"
Jul 7 06:03:16.783825 containerd[1584]: time="2025-07-07T06:03:16.782019370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 7 06:03:16.910703 containerd[1584]: time="2025-07-07T06:03:16.910537682Z" level=info msg="CreateContainer within sandbox \"14896a5a6e4ff3c5aef74c660e9bb68766de86104b603a64290006e1ecda9f5e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 7 06:03:17.075444 containerd[1584]: time="2025-07-07T06:03:17.075369419Z" level=info msg="Container 6ed6447b181538fbf774441c9aa99b8b15f91f8a4e19341e2c168a1c052cddbc: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:03:17.097473 kubelet[2777]: E0707 06:03:17.097370 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrhmt" podUID="dde14f30-8111-4778-8695-ad893871cc92"
Jul 7 06:03:17.104636 containerd[1584]: time="2025-07-07T06:03:17.104563120Z" level=info msg="CreateContainer within sandbox \"14896a5a6e4ff3c5aef74c660e9bb68766de86104b603a64290006e1ecda9f5e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6ed6447b181538fbf774441c9aa99b8b15f91f8a4e19341e2c168a1c052cddbc\""
Jul 7 06:03:17.105442 containerd[1584]: time="2025-07-07T06:03:17.105339007Z" level=info msg="StartContainer for \"6ed6447b181538fbf774441c9aa99b8b15f91f8a4e19341e2c168a1c052cddbc\""
Jul 7 06:03:17.107095 containerd[1584]: time="2025-07-07T06:03:17.107012720Z" level=info msg="connecting to shim 6ed6447b181538fbf774441c9aa99b8b15f91f8a4e19341e2c168a1c052cddbc" address="unix:///run/containerd/s/d956b0f50c33b0ee1631407241365b7b055001e4e74b2cf1422572e96f2cd644" protocol=ttrpc version=3
Jul 7 06:03:17.132979 systemd[1]: Started cri-containerd-6ed6447b181538fbf774441c9aa99b8b15f91f8a4e19341e2c168a1c052cddbc.scope - libcontainer container 6ed6447b181538fbf774441c9aa99b8b15f91f8a4e19341e2c168a1c052cddbc.
Jul 7 06:03:17.261097 containerd[1584]: time="2025-07-07T06:03:17.261029613Z" level=info msg="StartContainer for \"6ed6447b181538fbf774441c9aa99b8b15f91f8a4e19341e2c168a1c052cddbc\" returns successfully"
Jul 7 06:03:18.432409 systemd[1]: cri-containerd-6ed6447b181538fbf774441c9aa99b8b15f91f8a4e19341e2c168a1c052cddbc.scope: Deactivated successfully.
Jul 7 06:03:18.433536 systemd[1]: cri-containerd-6ed6447b181538fbf774441c9aa99b8b15f91f8a4e19341e2c168a1c052cddbc.scope: Consumed 724ms CPU time, 176.9M memory peak, 2.2M read from disk, 171.2M written to disk.
Jul 7 06:03:18.434691 containerd[1584]: time="2025-07-07T06:03:18.434629285Z" level=info msg="received exit event container_id:\"6ed6447b181538fbf774441c9aa99b8b15f91f8a4e19341e2c168a1c052cddbc\" id:\"6ed6447b181538fbf774441c9aa99b8b15f91f8a4e19341e2c168a1c052cddbc\" pid:3559 exited_at:{seconds:1751868198 nanos:433198608}"
Jul 7 06:03:18.435286 containerd[1584]: time="2025-07-07T06:03:18.434905232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ed6447b181538fbf774441c9aa99b8b15f91f8a4e19341e2c168a1c052cddbc\" id:\"6ed6447b181538fbf774441c9aa99b8b15f91f8a4e19341e2c168a1c052cddbc\" pid:3559 exited_at:{seconds:1751868198 nanos:433198608}"
Jul 7 06:03:18.467594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ed6447b181538fbf774441c9aa99b8b15f91f8a4e19341e2c168a1c052cddbc-rootfs.mount: Deactivated successfully.
Jul 7 06:03:18.522637 kubelet[2777]: I0707 06:03:18.522592 2777 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 7 06:03:18.867351 systemd[1]: Created slice kubepods-besteffort-pod1598648b_004b_4f03_89d1_460edb2e0abb.slice - libcontainer container kubepods-besteffort-pod1598648b_004b_4f03_89d1_460edb2e0abb.slice.
Jul 7 06:03:18.883290 systemd[1]: Created slice kubepods-besteffort-pod379858ba_c579_4b12_94c9_85bde143d2ef.slice - libcontainer container kubepods-besteffort-pod379858ba_c579_4b12_94c9_85bde143d2ef.slice.
Jul 7 06:03:18.890582 systemd[1]: Created slice kubepods-besteffort-pode6339c2e_003c_4ec5_a6c2_fbcf59fbe45b.slice - libcontainer container kubepods-besteffort-pode6339c2e_003c_4ec5_a6c2_fbcf59fbe45b.slice.
Jul 7 06:03:18.996063 systemd[1]: Created slice kubepods-besteffort-pod66efdfe5_825f_4812_bb63_e13ff3c25bc0.slice - libcontainer container kubepods-besteffort-pod66efdfe5_825f_4812_bb63_e13ff3c25bc0.slice.
Jul 7 06:03:19.049177 kubelet[2777]: I0707 06:03:19.049116 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gbff\" (UniqueName: \"kubernetes.io/projected/379858ba-c579-4b12-94c9-85bde143d2ef-kube-api-access-2gbff\") pod \"calico-kube-controllers-655467f6dd-ps8wv\" (UID: \"379858ba-c579-4b12-94c9-85bde143d2ef\") " pod="calico-system/calico-kube-controllers-655467f6dd-ps8wv"
Jul 7 06:03:19.049448 kubelet[2777]: I0707 06:03:19.049259 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvtpm\" (UniqueName: \"kubernetes.io/projected/1598648b-004b-4f03-89d1-460edb2e0abb-kube-api-access-vvtpm\") pod \"whisker-6c976b56fd-zthzp\" (UID: \"1598648b-004b-4f03-89d1-460edb2e0abb\") " pod="calico-system/whisker-6c976b56fd-zthzp"
Jul 7 06:03:19.049448 kubelet[2777]: I0707 06:03:19.049438 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b-calico-apiserver-certs\") pod \"calico-apiserver-86dc8db98-qlhjq\" (UID: \"e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b\") " pod="calico-apiserver/calico-apiserver-86dc8db98-qlhjq"
Jul 7 06:03:19.049503 kubelet[2777]: I0707 06:03:19.049473 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/379858ba-c579-4b12-94c9-85bde143d2ef-tigera-ca-bundle\") pod \"calico-kube-controllers-655467f6dd-ps8wv\" (UID: \"379858ba-c579-4b12-94c9-85bde143d2ef\") " pod="calico-system/calico-kube-controllers-655467f6dd-ps8wv"
Jul 7 06:03:19.049534 kubelet[2777]: I0707 06:03:19.049516 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1598648b-004b-4f03-89d1-460edb2e0abb-whisker-ca-bundle\") pod \"whisker-6c976b56fd-zthzp\" (UID: \"1598648b-004b-4f03-89d1-460edb2e0abb\") " pod="calico-system/whisker-6c976b56fd-zthzp"
Jul 7 06:03:19.049562 kubelet[2777]: I0707 06:03:19.049537 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6x2b\" (UniqueName: \"kubernetes.io/projected/e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b-kube-api-access-d6x2b\") pod \"calico-apiserver-86dc8db98-qlhjq\" (UID: \"e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b\") " pod="calico-apiserver/calico-apiserver-86dc8db98-qlhjq"
Jul 7 06:03:19.049601 kubelet[2777]: I0707 06:03:19.049578 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1598648b-004b-4f03-89d1-460edb2e0abb-whisker-backend-key-pair\") pod \"whisker-6c976b56fd-zthzp\" (UID: \"1598648b-004b-4f03-89d1-460edb2e0abb\") " pod="calico-system/whisker-6c976b56fd-zthzp"
Jul 7 06:03:19.140370 systemd[1]: Created slice kubepods-burstable-pod43892931_e098_4310_a41c_4bce294d590b.slice - libcontainer container kubepods-burstable-pod43892931_e098_4310_a41c_4bce294d590b.slice.
Jul 7 06:03:19.150011 kubelet[2777]: I0707 06:03:19.149940 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/66efdfe5-825f-4812-bb63-e13ff3c25bc0-calico-apiserver-certs\") pod \"calico-apiserver-86dc8db98-4cxtw\" (UID: \"66efdfe5-825f-4812-bb63-e13ff3c25bc0\") " pod="calico-apiserver/calico-apiserver-86dc8db98-4cxtw"
Jul 7 06:03:19.150011 kubelet[2777]: I0707 06:03:19.149984 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz4rx\" (UniqueName: \"kubernetes.io/projected/66efdfe5-825f-4812-bb63-e13ff3c25bc0-kube-api-access-xz4rx\") pod \"calico-apiserver-86dc8db98-4cxtw\" (UID: \"66efdfe5-825f-4812-bb63-e13ff3c25bc0\") " pod="calico-apiserver/calico-apiserver-86dc8db98-4cxtw"
Jul 7 06:03:19.155400 systemd[1]: Created slice kubepods-besteffort-poddde14f30_8111_4778_8695_ad893871cc92.slice - libcontainer container kubepods-besteffort-poddde14f30_8111_4778_8695_ad893871cc92.slice.
Jul 7 06:03:19.175607 containerd[1584]: time="2025-07-07T06:03:19.175003555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrhmt,Uid:dde14f30-8111-4778-8695-ad893871cc92,Namespace:calico-system,Attempt:0,}"
Jul 7 06:03:19.192649 containerd[1584]: time="2025-07-07T06:03:19.190712699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-655467f6dd-ps8wv,Uid:379858ba-c579-4b12-94c9-85bde143d2ef,Namespace:calico-system,Attempt:0,}"
Jul 7 06:03:19.196032 containerd[1584]: time="2025-07-07T06:03:19.194418097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86dc8db98-qlhjq,Uid:e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:03:19.201426 systemd[1]: Created slice kubepods-burstable-podef322faf_a52c_4115_8aa2_191cd5a2ce8f.slice - libcontainer container kubepods-burstable-podef322faf_a52c_4115_8aa2_191cd5a2ce8f.slice.
Jul 7 06:03:19.216248 systemd[1]: Created slice kubepods-besteffort-podc1d8df6e_c02f_4b07_9613_354af2a59f1e.slice - libcontainer container kubepods-besteffort-podc1d8df6e_c02f_4b07_9613_354af2a59f1e.slice.
Jul 7 06:03:19.250571 kubelet[2777]: I0707 06:03:19.250492 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef322faf-a52c-4115-8aa2-191cd5a2ce8f-config-volume\") pod \"coredns-674b8bbfcf-5tvbp\" (UID: \"ef322faf-a52c-4115-8aa2-191cd5a2ce8f\") " pod="kube-system/coredns-674b8bbfcf-5tvbp"
Jul 7 06:03:19.250571 kubelet[2777]: I0707 06:03:19.250575 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgfql\" (UniqueName: \"kubernetes.io/projected/ef322faf-a52c-4115-8aa2-191cd5a2ce8f-kube-api-access-lgfql\") pod \"coredns-674b8bbfcf-5tvbp\" (UID: \"ef322faf-a52c-4115-8aa2-191cd5a2ce8f\") " pod="kube-system/coredns-674b8bbfcf-5tvbp"
Jul 7 06:03:19.250823 kubelet[2777]: I0707 06:03:19.250608 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43892931-e098-4310-a41c-4bce294d590b-config-volume\") pod \"coredns-674b8bbfcf-mgc94\" (UID: \"43892931-e098-4310-a41c-4bce294d590b\") " pod="kube-system/coredns-674b8bbfcf-mgc94"
Jul 7 06:03:19.250823 kubelet[2777]: I0707 06:03:19.250630 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c1d8df6e-c02f-4b07-9613-354af2a59f1e-goldmane-key-pair\") pod \"goldmane-768f4c5c69-ps9vd\" (UID: \"c1d8df6e-c02f-4b07-9613-354af2a59f1e\") " pod="calico-system/goldmane-768f4c5c69-ps9vd"
Jul 7 06:03:19.250823 kubelet[2777]: I0707 06:03:19.250649 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgpsq\" (UniqueName: \"kubernetes.io/projected/43892931-e098-4310-a41c-4bce294d590b-kube-api-access-vgpsq\") pod \"coredns-674b8bbfcf-mgc94\" (UID: \"43892931-e098-4310-a41c-4bce294d590b\") " pod="kube-system/coredns-674b8bbfcf-mgc94"
Jul 7 06:03:19.250823 kubelet[2777]: I0707 06:03:19.250676 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d8df6e-c02f-4b07-9613-354af2a59f1e-config\") pod \"goldmane-768f4c5c69-ps9vd\" (UID: \"c1d8df6e-c02f-4b07-9613-354af2a59f1e\") " pod="calico-system/goldmane-768f4c5c69-ps9vd"
Jul 7 06:03:19.250823 kubelet[2777]: I0707 06:03:19.250695 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1d8df6e-c02f-4b07-9613-354af2a59f1e-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-ps9vd\" (UID: \"c1d8df6e-c02f-4b07-9613-354af2a59f1e\") " pod="calico-system/goldmane-768f4c5c69-ps9vd"
Jul 7 06:03:19.250995 kubelet[2777]: I0707 06:03:19.250711 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs44q\" (UniqueName: \"kubernetes.io/projected/c1d8df6e-c02f-4b07-9613-354af2a59f1e-kube-api-access-rs44q\") pod \"goldmane-768f4c5c69-ps9vd\" (UID: \"c1d8df6e-c02f-4b07-9613-354af2a59f1e\") " pod="calico-system/goldmane-768f4c5c69-ps9vd"
Jul 7 06:03:19.276454 containerd[1584]: time="2025-07-07T06:03:19.276411865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 7 06:03:19.347976 containerd[1584]: time="2025-07-07T06:03:19.347623499Z" level=error msg="Failed to destroy network for sandbox \"eb2d7f1efaf8042c717e3934d0a1b0d66c874cbc3c6394d25f7e2d5e9700211d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:19.369430 containerd[1584]: time="2025-07-07T06:03:19.368979055Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrhmt,Uid:dde14f30-8111-4778-8695-ad893871cc92,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb2d7f1efaf8042c717e3934d0a1b0d66c874cbc3c6394d25f7e2d5e9700211d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:19.423709 kubelet[2777]: E0707 06:03:19.423523 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb2d7f1efaf8042c717e3934d0a1b0d66c874cbc3c6394d25f7e2d5e9700211d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:19.423709 kubelet[2777]: E0707 06:03:19.423625 2777 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb2d7f1efaf8042c717e3934d0a1b0d66c874cbc3c6394d25f7e2d5e9700211d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hrhmt"
Jul 7 06:03:19.423709 kubelet[2777]: E0707 06:03:19.423651 2777 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb2d7f1efaf8042c717e3934d0a1b0d66c874cbc3c6394d25f7e2d5e9700211d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hrhmt"
Jul 7 06:03:19.424020 kubelet[2777]: E0707 06:03:19.423719 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hrhmt_calico-system(dde14f30-8111-4778-8695-ad893871cc92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hrhmt_calico-system(dde14f30-8111-4778-8695-ad893871cc92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb2d7f1efaf8042c717e3934d0a1b0d66c874cbc3c6394d25f7e2d5e9700211d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hrhmt" podUID="dde14f30-8111-4778-8695-ad893871cc92"
Jul 7 06:03:19.447426 kubelet[2777]: E0707 06:03:19.447345 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:03:19.450891 containerd[1584]: time="2025-07-07T06:03:19.449399830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mgc94,Uid:43892931-e098-4310-a41c-4bce294d590b,Namespace:kube-system,Attempt:0,}"
Jul 7 06:03:19.471829 containerd[1584]: time="2025-07-07T06:03:19.470120614Z" level=error msg="Failed to destroy network for sandbox \"496ff885a3fb69949f57614fa3b34e43bc173105f4d7782dfbd7fbde412431a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:19.477124 containerd[1584]: time="2025-07-07T06:03:19.477024807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c976b56fd-zthzp,Uid:1598648b-004b-4f03-89d1-460edb2e0abb,Namespace:calico-system,Attempt:0,}"
Jul 7 06:03:19.485507 systemd[1]: run-netns-cni\x2d1dc80742\x2d4f07\x2d390b\x2d9e60\x2daa6b72265105.mount: Deactivated successfully.
Jul 7 06:03:19.491408 systemd[1]: run-netns-cni\x2dfae921f2\x2dea0a\x2dc7c9\x2de0de\x2d49f6ee3f5f50.mount: Deactivated successfully.
Jul 7 06:03:19.493254 containerd[1584]: time="2025-07-07T06:03:19.493185440Z" level=error msg="Failed to destroy network for sandbox \"0b0e54a52980686026cb38ea64e514f1d2b73693a6a3d68c1e0b2a011456e51e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:19.497202 systemd[1]: run-netns-cni\x2dd3cdad64\x2d28a0\x2da0e9\x2dfd5a\x2d087e1a545ab9.mount: Deactivated successfully.
Jul 7 06:03:19.511990 kubelet[2777]: E0707 06:03:19.511913 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:03:19.513829 containerd[1584]: time="2025-07-07T06:03:19.513255593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5tvbp,Uid:ef322faf-a52c-4115-8aa2-191cd5a2ce8f,Namespace:kube-system,Attempt:0,}"
Jul 7 06:03:19.522446 containerd[1584]: time="2025-07-07T06:03:19.522392157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-ps9vd,Uid:c1d8df6e-c02f-4b07-9613-354af2a59f1e,Namespace:calico-system,Attempt:0,}"
Jul 7 06:03:19.524766 containerd[1584]: time="2025-07-07T06:03:19.524709017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-655467f6dd-ps8wv,Uid:379858ba-c579-4b12-94c9-85bde143d2ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"496ff885a3fb69949f57614fa3b34e43bc173105f4d7782dfbd7fbde412431a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:19.525305 kubelet[2777]: E0707 06:03:19.525226 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"496ff885a3fb69949f57614fa3b34e43bc173105f4d7782dfbd7fbde412431a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:19.526138 kubelet[2777]: E0707 06:03:19.525361 2777 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"496ff885a3fb69949f57614fa3b34e43bc173105f4d7782dfbd7fbde412431a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-655467f6dd-ps8wv"
Jul 7 06:03:19.526138 kubelet[2777]: E0707 06:03:19.525392 2777 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"496ff885a3fb69949f57614fa3b34e43bc173105f4d7782dfbd7fbde412431a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-655467f6dd-ps8wv"
Jul 7 06:03:19.526138 kubelet[2777]: E0707 06:03:19.525481 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-655467f6dd-ps8wv_calico-system(379858ba-c579-4b12-94c9-85bde143d2ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-655467f6dd-ps8wv_calico-system(379858ba-c579-4b12-94c9-85bde143d2ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"496ff885a3fb69949f57614fa3b34e43bc173105f4d7782dfbd7fbde412431a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-655467f6dd-ps8wv" podUID="379858ba-c579-4b12-94c9-85bde143d2ef"
Jul 7 06:03:19.546432 containerd[1584]: time="2025-07-07T06:03:19.546336624Z" level=error msg="Failed to destroy network for sandbox \"00281393c5dff75bb9a55524e78f230622ad5bc53c6f1c87ad03744f65337c39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:19.549735 systemd[1]: run-netns-cni\x2d8f4620d6\x2d2091\x2d4ce6\x2d33ac\x2d1880380cf61a.mount: Deactivated successfully.
Jul 7 06:03:19.606511 containerd[1584]: time="2025-07-07T06:03:19.606397312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86dc8db98-qlhjq,Uid:e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b0e54a52980686026cb38ea64e514f1d2b73693a6a3d68c1e0b2a011456e51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:19.606880 kubelet[2777]: E0707 06:03:19.606833 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b0e54a52980686026cb38ea64e514f1d2b73693a6a3d68c1e0b2a011456e51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:19.606965 kubelet[2777]: E0707 06:03:19.606915 2777 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b0e54a52980686026cb38ea64e514f1d2b73693a6a3d68c1e0b2a011456e51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86dc8db98-qlhjq"
Jul 7 06:03:19.607027 kubelet[2777]: E0707 06:03:19.606946 2777 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b0e54a52980686026cb38ea64e514f1d2b73693a6a3d68c1e0b2a011456e51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86dc8db98-qlhjq"
Jul 7 06:03:19.607075 kubelet[2777]: E0707 06:03:19.607033 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86dc8db98-qlhjq_calico-apiserver(e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86dc8db98-qlhjq_calico-apiserver(e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b0e54a52980686026cb38ea64e514f1d2b73693a6a3d68c1e0b2a011456e51e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86dc8db98-qlhjq" podUID="e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b"
Jul 7 06:03:19.607145 containerd[1584]: time="2025-07-07T06:03:19.607040019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86dc8db98-4cxtw,Uid:66efdfe5-825f-4812-bb63-e13ff3c25bc0,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:03:19.858075 containerd[1584]: time="2025-07-07T06:03:19.857941181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mgc94,Uid:43892931-e098-4310-a41c-4bce294d590b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"00281393c5dff75bb9a55524e78f230622ad5bc53c6f1c87ad03744f65337c39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:19.858508 kubelet[2777]: E0707 06:03:19.858438 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00281393c5dff75bb9a55524e78f230622ad5bc53c6f1c87ad03744f65337c39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:19.858581 kubelet[2777]: E0707 06:03:19.858530 2777 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00281393c5dff75bb9a55524e78f230622ad5bc53c6f1c87ad03744f65337c39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mgc94"
Jul 7 06:03:19.858581 kubelet[2777]: E0707 06:03:19.858559 2777 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00281393c5dff75bb9a55524e78f230622ad5bc53c6f1c87ad03744f65337c39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mgc94"
Jul 7 06:03:19.858708 kubelet[2777]: E0707 06:03:19.858663 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mgc94_kube-system(43892931-e098-4310-a41c-4bce294d590b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mgc94_kube-system(43892931-e098-4310-a41c-4bce294d590b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00281393c5dff75bb9a55524e78f230622ad5bc53c6f1c87ad03744f65337c39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mgc94" podUID="43892931-e098-4310-a41c-4bce294d590b"
Jul 7 06:03:19.963619 containerd[1584]: time="2025-07-07T06:03:19.963552402Z" level=error msg="Failed to destroy network for sandbox \"ec83ec2a55fa1ae3483e2200ceef187749aed9e9f1563913faa3d5c067df00ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:20.330715 containerd[1584]: time="2025-07-07T06:03:20.330651366Z" level=error msg="Failed to destroy network for sandbox \"9baafe0069df6fbe223cda3354df2dc34650da6e030fe5c2c9a50dd337e39798\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:20.459581 containerd[1584]: time="2025-07-07T06:03:20.459451799Z" level=error msg="Failed to destroy network for sandbox \"cbf11378cf95fb5c8eed5345adbadf4e381037f78165eb4234d13f68b94d61d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:20.467731 systemd[1]: run-netns-cni\x2dc3ec49c3\x2df13c\x2dbe6b\x2d6924\x2dd280935d04c3.mount: Deactivated successfully.
Jul 7 06:03:20.572268 containerd[1584]: time="2025-07-07T06:03:20.572189495Z" level=error msg="Failed to destroy network for sandbox \"88e153f7b646f6462aa3d99390ac5c7b5c5f453b9c3fac99892d4808e22cd048\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:20.574672 systemd[1]: run-netns-cni\x2d693b00d1\x2d8387\x2d500e\x2d3b08\x2d781cc7a0148d.mount: Deactivated successfully.
Jul 7 06:03:20.637844 containerd[1584]: time="2025-07-07T06:03:20.637616089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c976b56fd-zthzp,Uid:1598648b-004b-4f03-89d1-460edb2e0abb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec83ec2a55fa1ae3483e2200ceef187749aed9e9f1563913faa3d5c067df00ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:20.638090 kubelet[2777]: E0707 06:03:20.638019 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec83ec2a55fa1ae3483e2200ceef187749aed9e9f1563913faa3d5c067df00ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:20.638596 kubelet[2777]: E0707 06:03:20.638122 2777 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec83ec2a55fa1ae3483e2200ceef187749aed9e9f1563913faa3d5c067df00ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c976b56fd-zthzp"
Jul 7 06:03:20.638596 kubelet[2777]: E0707 06:03:20.638152 2777 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec83ec2a55fa1ae3483e2200ceef187749aed9e9f1563913faa3d5c067df00ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c976b56fd-zthzp"
Jul 7 06:03:20.638596 kubelet[2777]: E0707 06:03:20.638234 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c976b56fd-zthzp_calico-system(1598648b-004b-4f03-89d1-460edb2e0abb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c976b56fd-zthzp_calico-system(1598648b-004b-4f03-89d1-460edb2e0abb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec83ec2a55fa1ae3483e2200ceef187749aed9e9f1563913faa3d5c067df00ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c976b56fd-zthzp" podUID="1598648b-004b-4f03-89d1-460edb2e0abb"
Jul 7 06:03:20.870603 containerd[1584]: time="2025-07-07T06:03:20.870516271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5tvbp,Uid:ef322faf-a52c-4115-8aa2-191cd5a2ce8f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9baafe0069df6fbe223cda3354df2dc34650da6e030fe5c2c9a50dd337e39798\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:20.870880 kubelet[2777]: E0707 06:03:20.870808 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9baafe0069df6fbe223cda3354df2dc34650da6e030fe5c2c9a50dd337e39798\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:03:20.870880 kubelet[2777]: E0707 06:03:20.870873 2777 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9baafe0069df6fbe223cda3354df2dc34650da6e030fe5c2c9a50dd337e39798\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5tvbp"
Jul 7 06:03:20.871004 kubelet[2777]: E0707 06:03:20.870893 2777 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9baafe0069df6fbe223cda3354df2dc34650da6e030fe5c2c9a50dd337e39798\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5tvbp"
Jul 7 06:03:20.871004 kubelet[2777]: E0707 06:03:20.870944 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5tvbp_kube-system(ef322faf-a52c-4115-8aa2-191cd5a2ce8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5tvbp_kube-system(ef322faf-a52c-4115-8aa2-191cd5a2ce8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9baafe0069df6fbe223cda3354df2dc34650da6e030fe5c2c9a50dd337e39798\\\": plugin type=\\\"calico\\\" failed
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5tvbp" podUID="ef322faf-a52c-4115-8aa2-191cd5a2ce8f" Jul 7 06:03:20.917438 containerd[1584]: time="2025-07-07T06:03:20.917253777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-ps9vd,Uid:c1d8df6e-c02f-4b07-9613-354af2a59f1e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbf11378cf95fb5c8eed5345adbadf4e381037f78165eb4234d13f68b94d61d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:03:20.917612 kubelet[2777]: E0707 06:03:20.917574 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbf11378cf95fb5c8eed5345adbadf4e381037f78165eb4234d13f68b94d61d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:03:20.917685 kubelet[2777]: E0707 06:03:20.917656 2777 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbf11378cf95fb5c8eed5345adbadf4e381037f78165eb4234d13f68b94d61d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-ps9vd" Jul 7 06:03:20.917740 kubelet[2777]: E0707 06:03:20.917685 2777 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cbf11378cf95fb5c8eed5345adbadf4e381037f78165eb4234d13f68b94d61d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-ps9vd" Jul 7 06:03:20.917830 kubelet[2777]: E0707 06:03:20.917754 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-ps9vd_calico-system(c1d8df6e-c02f-4b07-9613-354af2a59f1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-ps9vd_calico-system(c1d8df6e-c02f-4b07-9613-354af2a59f1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbf11378cf95fb5c8eed5345adbadf4e381037f78165eb4234d13f68b94d61d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-ps9vd" podUID="c1d8df6e-c02f-4b07-9613-354af2a59f1e" Jul 7 06:03:20.920960 containerd[1584]: time="2025-07-07T06:03:20.920839138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86dc8db98-4cxtw,Uid:66efdfe5-825f-4812-bb63-e13ff3c25bc0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"88e153f7b646f6462aa3d99390ac5c7b5c5f453b9c3fac99892d4808e22cd048\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:03:20.921579 kubelet[2777]: E0707 06:03:20.921079 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88e153f7b646f6462aa3d99390ac5c7b5c5f453b9c3fac99892d4808e22cd048\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:03:20.921579 kubelet[2777]: E0707 06:03:20.921125 2777 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88e153f7b646f6462aa3d99390ac5c7b5c5f453b9c3fac99892d4808e22cd048\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86dc8db98-4cxtw" Jul 7 06:03:20.921579 kubelet[2777]: E0707 06:03:20.921146 2777 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88e153f7b646f6462aa3d99390ac5c7b5c5f453b9c3fac99892d4808e22cd048\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86dc8db98-4cxtw" Jul 7 06:03:20.921706 kubelet[2777]: E0707 06:03:20.921189 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86dc8db98-4cxtw_calico-apiserver(66efdfe5-825f-4812-bb63-e13ff3c25bc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86dc8db98-4cxtw_calico-apiserver(66efdfe5-825f-4812-bb63-e13ff3c25bc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88e153f7b646f6462aa3d99390ac5c7b5c5f453b9c3fac99892d4808e22cd048\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86dc8db98-4cxtw" podUID="66efdfe5-825f-4812-bb63-e13ff3c25bc0" Jul 7 06:03:21.337077 
kubelet[2777]: I0707 06:03:21.336984 2777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:03:21.337483 kubelet[2777]: E0707 06:03:21.337459 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:03:22.277911 kubelet[2777]: E0707 06:03:22.277846 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:03:29.147804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1109414291.mount: Deactivated successfully. Jul 7 06:03:30.707727 containerd[1584]: time="2025-07-07T06:03:30.707017795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86dc8db98-qlhjq,Uid:e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:03:30.731181 containerd[1584]: time="2025-07-07T06:03:30.730887104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:30.734925 containerd[1584]: time="2025-07-07T06:03:30.734866786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 06:03:30.739258 containerd[1584]: time="2025-07-07T06:03:30.739195861Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:30.777699 containerd[1584]: time="2025-07-07T06:03:30.776767673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:30.777699 containerd[1584]: time="2025-07-07T06:03:30.777535022Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 11.500860513s" Jul 7 06:03:30.777699 containerd[1584]: time="2025-07-07T06:03:30.777583697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 06:03:30.792197 containerd[1584]: time="2025-07-07T06:03:30.792112853Z" level=error msg="Failed to destroy network for sandbox \"8ce6275cc4556ce5ab51f1a1080989f5bb9074c72a5bd7b6e81b606c041b8e51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:03:30.798916 systemd[1]: run-netns-cni\x2dab19e98b\x2d2a61\x2dc5bc\x2d7b7c\x2d15bbda38e71d.mount: Deactivated successfully. 
Jul 7 06:03:30.827384 containerd[1584]: time="2025-07-07T06:03:30.827301381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86dc8db98-qlhjq,Uid:e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce6275cc4556ce5ab51f1a1080989f5bb9074c72a5bd7b6e81b606c041b8e51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:03:30.827766 kubelet[2777]: E0707 06:03:30.827704 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce6275cc4556ce5ab51f1a1080989f5bb9074c72a5bd7b6e81b606c041b8e51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:03:30.828329 kubelet[2777]: E0707 06:03:30.827820 2777 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce6275cc4556ce5ab51f1a1080989f5bb9074c72a5bd7b6e81b606c041b8e51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86dc8db98-qlhjq" Jul 7 06:03:30.828329 kubelet[2777]: E0707 06:03:30.827851 2777 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce6275cc4556ce5ab51f1a1080989f5bb9074c72a5bd7b6e81b606c041b8e51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-86dc8db98-qlhjq" Jul 7 06:03:30.828329 kubelet[2777]: E0707 06:03:30.827921 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86dc8db98-qlhjq_calico-apiserver(e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86dc8db98-qlhjq_calico-apiserver(e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ce6275cc4556ce5ab51f1a1080989f5bb9074c72a5bd7b6e81b606c041b8e51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86dc8db98-qlhjq" podUID="e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b" Jul 7 06:03:30.836004 containerd[1584]: time="2025-07-07T06:03:30.835927468Z" level=info msg="CreateContainer within sandbox \"14896a5a6e4ff3c5aef74c660e9bb68766de86104b603a64290006e1ecda9f5e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:03:31.101811 kubelet[2777]: E0707 06:03:31.101332 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:03:31.101962 containerd[1584]: time="2025-07-07T06:03:31.101615789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mgc94,Uid:43892931-e098-4310-a41c-4bce294d590b,Namespace:kube-system,Attempt:0,}" Jul 7 06:03:31.138839 containerd[1584]: time="2025-07-07T06:03:31.136135619Z" level=info msg="Container 74e73032d5e92579119eab4a262ccf8183b724836bd85be9c9d849f0e1fd2afb: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:03:31.391070 containerd[1584]: time="2025-07-07T06:03:31.390930829Z" level=info msg="CreateContainer within sandbox 
\"14896a5a6e4ff3c5aef74c660e9bb68766de86104b603a64290006e1ecda9f5e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"74e73032d5e92579119eab4a262ccf8183b724836bd85be9c9d849f0e1fd2afb\"" Jul 7 06:03:31.391565 containerd[1584]: time="2025-07-07T06:03:31.391511277Z" level=info msg="StartContainer for \"74e73032d5e92579119eab4a262ccf8183b724836bd85be9c9d849f0e1fd2afb\"" Jul 7 06:03:31.396378 containerd[1584]: time="2025-07-07T06:03:31.396344056Z" level=info msg="connecting to shim 74e73032d5e92579119eab4a262ccf8183b724836bd85be9c9d849f0e1fd2afb" address="unix:///run/containerd/s/d956b0f50c33b0ee1631407241365b7b055001e4e74b2cf1422572e96f2cd644" protocol=ttrpc version=3 Jul 7 06:03:31.434260 containerd[1584]: time="2025-07-07T06:03:31.434192748Z" level=error msg="Failed to destroy network for sandbox \"959e83c2a2bb3deb67706acb80e97a00372a30975bcb16575f4abb8fc3019a20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:03:31.438462 containerd[1584]: time="2025-07-07T06:03:31.436906525Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mgc94,Uid:43892931-e098-4310-a41c-4bce294d590b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"959e83c2a2bb3deb67706acb80e97a00372a30975bcb16575f4abb8fc3019a20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:03:31.438611 kubelet[2777]: E0707 06:03:31.437222 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959e83c2a2bb3deb67706acb80e97a00372a30975bcb16575f4abb8fc3019a20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:03:31.438611 kubelet[2777]: E0707 06:03:31.437318 2777 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959e83c2a2bb3deb67706acb80e97a00372a30975bcb16575f4abb8fc3019a20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mgc94" Jul 7 06:03:31.438611 kubelet[2777]: E0707 06:03:31.437362 2777 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959e83c2a2bb3deb67706acb80e97a00372a30975bcb16575f4abb8fc3019a20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mgc94" Jul 7 06:03:31.438745 kubelet[2777]: E0707 06:03:31.437425 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mgc94_kube-system(43892931-e098-4310-a41c-4bce294d590b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mgc94_kube-system(43892931-e098-4310-a41c-4bce294d590b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"959e83c2a2bb3deb67706acb80e97a00372a30975bcb16575f4abb8fc3019a20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mgc94" podUID="43892931-e098-4310-a41c-4bce294d590b" Jul 7 06:03:31.499087 systemd[1]: Started 
cri-containerd-74e73032d5e92579119eab4a262ccf8183b724836bd85be9c9d849f0e1fd2afb.scope - libcontainer container 74e73032d5e92579119eab4a262ccf8183b724836bd85be9c9d849f0e1fd2afb. Jul 7 06:03:31.563503 containerd[1584]: time="2025-07-07T06:03:31.563418414Z" level=info msg="StartContainer for \"74e73032d5e92579119eab4a262ccf8183b724836bd85be9c9d849f0e1fd2afb\" returns successfully" Jul 7 06:03:31.687332 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 06:03:31.688574 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 7 06:03:31.859843 kubelet[2777]: I0707 06:03:31.859750 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vxh67" podStartSLOduration=2.04828524 podStartE2EDuration="28.859729418s" podCreationTimestamp="2025-07-07 06:03:03 +0000 UTC" firstStartedPulling="2025-07-07 06:03:03.969570812 +0000 UTC m=+18.997421884" lastFinishedPulling="2025-07-07 06:03:30.78101499 +0000 UTC m=+45.808866062" observedRunningTime="2025-07-07 06:03:31.85852549 +0000 UTC m=+46.886376562" watchObservedRunningTime="2025-07-07 06:03:31.859729418 +0000 UTC m=+46.887580490" Jul 7 06:03:31.928454 containerd[1584]: time="2025-07-07T06:03:31.928371833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74e73032d5e92579119eab4a262ccf8183b724836bd85be9c9d849f0e1fd2afb\" id:\"692c2a5f66c6ef4d5a789b959504f0c94e094c4fcc0049f51689169521e86756\" pid:3991 exit_status:1 exited_at:{seconds:1751868211 nanos:927911867}" Jul 7 06:03:32.098372 kubelet[2777]: E0707 06:03:32.097574 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:03:32.098524 containerd[1584]: time="2025-07-07T06:03:32.098427908Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-5tvbp,Uid:ef322faf-a52c-4115-8aa2-191cd5a2ce8f,Namespace:kube-system,Attempt:0,}" Jul 7 06:03:32.098974 containerd[1584]: time="2025-07-07T06:03:32.098872644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrhmt,Uid:dde14f30-8111-4778-8695-ad893871cc92,Namespace:calico-system,Attempt:0,}" Jul 7 06:03:32.106341 kubelet[2777]: I0707 06:03:32.106240 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1598648b-004b-4f03-89d1-460edb2e0abb-whisker-backend-key-pair\") pod \"1598648b-004b-4f03-89d1-460edb2e0abb\" (UID: \"1598648b-004b-4f03-89d1-460edb2e0abb\") " Jul 7 06:03:32.106341 kubelet[2777]: I0707 06:03:32.106317 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1598648b-004b-4f03-89d1-460edb2e0abb-whisker-ca-bundle\") pod \"1598648b-004b-4f03-89d1-460edb2e0abb\" (UID: \"1598648b-004b-4f03-89d1-460edb2e0abb\") " Jul 7 06:03:32.106764 kubelet[2777]: I0707 06:03:32.106358 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvtpm\" (UniqueName: \"kubernetes.io/projected/1598648b-004b-4f03-89d1-460edb2e0abb-kube-api-access-vvtpm\") pod \"1598648b-004b-4f03-89d1-460edb2e0abb\" (UID: \"1598648b-004b-4f03-89d1-460edb2e0abb\") " Jul 7 06:03:32.109977 kubelet[2777]: I0707 06:03:32.109910 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1598648b-004b-4f03-89d1-460edb2e0abb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1598648b-004b-4f03-89d1-460edb2e0abb" (UID: "1598648b-004b-4f03-89d1-460edb2e0abb"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 06:03:32.115708 kubelet[2777]: I0707 06:03:32.115635 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1598648b-004b-4f03-89d1-460edb2e0abb-kube-api-access-vvtpm" (OuterVolumeSpecName: "kube-api-access-vvtpm") pod "1598648b-004b-4f03-89d1-460edb2e0abb" (UID: "1598648b-004b-4f03-89d1-460edb2e0abb"). InnerVolumeSpecName "kube-api-access-vvtpm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:03:32.116818 kubelet[2777]: I0707 06:03:32.115913 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1598648b-004b-4f03-89d1-460edb2e0abb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1598648b-004b-4f03-89d1-460edb2e0abb" (UID: "1598648b-004b-4f03-89d1-460edb2e0abb"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 06:03:32.124739 systemd[1]: var-lib-kubelet-pods-1598648b\x2d004b\x2d4f03\x2d89d1\x2d460edb2e0abb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvvtpm.mount: Deactivated successfully. Jul 7 06:03:32.124913 systemd[1]: var-lib-kubelet-pods-1598648b\x2d004b\x2d4f03\x2d89d1\x2d460edb2e0abb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 7 06:03:32.207207 kubelet[2777]: I0707 06:03:32.207144 2777 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1598648b-004b-4f03-89d1-460edb2e0abb-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 7 06:03:32.207207 kubelet[2777]: I0707 06:03:32.207197 2777 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vvtpm\" (UniqueName: \"kubernetes.io/projected/1598648b-004b-4f03-89d1-460edb2e0abb-kube-api-access-vvtpm\") on node \"localhost\" DevicePath \"\"" Jul 7 06:03:32.207207 kubelet[2777]: I0707 06:03:32.207212 2777 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1598648b-004b-4f03-89d1-460edb2e0abb-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 7 06:03:32.367162 systemd-networkd[1503]: calia3ddc9234f9: Link UP Jul 7 06:03:32.367948 systemd-networkd[1503]: calia3ddc9234f9: Gained carrier Jul 7 06:03:32.385849 containerd[1584]: 2025-07-07 06:03:32.149 [INFO][4028] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:03:32.385849 containerd[1584]: 2025-07-07 06:03:32.173 [INFO][4028] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hrhmt-eth0 csi-node-driver- calico-system dde14f30-8111-4778-8695-ad893871cc92 720 0 2025-07-07 06:03:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hrhmt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia3ddc9234f9 [] [] }} ContainerID="a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" 
Namespace="calico-system" Pod="csi-node-driver-hrhmt" WorkloadEndpoint="localhost-k8s-csi--node--driver--hrhmt-" Jul 7 06:03:32.385849 containerd[1584]: 2025-07-07 06:03:32.173 [INFO][4028] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" Namespace="calico-system" Pod="csi-node-driver-hrhmt" WorkloadEndpoint="localhost-k8s-csi--node--driver--hrhmt-eth0" Jul 7 06:03:32.385849 containerd[1584]: 2025-07-07 06:03:32.291 [INFO][4047] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" HandleID="k8s-pod-network.a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" Workload="localhost-k8s-csi--node--driver--hrhmt-eth0" Jul 7 06:03:32.386161 containerd[1584]: 2025-07-07 06:03:32.293 [INFO][4047] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" HandleID="k8s-pod-network.a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" Workload="localhost-k8s-csi--node--driver--hrhmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b5630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hrhmt", "timestamp":"2025-07-07 06:03:32.291366363 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:03:32.386161 containerd[1584]: 2025-07-07 06:03:32.293 [INFO][4047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:03:32.386161 containerd[1584]: 2025-07-07 06:03:32.293 [INFO][4047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:03:32.386161 containerd[1584]: 2025-07-07 06:03:32.293 [INFO][4047] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:03:32.386161 containerd[1584]: 2025-07-07 06:03:32.305 [INFO][4047] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" host="localhost" Jul 7 06:03:32.386161 containerd[1584]: 2025-07-07 06:03:32.318 [INFO][4047] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:03:32.386161 containerd[1584]: 2025-07-07 06:03:32.327 [INFO][4047] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:03:32.386161 containerd[1584]: 2025-07-07 06:03:32.331 [INFO][4047] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:03:32.386161 containerd[1584]: 2025-07-07 06:03:32.335 [INFO][4047] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:03:32.386161 containerd[1584]: 2025-07-07 06:03:32.335 [INFO][4047] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" host="localhost" Jul 7 06:03:32.386487 containerd[1584]: 2025-07-07 06:03:32.337 [INFO][4047] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06 Jul 7 06:03:32.386487 containerd[1584]: 2025-07-07 06:03:32.343 [INFO][4047] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" host="localhost" Jul 7 06:03:32.386487 containerd[1584]: 2025-07-07 06:03:32.351 [INFO][4047] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" host="localhost" Jul 7 06:03:32.386487 containerd[1584]: 2025-07-07 06:03:32.351 [INFO][4047] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" host="localhost" Jul 7 06:03:32.386487 containerd[1584]: 2025-07-07 06:03:32.351 [INFO][4047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:03:32.386487 containerd[1584]: 2025-07-07 06:03:32.351 [INFO][4047] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" HandleID="k8s-pod-network.a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" Workload="localhost-k8s-csi--node--driver--hrhmt-eth0" Jul 7 06:03:32.386659 containerd[1584]: 2025-07-07 06:03:32.355 [INFO][4028] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" Namespace="calico-system" Pod="csi-node-driver-hrhmt" WorkloadEndpoint="localhost-k8s-csi--node--driver--hrhmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hrhmt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dde14f30-8111-4778-8695-ad893871cc92", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hrhmt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia3ddc9234f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:03:32.386659 containerd[1584]: 2025-07-07 06:03:32.355 [INFO][4028] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" Namespace="calico-system" Pod="csi-node-driver-hrhmt" WorkloadEndpoint="localhost-k8s-csi--node--driver--hrhmt-eth0" Jul 7 06:03:32.386773 containerd[1584]: 2025-07-07 06:03:32.355 [INFO][4028] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3ddc9234f9 ContainerID="a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" Namespace="calico-system" Pod="csi-node-driver-hrhmt" WorkloadEndpoint="localhost-k8s-csi--node--driver--hrhmt-eth0" Jul 7 06:03:32.386773 containerd[1584]: 2025-07-07 06:03:32.368 [INFO][4028] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" Namespace="calico-system" Pod="csi-node-driver-hrhmt" WorkloadEndpoint="localhost-k8s-csi--node--driver--hrhmt-eth0" Jul 7 06:03:32.386861 containerd[1584]: 2025-07-07 06:03:32.368 [INFO][4028] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" 
Namespace="calico-system" Pod="csi-node-driver-hrhmt" WorkloadEndpoint="localhost-k8s-csi--node--driver--hrhmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hrhmt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dde14f30-8111-4778-8695-ad893871cc92", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06", Pod:"csi-node-driver-hrhmt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia3ddc9234f9", MAC:"56:b4:58:fb:41:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:03:32.386949 containerd[1584]: 2025-07-07 06:03:32.380 [INFO][4028] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" Namespace="calico-system" Pod="csi-node-driver-hrhmt" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--hrhmt-eth0" Jul 7 06:03:32.450644 systemd-networkd[1503]: calibf6c2048b30: Link UP Jul 7 06:03:32.452982 systemd-networkd[1503]: calibf6c2048b30: Gained carrier Jul 7 06:03:32.472240 containerd[1584]: 2025-07-07 06:03:32.145 [INFO][4016] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:03:32.472240 containerd[1584]: 2025-07-07 06:03:32.171 [INFO][4016] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--5tvbp-eth0 coredns-674b8bbfcf- kube-system ef322faf-a52c-4115-8aa2-191cd5a2ce8f 847 0 2025-07-07 06:02:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-5tvbp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibf6c2048b30 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" Namespace="kube-system" Pod="coredns-674b8bbfcf-5tvbp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5tvbp-" Jul 7 06:03:32.472240 containerd[1584]: 2025-07-07 06:03:32.173 [INFO][4016] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" Namespace="kube-system" Pod="coredns-674b8bbfcf-5tvbp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5tvbp-eth0" Jul 7 06:03:32.472240 containerd[1584]: 2025-07-07 06:03:32.291 [INFO][4045] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" HandleID="k8s-pod-network.c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" Workload="localhost-k8s-coredns--674b8bbfcf--5tvbp-eth0" Jul 7 06:03:32.472579 containerd[1584]: 
2025-07-07 06:03:32.292 [INFO][4045] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" HandleID="k8s-pod-network.c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" Workload="localhost-k8s-coredns--674b8bbfcf--5tvbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5dc0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-5tvbp", "timestamp":"2025-07-07 06:03:32.291965066 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:03:32.472579 containerd[1584]: 2025-07-07 06:03:32.292 [INFO][4045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:03:32.472579 containerd[1584]: 2025-07-07 06:03:32.351 [INFO][4045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:03:32.472579 containerd[1584]: 2025-07-07 06:03:32.351 [INFO][4045] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:03:32.472579 containerd[1584]: 2025-07-07 06:03:32.406 [INFO][4045] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" host="localhost" Jul 7 06:03:32.472579 containerd[1584]: 2025-07-07 06:03:32.419 [INFO][4045] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:03:32.472579 containerd[1584]: 2025-07-07 06:03:32.426 [INFO][4045] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:03:32.472579 containerd[1584]: 2025-07-07 06:03:32.428 [INFO][4045] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:03:32.472579 containerd[1584]: 2025-07-07 06:03:32.431 [INFO][4045] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:03:32.472579 containerd[1584]: 2025-07-07 06:03:32.431 [INFO][4045] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" host="localhost" Jul 7 06:03:32.472890 containerd[1584]: 2025-07-07 06:03:32.432 [INFO][4045] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d Jul 7 06:03:32.472890 containerd[1584]: 2025-07-07 06:03:32.437 [INFO][4045] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" host="localhost" Jul 7 06:03:32.472890 containerd[1584]: 2025-07-07 06:03:32.443 [INFO][4045] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" host="localhost" Jul 7 06:03:32.472890 containerd[1584]: 2025-07-07 06:03:32.443 [INFO][4045] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" host="localhost" Jul 7 06:03:32.472890 containerd[1584]: 2025-07-07 06:03:32.443 [INFO][4045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:03:32.472890 containerd[1584]: 2025-07-07 06:03:32.443 [INFO][4045] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" HandleID="k8s-pod-network.c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" Workload="localhost-k8s-coredns--674b8bbfcf--5tvbp-eth0" Jul 7 06:03:32.473478 containerd[1584]: 2025-07-07 06:03:32.447 [INFO][4016] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" Namespace="kube-system" Pod="coredns-674b8bbfcf-5tvbp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5tvbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--5tvbp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef322faf-a52c-4115-8aa2-191cd5a2ce8f", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-5tvbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf6c2048b30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:03:32.473559 containerd[1584]: 2025-07-07 06:03:32.447 [INFO][4016] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" Namespace="kube-system" Pod="coredns-674b8bbfcf-5tvbp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5tvbp-eth0" Jul 7 06:03:32.473559 containerd[1584]: 2025-07-07 06:03:32.447 [INFO][4016] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf6c2048b30 ContainerID="c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" Namespace="kube-system" Pod="coredns-674b8bbfcf-5tvbp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5tvbp-eth0" Jul 7 06:03:32.473559 containerd[1584]: 2025-07-07 06:03:32.451 [INFO][4016] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" Namespace="kube-system" Pod="coredns-674b8bbfcf-5tvbp" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5tvbp-eth0" Jul 7 06:03:32.473644 containerd[1584]: 2025-07-07 06:03:32.454 [INFO][4016] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" Namespace="kube-system" Pod="coredns-674b8bbfcf-5tvbp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5tvbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--5tvbp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef322faf-a52c-4115-8aa2-191cd5a2ce8f", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d", Pod:"coredns-674b8bbfcf-5tvbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf6c2048b30", MAC:"1e:ac:00:7b:50:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:03:32.473644 containerd[1584]: 2025-07-07 06:03:32.465 [INFO][4016] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" Namespace="kube-system" Pod="coredns-674b8bbfcf-5tvbp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5tvbp-eth0" Jul 7 06:03:32.489733 containerd[1584]: time="2025-07-07T06:03:32.489672641Z" level=info msg="connecting to shim a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06" address="unix:///run/containerd/s/8d49ec1dffb414a7b0d61e91b4e9e23ff7ed6ff8127dae1615d914b6610095ef" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:03:32.512005 containerd[1584]: time="2025-07-07T06:03:32.511956190Z" level=info msg="connecting to shim c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d" address="unix:///run/containerd/s/ed56765f67d320a8f403ff961dd43f209811a4efdfee683cdfbf7ccfdebc852f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:03:32.519078 systemd[1]: Started cri-containerd-a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06.scope - libcontainer container a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06. Jul 7 06:03:32.539810 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:03:32.555476 systemd[1]: Started cri-containerd-c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d.scope - libcontainer container c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d. 
Jul 7 06:03:32.562856 containerd[1584]: time="2025-07-07T06:03:32.562600096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrhmt,Uid:dde14f30-8111-4778-8695-ad893871cc92,Namespace:calico-system,Attempt:0,} returns sandbox id \"a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06\"" Jul 7 06:03:32.566873 containerd[1584]: time="2025-07-07T06:03:32.566733021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 06:03:32.578008 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:03:32.622047 containerd[1584]: time="2025-07-07T06:03:32.621840228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5tvbp,Uid:ef322faf-a52c-4115-8aa2-191cd5a2ce8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d\"" Jul 7 06:03:32.624612 kubelet[2777]: E0707 06:03:32.622996 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:03:32.631190 containerd[1584]: time="2025-07-07T06:03:32.631130139Z" level=info msg="CreateContainer within sandbox \"c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:03:32.652398 containerd[1584]: time="2025-07-07T06:03:32.652320324Z" level=info msg="Container 388fb911a2feae7196979a93ce569f9826f5ae238fb9f131e2009eeb7a056ae4: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:03:32.671315 containerd[1584]: time="2025-07-07T06:03:32.671249768Z" level=info msg="CreateContainer within sandbox \"c1c56e78c1c8e1eb3fcfecc43e9f8d49647d84fe051806d02a2ffc054773da0d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"388fb911a2feae7196979a93ce569f9826f5ae238fb9f131e2009eeb7a056ae4\"" Jul 7 06:03:32.672388 
containerd[1584]: time="2025-07-07T06:03:32.672326361Z" level=info msg="StartContainer for \"388fb911a2feae7196979a93ce569f9826f5ae238fb9f131e2009eeb7a056ae4\"" Jul 7 06:03:32.674679 containerd[1584]: time="2025-07-07T06:03:32.674625135Z" level=info msg="connecting to shim 388fb911a2feae7196979a93ce569f9826f5ae238fb9f131e2009eeb7a056ae4" address="unix:///run/containerd/s/ed56765f67d320a8f403ff961dd43f209811a4efdfee683cdfbf7ccfdebc852f" protocol=ttrpc version=3 Jul 7 06:03:32.700468 systemd[1]: Started cri-containerd-388fb911a2feae7196979a93ce569f9826f5ae238fb9f131e2009eeb7a056ae4.scope - libcontainer container 388fb911a2feae7196979a93ce569f9826f5ae238fb9f131e2009eeb7a056ae4. Jul 7 06:03:32.738211 systemd[1]: Removed slice kubepods-besteffort-pod1598648b_004b_4f03_89d1_460edb2e0abb.slice - libcontainer container kubepods-besteffort-pod1598648b_004b_4f03_89d1_460edb2e0abb.slice. Jul 7 06:03:32.782116 containerd[1584]: time="2025-07-07T06:03:32.782062203Z" level=info msg="StartContainer for \"388fb911a2feae7196979a93ce569f9826f5ae238fb9f131e2009eeb7a056ae4\" returns successfully" Jul 7 06:03:32.871893 containerd[1584]: time="2025-07-07T06:03:32.871755780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74e73032d5e92579119eab4a262ccf8183b724836bd85be9c9d849f0e1fd2afb\" id:\"edbbb50ed0cff61c6acc7e6dea8366a4d4bb4f741c886f137f3cfcfebf32cf30\" pid:4215 exit_status:1 exited_at:{seconds:1751868212 nanos:871338576}" Jul 7 06:03:33.231732 kubelet[2777]: I0707 06:03:33.231684 2777 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1598648b-004b-4f03-89d1-460edb2e0abb" path="/var/lib/kubelet/pods/1598648b-004b-4f03-89d1-460edb2e0abb/volumes" Jul 7 06:03:33.237164 systemd[1]: Created slice kubepods-besteffort-podb212c1e1_79a6_428e_b125_17e414ae0481.slice - libcontainer container kubepods-besteffort-podb212c1e1_79a6_428e_b125_17e414ae0481.slice. 
Jul 7 06:03:33.318300 kubelet[2777]: I0707 06:03:33.318201 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b212c1e1-79a6-428e-b125-17e414ae0481-whisker-backend-key-pair\") pod \"whisker-7cdbf87994-t4x5q\" (UID: \"b212c1e1-79a6-428e-b125-17e414ae0481\") " pod="calico-system/whisker-7cdbf87994-t4x5q" Jul 7 06:03:33.318300 kubelet[2777]: I0707 06:03:33.318277 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v8j9\" (UniqueName: \"kubernetes.io/projected/b212c1e1-79a6-428e-b125-17e414ae0481-kube-api-access-9v8j9\") pod \"whisker-7cdbf87994-t4x5q\" (UID: \"b212c1e1-79a6-428e-b125-17e414ae0481\") " pod="calico-system/whisker-7cdbf87994-t4x5q" Jul 7 06:03:33.318300 kubelet[2777]: I0707 06:03:33.318313 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b212c1e1-79a6-428e-b125-17e414ae0481-whisker-ca-bundle\") pod \"whisker-7cdbf87994-t4x5q\" (UID: \"b212c1e1-79a6-428e-b125-17e414ae0481\") " pod="calico-system/whisker-7cdbf87994-t4x5q" Jul 7 06:03:33.618033 systemd-networkd[1503]: calia3ddc9234f9: Gained IPv6LL Jul 7 06:03:33.674445 containerd[1584]: time="2025-07-07T06:03:33.672295576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cdbf87994-t4x5q,Uid:b212c1e1-79a6-428e-b125-17e414ae0481,Namespace:calico-system,Attempt:0,}" Jul 7 06:03:33.749102 kubelet[2777]: E0707 06:03:33.749040 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:03:33.781379 kubelet[2777]: I0707 06:03:33.781135 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5tvbp" podStartSLOduration=43.781095991 
podStartE2EDuration="43.781095991s" podCreationTimestamp="2025-07-07 06:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:03:33.778539333 +0000 UTC m=+48.806390405" watchObservedRunningTime="2025-07-07 06:03:33.781095991 +0000 UTC m=+48.808947063" Jul 7 06:03:33.933130 containerd[1584]: time="2025-07-07T06:03:33.933062510Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74e73032d5e92579119eab4a262ccf8183b724836bd85be9c9d849f0e1fd2afb\" id:\"b612c591432c859994f2e2c8b2d0474d7060211ec23db26975e78d12b13b416f\" pid:4379 exit_status:1 exited_at:{seconds:1751868213 nanos:932599129}" Jul 7 06:03:33.969486 systemd-networkd[1503]: calibab6804b7f8: Link UP Jul 7 06:03:33.969742 systemd-networkd[1503]: calibab6804b7f8: Gained carrier Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.849 [INFO][4387] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7cdbf87994--t4x5q-eth0 whisker-7cdbf87994- calico-system b212c1e1-79a6-428e-b125-17e414ae0481 958 0 2025-07-07 06:03:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7cdbf87994 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7cdbf87994-t4x5q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibab6804b7f8 [] [] }} ContainerID="8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" Namespace="calico-system" Pod="whisker-7cdbf87994-t4x5q" WorkloadEndpoint="localhost-k8s-whisker--7cdbf87994--t4x5q-" Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.852 [INFO][4387] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" Namespace="calico-system" 
Pod="whisker-7cdbf87994-t4x5q" WorkloadEndpoint="localhost-k8s-whisker--7cdbf87994--t4x5q-eth0" Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.916 [INFO][4408] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" HandleID="k8s-pod-network.8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" Workload="localhost-k8s-whisker--7cdbf87994--t4x5q-eth0" Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.916 [INFO][4408] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" HandleID="k8s-pod-network.8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" Workload="localhost-k8s-whisker--7cdbf87994--t4x5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000484860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7cdbf87994-t4x5q", "timestamp":"2025-07-07 06:03:33.916345377 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.916 [INFO][4408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.916 [INFO][4408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.916 [INFO][4408] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.923 [INFO][4408] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" host="localhost" Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.930 [INFO][4408] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.941 [INFO][4408] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.942 [INFO][4408] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.945 [INFO][4408] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.945 [INFO][4408] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" host="localhost" Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.947 [INFO][4408] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593 Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.952 [INFO][4408] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" host="localhost" Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.958 [INFO][4408] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" host="localhost" Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.958 [INFO][4408] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" host="localhost" Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.958 [INFO][4408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:03:33.987583 containerd[1584]: 2025-07-07 06:03:33.958 [INFO][4408] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" HandleID="k8s-pod-network.8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" Workload="localhost-k8s-whisker--7cdbf87994--t4x5q-eth0" Jul 7 06:03:33.988432 containerd[1584]: 2025-07-07 06:03:33.964 [INFO][4387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" Namespace="calico-system" Pod="whisker-7cdbf87994-t4x5q" WorkloadEndpoint="localhost-k8s-whisker--7cdbf87994--t4x5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7cdbf87994--t4x5q-eth0", GenerateName:"whisker-7cdbf87994-", Namespace:"calico-system", SelfLink:"", UID:"b212c1e1-79a6-428e-b125-17e414ae0481", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cdbf87994", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7cdbf87994-t4x5q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibab6804b7f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:03:33.988432 containerd[1584]: 2025-07-07 06:03:33.965 [INFO][4387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" Namespace="calico-system" Pod="whisker-7cdbf87994-t4x5q" WorkloadEndpoint="localhost-k8s-whisker--7cdbf87994--t4x5q-eth0" Jul 7 06:03:33.988432 containerd[1584]: 2025-07-07 06:03:33.965 [INFO][4387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibab6804b7f8 ContainerID="8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" Namespace="calico-system" Pod="whisker-7cdbf87994-t4x5q" WorkloadEndpoint="localhost-k8s-whisker--7cdbf87994--t4x5q-eth0" Jul 7 06:03:33.988432 containerd[1584]: 2025-07-07 06:03:33.967 [INFO][4387] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" Namespace="calico-system" Pod="whisker-7cdbf87994-t4x5q" WorkloadEndpoint="localhost-k8s-whisker--7cdbf87994--t4x5q-eth0" Jul 7 06:03:33.988432 containerd[1584]: 2025-07-07 06:03:33.967 [INFO][4387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" Namespace="calico-system" Pod="whisker-7cdbf87994-t4x5q" 
WorkloadEndpoint="localhost-k8s-whisker--7cdbf87994--t4x5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7cdbf87994--t4x5q-eth0", GenerateName:"whisker-7cdbf87994-", Namespace:"calico-system", SelfLink:"", UID:"b212c1e1-79a6-428e-b125-17e414ae0481", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cdbf87994", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593", Pod:"whisker-7cdbf87994-t4x5q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibab6804b7f8", MAC:"06:0b:b6:85:e5:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:03:33.988432 containerd[1584]: 2025-07-07 06:03:33.982 [INFO][4387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" Namespace="calico-system" Pod="whisker-7cdbf87994-t4x5q" WorkloadEndpoint="localhost-k8s-whisker--7cdbf87994--t4x5q-eth0" Jul 7 06:03:34.031162 containerd[1584]: time="2025-07-07T06:03:34.031090616Z" level=info msg="connecting to shim 
8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593" address="unix:///run/containerd/s/65ea97dc0c5a46ae68c659cb438db177bfe7a3e5dc98580257367ffd7cf7a917" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:03:34.060053 systemd-networkd[1503]: vxlan.calico: Link UP Jul 7 06:03:34.060067 systemd-networkd[1503]: vxlan.calico: Gained carrier Jul 7 06:03:34.067053 systemd[1]: Started cri-containerd-8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593.scope - libcontainer container 8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593. Jul 7 06:03:34.091145 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:03:34.128478 containerd[1584]: time="2025-07-07T06:03:34.128422433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cdbf87994-t4x5q,Uid:b212c1e1-79a6-428e-b125-17e414ae0481,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593\"" Jul 7 06:03:34.386057 systemd-networkd[1503]: calibf6c2048b30: Gained IPv6LL Jul 7 06:03:34.750935 kubelet[2777]: E0707 06:03:34.750874 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:03:35.084951 containerd[1584]: time="2025-07-07T06:03:35.084718860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:35.099417 containerd[1584]: time="2025-07-07T06:03:35.099343702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-655467f6dd-ps8wv,Uid:379858ba-c579-4b12-94c9-85bde143d2ef,Namespace:calico-system,Attempt:0,}" Jul 7 06:03:35.124520 containerd[1584]: time="2025-07-07T06:03:35.124395250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes 
read=8759190" Jul 7 06:03:35.323765 containerd[1584]: time="2025-07-07T06:03:35.323675937Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:35.730048 systemd-networkd[1503]: vxlan.calico: Gained IPv6LL Jul 7 06:03:35.757994 containerd[1584]: time="2025-07-07T06:03:35.757945209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:35.758779 containerd[1584]: time="2025-07-07T06:03:35.758747138Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 3.191576825s" Jul 7 06:03:35.758779 containerd[1584]: time="2025-07-07T06:03:35.758771856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 06:03:35.760005 containerd[1584]: time="2025-07-07T06:03:35.759846801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 06:03:35.793990 systemd-networkd[1503]: calibab6804b7f8: Gained IPv6LL Jul 7 06:03:35.884778 kubelet[2777]: E0707 06:03:35.884724 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:03:36.097896 containerd[1584]: time="2025-07-07T06:03:36.097718633Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-768f4c5c69-ps9vd,Uid:c1d8df6e-c02f-4b07-9613-354af2a59f1e,Namespace:calico-system,Attempt:0,}" Jul 7 06:03:36.097896 containerd[1584]: time="2025-07-07T06:03:36.097813415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86dc8db98-4cxtw,Uid:66efdfe5-825f-4812-bb63-e13ff3c25bc0,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:03:36.307221 containerd[1584]: time="2025-07-07T06:03:36.307168054Z" level=info msg="CreateContainer within sandbox \"a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 06:03:36.759525 kubelet[2777]: E0707 06:03:36.759483 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:03:36.775259 systemd-networkd[1503]: cali8ad7ed301db: Link UP Jul 7 06:03:36.776428 systemd-networkd[1503]: cali8ad7ed301db: Gained carrier Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:35.608 [INFO][4553] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--655467f6dd--ps8wv-eth0 calico-kube-controllers-655467f6dd- calico-system 379858ba-c579-4b12-94c9-85bde143d2ef 844 0 2025-07-07 06:03:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:655467f6dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-655467f6dd-ps8wv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8ad7ed301db [] [] }} ContainerID="cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" Namespace="calico-system" Pod="calico-kube-controllers-655467f6dd-ps8wv" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655467f6dd--ps8wv-" Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:35.609 [INFO][4553] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" Namespace="calico-system" Pod="calico-kube-controllers-655467f6dd-ps8wv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655467f6dd--ps8wv-eth0" Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:35.646 [INFO][4568] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" HandleID="k8s-pod-network.cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" Workload="localhost-k8s-calico--kube--controllers--655467f6dd--ps8wv-eth0" Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:35.646 [INFO][4568] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" HandleID="k8s-pod-network.cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" Workload="localhost-k8s-calico--kube--controllers--655467f6dd--ps8wv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-655467f6dd-ps8wv", "timestamp":"2025-07-07 06:03:35.646338785 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:35.646 [INFO][4568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:35.646 [INFO][4568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:35.646 [INFO][4568] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:35.655 [INFO][4568] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" host="localhost" Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:35.661 [INFO][4568] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:35.669 [INFO][4568] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:35.671 [INFO][4568] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:35.673 [INFO][4568] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:35.673 [INFO][4568] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" host="localhost" Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:35.896 [INFO][4568] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:35.975 [INFO][4568] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" host="localhost" Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:36.769 [INFO][4568] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" host="localhost" Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:36.769 [INFO][4568] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" host="localhost" Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:36.769 [INFO][4568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:03:37.169163 containerd[1584]: 2025-07-07 06:03:36.769 [INFO][4568] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" HandleID="k8s-pod-network.cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" Workload="localhost-k8s-calico--kube--controllers--655467f6dd--ps8wv-eth0" Jul 7 06:03:37.170181 containerd[1584]: 2025-07-07 06:03:36.772 [INFO][4553] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" Namespace="calico-system" Pod="calico-kube-controllers-655467f6dd-ps8wv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655467f6dd--ps8wv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--655467f6dd--ps8wv-eth0", GenerateName:"calico-kube-controllers-655467f6dd-", Namespace:"calico-system", SelfLink:"", UID:"379858ba-c579-4b12-94c9-85bde143d2ef", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"655467f6dd", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-655467f6dd-ps8wv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8ad7ed301db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:03:37.170181 containerd[1584]: 2025-07-07 06:03:36.772 [INFO][4553] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" Namespace="calico-system" Pod="calico-kube-controllers-655467f6dd-ps8wv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655467f6dd--ps8wv-eth0" Jul 7 06:03:37.170181 containerd[1584]: 2025-07-07 06:03:36.772 [INFO][4553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ad7ed301db ContainerID="cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" Namespace="calico-system" Pod="calico-kube-controllers-655467f6dd-ps8wv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655467f6dd--ps8wv-eth0" Jul 7 06:03:37.170181 containerd[1584]: 2025-07-07 06:03:36.776 [INFO][4553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" Namespace="calico-system" Pod="calico-kube-controllers-655467f6dd-ps8wv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655467f6dd--ps8wv-eth0" Jul 7 06:03:37.170181 containerd[1584]: 2025-07-07 
06:03:36.777 [INFO][4553] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" Namespace="calico-system" Pod="calico-kube-controllers-655467f6dd-ps8wv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655467f6dd--ps8wv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--655467f6dd--ps8wv-eth0", GenerateName:"calico-kube-controllers-655467f6dd-", Namespace:"calico-system", SelfLink:"", UID:"379858ba-c579-4b12-94c9-85bde143d2ef", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"655467f6dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d", Pod:"calico-kube-controllers-655467f6dd-ps8wv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8ad7ed301db", MAC:"c2:61:3e:9f:ab:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:03:37.170181 containerd[1584]: 2025-07-07 
06:03:37.165 [INFO][4553] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" Namespace="calico-system" Pod="calico-kube-controllers-655467f6dd-ps8wv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655467f6dd--ps8wv-eth0" Jul 7 06:03:37.485933 containerd[1584]: time="2025-07-07T06:03:37.485848040Z" level=info msg="Container eec1703a2df2d9d1c8fd3d1620c70b461265fb4cc19fb876667f370d9e74ff1e: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:03:37.605918 systemd-networkd[1503]: calie09ba3288f1: Link UP Jul 7 06:03:37.608265 systemd-networkd[1503]: calie09ba3288f1: Gained carrier Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.321 [INFO][4590] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86dc8db98--4cxtw-eth0 calico-apiserver-86dc8db98- calico-apiserver 66efdfe5-825f-4812-bb63-e13ff3c25bc0 845 0 2025-07-07 06:02:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86dc8db98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86dc8db98-4cxtw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie09ba3288f1 [] [] }} ContainerID="ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" Namespace="calico-apiserver" Pod="calico-apiserver-86dc8db98-4cxtw" WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--4cxtw-" Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.321 [INFO][4590] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" Namespace="calico-apiserver" Pod="calico-apiserver-86dc8db98-4cxtw" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--4cxtw-eth0" Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.359 [INFO][4604] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" HandleID="k8s-pod-network.ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" Workload="localhost-k8s-calico--apiserver--86dc8db98--4cxtw-eth0" Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.359 [INFO][4604] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" HandleID="k8s-pod-network.ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" Workload="localhost-k8s-calico--apiserver--86dc8db98--4cxtw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003af260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86dc8db98-4cxtw", "timestamp":"2025-07-07 06:03:37.359699482 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.360 [INFO][4604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.360 [INFO][4604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.360 [INFO][4604] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.369 [INFO][4604] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" host="localhost" Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.377 [INFO][4604] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.389 [INFO][4604] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.392 [INFO][4604] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.395 [INFO][4604] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.395 [INFO][4604] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" host="localhost" Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.397 [INFO][4604] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328 Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.472 [INFO][4604] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" host="localhost" Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.565 [INFO][4604] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" host="localhost" Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.565 [INFO][4604] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" host="localhost" Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.565 [INFO][4604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:03:38.044138 containerd[1584]: 2025-07-07 06:03:37.565 [INFO][4604] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" HandleID="k8s-pod-network.ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" Workload="localhost-k8s-calico--apiserver--86dc8db98--4cxtw-eth0" Jul 7 06:03:38.045193 containerd[1584]: 2025-07-07 06:03:37.589 [INFO][4590] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" Namespace="calico-apiserver" Pod="calico-apiserver-86dc8db98-4cxtw" WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--4cxtw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86dc8db98--4cxtw-eth0", GenerateName:"calico-apiserver-86dc8db98-", Namespace:"calico-apiserver", SelfLink:"", UID:"66efdfe5-825f-4812-bb63-e13ff3c25bc0", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86dc8db98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86dc8db98-4cxtw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie09ba3288f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:03:38.045193 containerd[1584]: 2025-07-07 06:03:37.589 [INFO][4590] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" Namespace="calico-apiserver" Pod="calico-apiserver-86dc8db98-4cxtw" WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--4cxtw-eth0" Jul 7 06:03:38.045193 containerd[1584]: 2025-07-07 06:03:37.589 [INFO][4590] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie09ba3288f1 ContainerID="ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" Namespace="calico-apiserver" Pod="calico-apiserver-86dc8db98-4cxtw" WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--4cxtw-eth0" Jul 7 06:03:38.045193 containerd[1584]: 2025-07-07 06:03:37.617 [INFO][4590] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" Namespace="calico-apiserver" Pod="calico-apiserver-86dc8db98-4cxtw" WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--4cxtw-eth0" Jul 7 06:03:38.045193 containerd[1584]: 2025-07-07 06:03:37.623 [INFO][4590] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" Namespace="calico-apiserver" Pod="calico-apiserver-86dc8db98-4cxtw" WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--4cxtw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86dc8db98--4cxtw-eth0", GenerateName:"calico-apiserver-86dc8db98-", Namespace:"calico-apiserver", SelfLink:"", UID:"66efdfe5-825f-4812-bb63-e13ff3c25bc0", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86dc8db98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328", Pod:"calico-apiserver-86dc8db98-4cxtw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie09ba3288f1", MAC:"b6:99:6d:fd:38:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:03:38.045193 containerd[1584]: 2025-07-07 06:03:38.040 [INFO][4590] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" Namespace="calico-apiserver" Pod="calico-apiserver-86dc8db98-4cxtw" WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--4cxtw-eth0" Jul 7 06:03:38.062145 containerd[1584]: time="2025-07-07T06:03:38.061933752Z" level=info msg="CreateContainer within sandbox \"a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"eec1703a2df2d9d1c8fd3d1620c70b461265fb4cc19fb876667f370d9e74ff1e\"" Jul 7 06:03:38.063182 containerd[1584]: time="2025-07-07T06:03:38.063126038Z" level=info msg="StartContainer for \"eec1703a2df2d9d1c8fd3d1620c70b461265fb4cc19fb876667f370d9e74ff1e\"" Jul 7 06:03:38.065328 containerd[1584]: time="2025-07-07T06:03:38.065278615Z" level=info msg="connecting to shim eec1703a2df2d9d1c8fd3d1620c70b461265fb4cc19fb876667f370d9e74ff1e" address="unix:///run/containerd/s/8d49ec1dffb414a7b0d61e91b4e9e23ff7ed6ff8127dae1615d914b6610095ef" protocol=ttrpc version=3 Jul 7 06:03:38.100122 systemd[1]: Started cri-containerd-eec1703a2df2d9d1c8fd3d1620c70b461265fb4cc19fb876667f370d9e74ff1e.scope - libcontainer container eec1703a2df2d9d1c8fd3d1620c70b461265fb4cc19fb876667f370d9e74ff1e. 
Jul 7 06:03:38.347481 systemd-networkd[1503]: cali933a97f0fea: Link UP Jul 7 06:03:38.348946 systemd-networkd[1503]: cali933a97f0fea: Gained carrier Jul 7 06:03:38.386078 containerd[1584]: time="2025-07-07T06:03:38.386002730Z" level=info msg="StartContainer for \"eec1703a2df2d9d1c8fd3d1620c70b461265fb4cc19fb876667f370d9e74ff1e\" returns successfully" Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:37.478 [INFO][4612] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--ps9vd-eth0 goldmane-768f4c5c69- calico-system c1d8df6e-c02f-4b07-9613-354af2a59f1e 848 0 2025-07-07 06:03:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-ps9vd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali933a97f0fea [] [] }} ContainerID="268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" Namespace="calico-system" Pod="goldmane-768f4c5c69-ps9vd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ps9vd-" Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:37.478 [INFO][4612] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" Namespace="calico-system" Pod="goldmane-768f4c5c69-ps9vd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ps9vd-eth0" Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:37.651 [INFO][4626] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" HandleID="k8s-pod-network.268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" Workload="localhost-k8s-goldmane--768f4c5c69--ps9vd-eth0" Jul 7 06:03:38.469830 containerd[1584]: 
2025-07-07 06:03:37.651 [INFO][4626] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" HandleID="k8s-pod-network.268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" Workload="localhost-k8s-goldmane--768f4c5c69--ps9vd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a56f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-ps9vd", "timestamp":"2025-07-07 06:03:37.651065297 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:37.651 [INFO][4626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:37.651 [INFO][4626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:37.651 [INFO][4626] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:38.041 [INFO][4626] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" host="localhost" Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:38.055 [INFO][4626] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:38.064 [INFO][4626] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:38.085 [INFO][4626] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:38.090 [INFO][4626] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:38.091 [INFO][4626] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" host="localhost" Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:38.093 [INFO][4626] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:38.248 [INFO][4626] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" host="localhost" Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:38.339 [INFO][4626] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" host="localhost" Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:38.340 [INFO][4626] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" host="localhost" Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:38.340 [INFO][4626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:03:38.469830 containerd[1584]: 2025-07-07 06:03:38.340 [INFO][4626] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" HandleID="k8s-pod-network.268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" Workload="localhost-k8s-goldmane--768f4c5c69--ps9vd-eth0" Jul 7 06:03:38.470596 containerd[1584]: 2025-07-07 06:03:38.343 [INFO][4612] cni-plugin/k8s.go 418: Populated endpoint ContainerID="268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" Namespace="calico-system" Pod="goldmane-768f4c5c69-ps9vd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ps9vd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--ps9vd-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"c1d8df6e-c02f-4b07-9613-354af2a59f1e", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-ps9vd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali933a97f0fea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:03:38.470596 containerd[1584]: 2025-07-07 06:03:38.343 [INFO][4612] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" Namespace="calico-system" Pod="goldmane-768f4c5c69-ps9vd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ps9vd-eth0" Jul 7 06:03:38.470596 containerd[1584]: 2025-07-07 06:03:38.343 [INFO][4612] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali933a97f0fea ContainerID="268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" Namespace="calico-system" Pod="goldmane-768f4c5c69-ps9vd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ps9vd-eth0" Jul 7 06:03:38.470596 containerd[1584]: 2025-07-07 06:03:38.348 [INFO][4612] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" Namespace="calico-system" Pod="goldmane-768f4c5c69-ps9vd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ps9vd-eth0" Jul 7 06:03:38.470596 containerd[1584]: 2025-07-07 06:03:38.352 [INFO][4612] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" Namespace="calico-system" Pod="goldmane-768f4c5c69-ps9vd" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ps9vd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--ps9vd-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"c1d8df6e-c02f-4b07-9613-354af2a59f1e", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab", Pod:"goldmane-768f4c5c69-ps9vd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali933a97f0fea", MAC:"fe:db:4d:d8:da:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:03:38.470596 containerd[1584]: 2025-07-07 06:03:38.462 [INFO][4612] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" Namespace="calico-system" Pod="goldmane-768f4c5c69-ps9vd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ps9vd-eth0" Jul 7 06:03:38.802200 systemd-networkd[1503]: cali8ad7ed301db: Gained IPv6LL Jul 7 06:03:39.016252 containerd[1584]: 
time="2025-07-07T06:03:39.015840841Z" level=info msg="connecting to shim cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d" address="unix:///run/containerd/s/2d72053b39272235847e6fa9886bed8b9b4cb1772390e82552f812f566d0cbae" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:03:39.058422 systemd[1]: Started cri-containerd-cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d.scope - libcontainer container cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d. Jul 7 06:03:39.074398 containerd[1584]: time="2025-07-07T06:03:39.074299033Z" level=info msg="connecting to shim 268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab" address="unix:///run/containerd/s/4036d4a62c4f3868ca58e8583cf12eed15c23d2ed257264a7ee545287110c0c3" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:03:39.077985 containerd[1584]: time="2025-07-07T06:03:39.077918628Z" level=info msg="connecting to shim ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328" address="unix:///run/containerd/s/d8cb5b65f74de0b2bda824081fcaae1e0edd1962a974551cc0b1f1dc9a66aaf6" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:03:39.102876 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:03:39.108959 systemd[1]: Started cri-containerd-268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab.scope - libcontainer container 268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab. Jul 7 06:03:39.121183 systemd[1]: Started cri-containerd-ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328.scope - libcontainer container ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328. 
Jul 7 06:03:39.129344 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:03:39.156175 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:03:39.186157 systemd-networkd[1503]: calie09ba3288f1: Gained IPv6LL Jul 7 06:03:39.316393 containerd[1584]: time="2025-07-07T06:03:39.316205827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-655467f6dd-ps8wv,Uid:379858ba-c579-4b12-94c9-85bde143d2ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d\"" Jul 7 06:03:39.384263 containerd[1584]: time="2025-07-07T06:03:39.384189620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-ps9vd,Uid:c1d8df6e-c02f-4b07-9613-354af2a59f1e,Namespace:calico-system,Attempt:0,} returns sandbox id \"268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab\"" Jul 7 06:03:39.549027 containerd[1584]: time="2025-07-07T06:03:39.548968821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86dc8db98-4cxtw,Uid:66efdfe5-825f-4812-bb63-e13ff3c25bc0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328\"" Jul 7 06:03:39.890053 systemd-networkd[1503]: cali933a97f0fea: Gained IPv6LL Jul 7 06:03:42.640085 containerd[1584]: time="2025-07-07T06:03:42.639929034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:42.790594 containerd[1584]: time="2025-07-07T06:03:42.790442145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 06:03:42.851847 containerd[1584]: time="2025-07-07T06:03:42.851764831Z" level=info msg="ImageCreate event 
name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:42.920692 containerd[1584]: time="2025-07-07T06:03:42.920542107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:42.921474 containerd[1584]: time="2025-07-07T06:03:42.921433792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 7.161522518s" Jul 7 06:03:42.921537 containerd[1584]: time="2025-07-07T06:03:42.921475723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 06:03:42.922466 containerd[1584]: time="2025-07-07T06:03:42.922440949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 06:03:43.034035 systemd[1]: Started sshd@7-10.0.0.25:22-10.0.0.1:55194.service - OpenSSH per-connection server daemon (10.0.0.1:55194). 
Jul 7 06:03:43.119781 containerd[1584]: time="2025-07-07T06:03:43.119715584Z" level=info msg="CreateContainer within sandbox \"8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 06:03:43.151822 sshd[4836]: Accepted publickey for core from 10.0.0.1 port 55194 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ Jul 7 06:03:43.159200 sshd-session[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:03:43.168305 systemd-logind[1565]: New session 8 of user core. Jul 7 06:03:43.175410 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 06:03:43.259584 containerd[1584]: time="2025-07-07T06:03:43.258637064Z" level=info msg="Container 7b2c63d98fe44f60fc3da245955fcad2b14477265491ada93ac3b61d5286ceca: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:03:43.459299 containerd[1584]: time="2025-07-07T06:03:43.459117390Z" level=info msg="CreateContainer within sandbox \"8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7b2c63d98fe44f60fc3da245955fcad2b14477265491ada93ac3b61d5286ceca\"" Jul 7 06:03:43.460429 containerd[1584]: time="2025-07-07T06:03:43.460382309Z" level=info msg="StartContainer for \"7b2c63d98fe44f60fc3da245955fcad2b14477265491ada93ac3b61d5286ceca\"" Jul 7 06:03:43.462103 containerd[1584]: time="2025-07-07T06:03:43.462070486Z" level=info msg="connecting to shim 7b2c63d98fe44f60fc3da245955fcad2b14477265491ada93ac3b61d5286ceca" address="unix:///run/containerd/s/65ea97dc0c5a46ae68c659cb438db177bfe7a3e5dc98580257367ffd7cf7a917" protocol=ttrpc version=3 Jul 7 06:03:43.493116 systemd[1]: Started cri-containerd-7b2c63d98fe44f60fc3da245955fcad2b14477265491ada93ac3b61d5286ceca.scope - libcontainer container 7b2c63d98fe44f60fc3da245955fcad2b14477265491ada93ac3b61d5286ceca. 
Jul 7 06:03:43.537853 sshd[4839]: Connection closed by 10.0.0.1 port 55194 Jul 7 06:03:43.537808 sshd-session[4836]: pam_unix(sshd:session): session closed for user core Jul 7 06:03:43.543617 systemd[1]: sshd@7-10.0.0.25:22-10.0.0.1:55194.service: Deactivated successfully. Jul 7 06:03:43.546398 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:03:43.548023 systemd-logind[1565]: Session 8 logged out. Waiting for processes to exit. Jul 7 06:03:43.549522 systemd-logind[1565]: Removed session 8. Jul 7 06:03:43.700104 containerd[1584]: time="2025-07-07T06:03:43.700024981Z" level=info msg="StartContainer for \"7b2c63d98fe44f60fc3da245955fcad2b14477265491ada93ac3b61d5286ceca\" returns successfully" Jul 7 06:03:44.098484 containerd[1584]: time="2025-07-07T06:03:44.098399466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86dc8db98-qlhjq,Uid:e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:03:44.269081 systemd-networkd[1503]: calid758e799227: Link UP Jul 7 06:03:44.269333 systemd-networkd[1503]: calid758e799227: Gained carrier Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.152 [INFO][4891] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86dc8db98--qlhjq-eth0 calico-apiserver-86dc8db98- calico-apiserver e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b 841 0 2025-07-07 06:02:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86dc8db98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86dc8db98-qlhjq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid758e799227 [] [] }} ContainerID="013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" Namespace="calico-apiserver" 
Pod="calico-apiserver-86dc8db98-qlhjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--qlhjq-" Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.152 [INFO][4891] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" Namespace="calico-apiserver" Pod="calico-apiserver-86dc8db98-qlhjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--qlhjq-eth0" Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.195 [INFO][4905] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" HandleID="k8s-pod-network.013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" Workload="localhost-k8s-calico--apiserver--86dc8db98--qlhjq-eth0" Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.195 [INFO][4905] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" HandleID="k8s-pod-network.013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" Workload="localhost-k8s-calico--apiserver--86dc8db98--qlhjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86dc8db98-qlhjq", "timestamp":"2025-07-07 06:03:44.195386396 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.195 [INFO][4905] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.195 [INFO][4905] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.195 [INFO][4905] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.208 [INFO][4905] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" host="localhost" Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.216 [INFO][4905] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.225 [INFO][4905] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.228 [INFO][4905] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.233 [INFO][4905] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.233 [INFO][4905] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" host="localhost" Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.236 [INFO][4905] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0 Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.249 [INFO][4905] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" host="localhost" Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.258 [INFO][4905] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" host="localhost" Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.258 [INFO][4905] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" host="localhost" Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.258 [INFO][4905] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:03:44.290807 containerd[1584]: 2025-07-07 06:03:44.258 [INFO][4905] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" HandleID="k8s-pod-network.013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" Workload="localhost-k8s-calico--apiserver--86dc8db98--qlhjq-eth0" Jul 7 06:03:44.291614 containerd[1584]: 2025-07-07 06:03:44.263 [INFO][4891] cni-plugin/k8s.go 418: Populated endpoint ContainerID="013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" Namespace="calico-apiserver" Pod="calico-apiserver-86dc8db98-qlhjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--qlhjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86dc8db98--qlhjq-eth0", GenerateName:"calico-apiserver-86dc8db98-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86dc8db98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86dc8db98-qlhjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid758e799227", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:03:44.291614 containerd[1584]: 2025-07-07 06:03:44.264 [INFO][4891] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" Namespace="calico-apiserver" Pod="calico-apiserver-86dc8db98-qlhjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--qlhjq-eth0" Jul 7 06:03:44.291614 containerd[1584]: 2025-07-07 06:03:44.264 [INFO][4891] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid758e799227 ContainerID="013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" Namespace="calico-apiserver" Pod="calico-apiserver-86dc8db98-qlhjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--qlhjq-eth0" Jul 7 06:03:44.291614 containerd[1584]: 2025-07-07 06:03:44.269 [INFO][4891] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" Namespace="calico-apiserver" Pod="calico-apiserver-86dc8db98-qlhjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--qlhjq-eth0" Jul 7 06:03:44.291614 containerd[1584]: 2025-07-07 06:03:44.271 [INFO][4891] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" Namespace="calico-apiserver" Pod="calico-apiserver-86dc8db98-qlhjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--qlhjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86dc8db98--qlhjq-eth0", GenerateName:"calico-apiserver-86dc8db98-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86dc8db98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0", Pod:"calico-apiserver-86dc8db98-qlhjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid758e799227", MAC:"de:d2:3c:34:7b:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:03:44.291614 containerd[1584]: 2025-07-07 06:03:44.286 [INFO][4891] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" Namespace="calico-apiserver" Pod="calico-apiserver-86dc8db98-qlhjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--86dc8db98--qlhjq-eth0" Jul 7 06:03:44.535264 containerd[1584]: time="2025-07-07T06:03:44.535172631Z" level=info msg="connecting to shim 013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0" address="unix:///run/containerd/s/41430e76d376c44a0ecd4428efb14aadd1751f79bb5558e9e0892dd5a6599fda" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:03:44.570468 systemd[1]: Started cri-containerd-013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0.scope - libcontainer container 013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0. Jul 7 06:03:44.591833 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:03:44.636343 containerd[1584]: time="2025-07-07T06:03:44.636270688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86dc8db98-qlhjq,Uid:e6339c2e-003c-4ec5-a6c2-fbcf59fbe45b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0\"" Jul 7 06:03:45.307299 containerd[1584]: time="2025-07-07T06:03:45.307209266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:45.324278 containerd[1584]: time="2025-07-07T06:03:45.324105530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 06:03:45.336893 containerd[1584]: time="2025-07-07T06:03:45.336780960Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:45.352485 containerd[1584]: 
time="2025-07-07T06:03:45.352407057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:03:45.353200 containerd[1584]: time="2025-07-07T06:03:45.353157741Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.430685422s" Jul 7 06:03:45.353255 containerd[1584]: time="2025-07-07T06:03:45.353206264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 06:03:45.354142 containerd[1584]: time="2025-07-07T06:03:45.354120109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 06:03:45.434602 containerd[1584]: time="2025-07-07T06:03:45.434544314Z" level=info msg="CreateContainer within sandbox \"a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 06:03:45.544118 containerd[1584]: time="2025-07-07T06:03:45.543237684Z" level=info msg="Container b21cc6fc2b186cf5883f393ed4b3f49b152a624baf52bc17c5d6d74439e02c3a: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:03:45.681979 containerd[1584]: time="2025-07-07T06:03:45.681919765Z" level=info msg="CreateContainer within sandbox \"a342ea3cc97404f529c0fc7fd53deecb9cf66e7633117fd11e19fd0ba94c7b06\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"b21cc6fc2b186cf5883f393ed4b3f49b152a624baf52bc17c5d6d74439e02c3a\""
Jul 7 06:03:45.682579 containerd[1584]: time="2025-07-07T06:03:45.682538377Z" level=info msg="StartContainer for \"b21cc6fc2b186cf5883f393ed4b3f49b152a624baf52bc17c5d6d74439e02c3a\""
Jul 7 06:03:45.684396 containerd[1584]: time="2025-07-07T06:03:45.684355878Z" level=info msg="connecting to shim b21cc6fc2b186cf5883f393ed4b3f49b152a624baf52bc17c5d6d74439e02c3a" address="unix:///run/containerd/s/8d49ec1dffb414a7b0d61e91b4e9e23ff7ed6ff8127dae1615d914b6610095ef" protocol=ttrpc version=3
Jul 7 06:03:45.719096 systemd[1]: Started cri-containerd-b21cc6fc2b186cf5883f393ed4b3f49b152a624baf52bc17c5d6d74439e02c3a.scope - libcontainer container b21cc6fc2b186cf5883f393ed4b3f49b152a624baf52bc17c5d6d74439e02c3a.
Jul 7 06:03:45.842083 systemd-networkd[1503]: calid758e799227: Gained IPv6LL
Jul 7 06:03:46.098330 kubelet[2777]: E0707 06:03:46.098140 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:03:46.098970 containerd[1584]: time="2025-07-07T06:03:46.098670245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mgc94,Uid:43892931-e098-4310-a41c-4bce294d590b,Namespace:kube-system,Attempt:0,}"
Jul 7 06:03:46.335141 containerd[1584]: time="2025-07-07T06:03:46.335093333Z" level=info msg="StartContainer for \"b21cc6fc2b186cf5883f393ed4b3f49b152a624baf52bc17c5d6d74439e02c3a\" returns successfully"
Jul 7 06:03:46.672974 systemd-networkd[1503]: cali1bf749ce2f1: Link UP
Jul 7 06:03:46.673846 systemd-networkd[1503]: cali1bf749ce2f1: Gained carrier
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.584 [INFO][5014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--mgc94-eth0 coredns-674b8bbfcf- kube-system 43892931-e098-4310-a41c-4bce294d590b 846 0 2025-07-07 06:02:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-mgc94 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1bf749ce2f1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" Namespace="kube-system" Pod="coredns-674b8bbfcf-mgc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mgc94-"
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.584 [INFO][5014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" Namespace="kube-system" Pod="coredns-674b8bbfcf-mgc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mgc94-eth0"
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.613 [INFO][5028] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" HandleID="k8s-pod-network.9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" Workload="localhost-k8s-coredns--674b8bbfcf--mgc94-eth0"
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.613 [INFO][5028] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" HandleID="k8s-pod-network.9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" Workload="localhost-k8s-coredns--674b8bbfcf--mgc94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a5dc0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-mgc94", "timestamp":"2025-07-07 06:03:46.61320376 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.613 [INFO][5028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.613 [INFO][5028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.613 [INFO][5028] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.621 [INFO][5028] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" host="localhost"
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.629 [INFO][5028] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.636 [INFO][5028] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.638 [INFO][5028] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.641 [INFO][5028] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.641 [INFO][5028] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" host="localhost"
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.643 [INFO][5028] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.654 [INFO][5028] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" host="localhost"
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.666 [INFO][5028] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" host="localhost"
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.666 [INFO][5028] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" host="localhost"
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.666 [INFO][5028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:03:46.693168 containerd[1584]: 2025-07-07 06:03:46.666 [INFO][5028] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" HandleID="k8s-pod-network.9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" Workload="localhost-k8s-coredns--674b8bbfcf--mgc94-eth0"
Jul 7 06:03:46.694046 containerd[1584]: 2025-07-07 06:03:46.670 [INFO][5014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" Namespace="kube-system" Pod="coredns-674b8bbfcf-mgc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mgc94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mgc94-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"43892931-e098-4310-a41c-4bce294d590b", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 2, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-mgc94", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bf749ce2f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:03:46.694046 containerd[1584]: 2025-07-07 06:03:46.671 [INFO][5014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" Namespace="kube-system" Pod="coredns-674b8bbfcf-mgc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mgc94-eth0"
Jul 7 06:03:46.694046 containerd[1584]: 2025-07-07 06:03:46.671 [INFO][5014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1bf749ce2f1 ContainerID="9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" Namespace="kube-system" Pod="coredns-674b8bbfcf-mgc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mgc94-eth0"
Jul 7 06:03:46.694046 containerd[1584]: 2025-07-07 06:03:46.673 [INFO][5014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" Namespace="kube-system" Pod="coredns-674b8bbfcf-mgc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mgc94-eth0"
Jul 7 06:03:46.694046 containerd[1584]: 2025-07-07 06:03:46.673 [INFO][5014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" Namespace="kube-system" Pod="coredns-674b8bbfcf-mgc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mgc94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mgc94-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"43892931-e098-4310-a41c-4bce294d590b", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 2, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf", Pod:"coredns-674b8bbfcf-mgc94", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bf749ce2f1", MAC:"e6:91:3c:ae:b3:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:03:46.694046 containerd[1584]: 2025-07-07 06:03:46.688 [INFO][5014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" Namespace="kube-system" Pod="coredns-674b8bbfcf-mgc94" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mgc94-eth0"
Jul 7 06:03:46.707623 kubelet[2777]: I0707 06:03:46.707430 2777 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 7 06:03:46.708782 kubelet[2777]: I0707 06:03:46.708743 2777 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 7 06:03:46.736893 containerd[1584]: time="2025-07-07T06:03:46.736820184Z" level=info msg="connecting to shim 9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf" address="unix:///run/containerd/s/c42505630f16587dd2dce8e9ebd656c799c6cfbda3ac8f09ca31901c8586471d" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:03:46.772032 systemd[1]: Started cri-containerd-9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf.scope - libcontainer container 9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf.
Jul 7 06:03:46.791856 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 06:03:46.886015 containerd[1584]: time="2025-07-07T06:03:46.885934576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mgc94,Uid:43892931-e098-4310-a41c-4bce294d590b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf\""
Jul 7 06:03:46.886989 kubelet[2777]: E0707 06:03:46.886959 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:03:46.957435 containerd[1584]: time="2025-07-07T06:03:46.957313150Z" level=info msg="CreateContainer within sandbox \"9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 7 06:03:47.050855 containerd[1584]: time="2025-07-07T06:03:47.050779398Z" level=info msg="Container fc45187be9f267df54eb0ec7533aee6acb93062ddc6ccdd2f0cc59d05bcf95cf: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:03:47.058363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4098616938.mount: Deactivated successfully.
Jul 7 06:03:47.065274 containerd[1584]: time="2025-07-07T06:03:47.065205748Z" level=info msg="CreateContainer within sandbox \"9f0a7efd07f76d067fe1466a916324c9bbb9255ddb432bf1673eaa2c7b760ecf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc45187be9f267df54eb0ec7533aee6acb93062ddc6ccdd2f0cc59d05bcf95cf\""
Jul 7 06:03:47.066095 containerd[1584]: time="2025-07-07T06:03:47.066039399Z" level=info msg="StartContainer for \"fc45187be9f267df54eb0ec7533aee6acb93062ddc6ccdd2f0cc59d05bcf95cf\""
Jul 7 06:03:47.067416 containerd[1584]: time="2025-07-07T06:03:47.067374667Z" level=info msg="connecting to shim fc45187be9f267df54eb0ec7533aee6acb93062ddc6ccdd2f0cc59d05bcf95cf" address="unix:///run/containerd/s/c42505630f16587dd2dce8e9ebd656c799c6cfbda3ac8f09ca31901c8586471d" protocol=ttrpc version=3
Jul 7 06:03:47.093013 systemd[1]: Started cri-containerd-fc45187be9f267df54eb0ec7533aee6acb93062ddc6ccdd2f0cc59d05bcf95cf.scope - libcontainer container fc45187be9f267df54eb0ec7533aee6acb93062ddc6ccdd2f0cc59d05bcf95cf.
Jul 7 06:03:47.131117 containerd[1584]: time="2025-07-07T06:03:47.131048766Z" level=info msg="StartContainer for \"fc45187be9f267df54eb0ec7533aee6acb93062ddc6ccdd2f0cc59d05bcf95cf\" returns successfully"
Jul 7 06:03:47.342551 kubelet[2777]: E0707 06:03:47.342088 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:03:47.394372 kubelet[2777]: I0707 06:03:47.394069 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hrhmt" podStartSLOduration=31.605866967 podStartE2EDuration="44.394047282s" podCreationTimestamp="2025-07-07 06:03:03 +0000 UTC" firstStartedPulling="2025-07-07 06:03:32.565821718 +0000 UTC m=+47.593672790" lastFinishedPulling="2025-07-07 06:03:45.354002043 +0000 UTC m=+60.381853105" observedRunningTime="2025-07-07 06:03:47.393645476 +0000 UTC m=+62.421496548" watchObservedRunningTime="2025-07-07 06:03:47.394047282 +0000 UTC m=+62.421898354"
Jul 7 06:03:47.394372 kubelet[2777]: I0707 06:03:47.394279 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mgc94" podStartSLOduration=57.394273694 podStartE2EDuration="57.394273694s" podCreationTimestamp="2025-07-07 06:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:03:47.367311363 +0000 UTC m=+62.395162435" watchObservedRunningTime="2025-07-07 06:03:47.394273694 +0000 UTC m=+62.422124766"
Jul 7 06:03:47.890092 systemd-networkd[1503]: cali1bf749ce2f1: Gained IPv6LL
Jul 7 06:03:48.345319 kubelet[2777]: E0707 06:03:48.345208 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:03:48.556112 systemd[1]: Started sshd@8-10.0.0.25:22-10.0.0.1:55206.service - OpenSSH per-connection server daemon (10.0.0.1:55206).
Jul 7 06:03:48.577983 containerd[1584]: time="2025-07-07T06:03:48.577903726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:03:48.579333 containerd[1584]: time="2025-07-07T06:03:48.579300270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"
Jul 7 06:03:48.580971 containerd[1584]: time="2025-07-07T06:03:48.580931361Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:03:48.583568 containerd[1584]: time="2025-07-07T06:03:48.583526392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:03:48.584252 containerd[1584]: time="2025-07-07T06:03:48.584204435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.230042396s"
Jul 7 06:03:48.584307 containerd[1584]: time="2025-07-07T06:03:48.584260522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Jul 7 06:03:48.588054 containerd[1584]: time="2025-07-07T06:03:48.588000636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Jul 7 06:03:48.605691 containerd[1584]: time="2025-07-07T06:03:48.605556162Z" level=info msg="CreateContainer within sandbox \"cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 7 06:03:48.618206 containerd[1584]: time="2025-07-07T06:03:48.617312006Z" level=info msg="Container 83af57081af10082befaf8f583a9387a82fae184b9dcefe09ae3f232de86d061: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:03:48.634490 containerd[1584]: time="2025-07-07T06:03:48.634341579Z" level=info msg="CreateContainer within sandbox \"cdbdade844bc57e355414317927dc36b132537af959f3e789e4b802df90d8d6d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"83af57081af10082befaf8f583a9387a82fae184b9dcefe09ae3f232de86d061\""
Jul 7 06:03:48.635269 containerd[1584]: time="2025-07-07T06:03:48.635207710Z" level=info msg="StartContainer for \"83af57081af10082befaf8f583a9387a82fae184b9dcefe09ae3f232de86d061\""
Jul 7 06:03:48.636399 containerd[1584]: time="2025-07-07T06:03:48.636373254Z" level=info msg="connecting to shim 83af57081af10082befaf8f583a9387a82fae184b9dcefe09ae3f232de86d061" address="unix:///run/containerd/s/2d72053b39272235847e6fa9886bed8b9b4cb1772390e82552f812f566d0cbae" protocol=ttrpc version=3
Jul 7 06:03:48.649966 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 55206 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:03:48.651466 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:03:48.663255 systemd[1]: Started cri-containerd-83af57081af10082befaf8f583a9387a82fae184b9dcefe09ae3f232de86d061.scope - libcontainer container 83af57081af10082befaf8f583a9387a82fae184b9dcefe09ae3f232de86d061.
Jul 7 06:03:48.667578 systemd-logind[1565]: New session 9 of user core.
Jul 7 06:03:48.675047 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 06:03:48.875871 containerd[1584]: time="2025-07-07T06:03:48.875449990Z" level=info msg="StartContainer for \"83af57081af10082befaf8f583a9387a82fae184b9dcefe09ae3f232de86d061\" returns successfully"
Jul 7 06:03:48.984770 sshd[5159]: Connection closed by 10.0.0.1 port 55206
Jul 7 06:03:48.985179 sshd-session[5135]: pam_unix(sshd:session): session closed for user core
Jul 7 06:03:48.991322 systemd[1]: sshd@8-10.0.0.25:22-10.0.0.1:55206.service: Deactivated successfully.
Jul 7 06:03:48.994326 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 06:03:48.995287 systemd-logind[1565]: Session 9 logged out. Waiting for processes to exit.
Jul 7 06:03:48.997424 systemd-logind[1565]: Removed session 9.
Jul 7 06:03:49.353086 kubelet[2777]: E0707 06:03:49.353041 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:03:49.386522 kubelet[2777]: I0707 06:03:49.386420 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-655467f6dd-ps8wv" podStartSLOduration=37.1187613 podStartE2EDuration="46.386405576s" podCreationTimestamp="2025-07-07 06:03:03 +0000 UTC" firstStartedPulling="2025-07-07 06:03:39.318018731 +0000 UTC m=+54.345869793" lastFinishedPulling="2025-07-07 06:03:48.585662997 +0000 UTC m=+63.613514069" observedRunningTime="2025-07-07 06:03:49.386146562 +0000 UTC m=+64.413997634" watchObservedRunningTime="2025-07-07 06:03:49.386405576 +0000 UTC m=+64.414256648"
Jul 7 06:03:50.624479 containerd[1584]: time="2025-07-07T06:03:50.624433184Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83af57081af10082befaf8f583a9387a82fae184b9dcefe09ae3f232de86d061\" id:\"14f8d3e7c4886f7d6ba621c03fc27e8d71dd4cccd6920a7d4ecc27ae7d68519a\" pid:5209 exited_at:{seconds:1751868230 nanos:623215253}"
Jul 7 06:03:52.270475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409403385.mount: Deactivated successfully.
Jul 7 06:03:53.439319 containerd[1584]: time="2025-07-07T06:03:53.439242237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:03:53.440153 containerd[1584]: time="2025-07-07T06:03:53.440077397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308"
Jul 7 06:03:53.441581 containerd[1584]: time="2025-07-07T06:03:53.441545943Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:03:53.445072 containerd[1584]: time="2025-07-07T06:03:53.445012762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:03:53.445561 containerd[1584]: time="2025-07-07T06:03:53.445514076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.857463275s"
Jul 7 06:03:53.445561 containerd[1584]: time="2025-07-07T06:03:53.445544594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\""
Jul 7 06:03:53.447116 containerd[1584]: time="2025-07-07T06:03:53.446667432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 7 06:03:53.451174 containerd[1584]: time="2025-07-07T06:03:53.451139666Z" level=info msg="CreateContainer within sandbox \"268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 7 06:03:53.459684 containerd[1584]: time="2025-07-07T06:03:53.459647942Z" level=info msg="Container eaecbc794b3bc892ac62a7820e0a5ed39bca8c8d37398ff57d3538022bddfc38: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:03:53.468070 containerd[1584]: time="2025-07-07T06:03:53.468030689Z" level=info msg="CreateContainer within sandbox \"268f72bc07faf81f8cacfa3bf40dde45cf33602e6f2a031b1e27efb5365f2aab\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"eaecbc794b3bc892ac62a7820e0a5ed39bca8c8d37398ff57d3538022bddfc38\""
Jul 7 06:03:53.468630 containerd[1584]: time="2025-07-07T06:03:53.468579474Z" level=info msg="StartContainer for \"eaecbc794b3bc892ac62a7820e0a5ed39bca8c8d37398ff57d3538022bddfc38\""
Jul 7 06:03:53.469710 containerd[1584]: time="2025-07-07T06:03:53.469672845Z" level=info msg="connecting to shim eaecbc794b3bc892ac62a7820e0a5ed39bca8c8d37398ff57d3538022bddfc38" address="unix:///run/containerd/s/4036d4a62c4f3868ca58e8583cf12eed15c23d2ed257264a7ee545287110c0c3" protocol=ttrpc version=3
Jul 7 06:03:53.510455 systemd[1]: Started cri-containerd-eaecbc794b3bc892ac62a7820e0a5ed39bca8c8d37398ff57d3538022bddfc38.scope - libcontainer container eaecbc794b3bc892ac62a7820e0a5ed39bca8c8d37398ff57d3538022bddfc38.
Jul 7 06:03:53.585778 containerd[1584]: time="2025-07-07T06:03:53.585710376Z" level=info msg="StartContainer for \"eaecbc794b3bc892ac62a7820e0a5ed39bca8c8d37398ff57d3538022bddfc38\" returns successfully"
Jul 7 06:03:54.001984 systemd[1]: Started sshd@9-10.0.0.25:22-10.0.0.1:43290.service - OpenSSH per-connection server daemon (10.0.0.1:43290).
Jul 7 06:03:54.056901 sshd[5270]: Accepted publickey for core from 10.0.0.1 port 43290 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:03:54.058502 sshd-session[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:03:54.064872 systemd-logind[1565]: New session 10 of user core.
Jul 7 06:03:54.070955 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 7 06:03:54.241140 sshd[5272]: Connection closed by 10.0.0.1 port 43290
Jul 7 06:03:54.241492 sshd-session[5270]: pam_unix(sshd:session): session closed for user core
Jul 7 06:03:54.246095 systemd[1]: sshd@9-10.0.0.25:22-10.0.0.1:43290.service: Deactivated successfully.
Jul 7 06:03:54.248588 systemd[1]: session-10.scope: Deactivated successfully.
Jul 7 06:03:54.249669 systemd-logind[1565]: Session 10 logged out. Waiting for processes to exit.
Jul 7 06:03:54.251990 systemd-logind[1565]: Removed session 10.
Jul 7 06:03:54.394556 kubelet[2777]: I0707 06:03:54.394462 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-ps9vd" podStartSLOduration=37.333301772 podStartE2EDuration="51.394439108s" podCreationTimestamp="2025-07-07 06:03:03 +0000 UTC" firstStartedPulling="2025-07-07 06:03:39.38537997 +0000 UTC m=+54.413231032" lastFinishedPulling="2025-07-07 06:03:53.446517296 +0000 UTC m=+68.474368368" observedRunningTime="2025-07-07 06:03:54.393718377 +0000 UTC m=+69.421569449" watchObservedRunningTime="2025-07-07 06:03:54.394439108 +0000 UTC m=+69.422290180"
Jul 7 06:03:54.448180 containerd[1584]: time="2025-07-07T06:03:54.448049955Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eaecbc794b3bc892ac62a7820e0a5ed39bca8c8d37398ff57d3538022bddfc38\" id:\"a73e95efce13f4877523f4998f6e47a471a2bd1169cf51cec40f94047e10fa51\" pid:5318 exit_status:1 exited_at:{seconds:1751868234 nanos:447585281}"
Jul 7 06:03:55.460382 containerd[1584]: time="2025-07-07T06:03:55.460320831Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eaecbc794b3bc892ac62a7820e0a5ed39bca8c8d37398ff57d3538022bddfc38\" id:\"89c29fa924d1539f3c8ed6269e6821bcc68a3db59986e42754b0e523d0437034\" pid:5353 exit_status:1 exited_at:{seconds:1751868235 nanos:459926321}"
Jul 7 06:03:59.258357 systemd[1]: Started sshd@10-10.0.0.25:22-10.0.0.1:43304.service - OpenSSH per-connection server daemon (10.0.0.1:43304).
Jul 7 06:03:59.331769 sshd[5389]: Accepted publickey for core from 10.0.0.1 port 43304 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:03:59.340167 sshd-session[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:03:59.345409 systemd-logind[1565]: New session 11 of user core.
Jul 7 06:03:59.358933 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 06:03:59.417081 containerd[1584]: time="2025-07-07T06:03:59.417009488Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83af57081af10082befaf8f583a9387a82fae184b9dcefe09ae3f232de86d061\" id:\"9498c82109cc8c13e66b993004ce19c35966d3b319119186162914361e1f38b1\" pid:5382 exited_at:{seconds:1751868239 nanos:416535859}"
Jul 7 06:03:59.835548 sshd[5395]: Connection closed by 10.0.0.1 port 43304
Jul 7 06:03:59.835904 sshd-session[5389]: pam_unix(sshd:session): session closed for user core
Jul 7 06:03:59.840592 systemd[1]: sshd@10-10.0.0.25:22-10.0.0.1:43304.service: Deactivated successfully.
Jul 7 06:03:59.842727 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 06:03:59.843621 systemd-logind[1565]: Session 11 logged out. Waiting for processes to exit.
Jul 7 06:03:59.845154 systemd-logind[1565]: Removed session 11.
Jul 7 06:04:00.493275 containerd[1584]: time="2025-07-07T06:04:00.493171190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:00.548212 containerd[1584]: time="2025-07-07T06:04:00.548104859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977"
Jul 7 06:04:00.562195 containerd[1584]: time="2025-07-07T06:04:00.562100057Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:00.568222 containerd[1584]: time="2025-07-07T06:04:00.568140920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:00.569246 containerd[1584]: time="2025-07-07T06:04:00.569173831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 7.122470241s"
Jul 7 06:04:00.569246 containerd[1584]: time="2025-07-07T06:04:00.569208668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 7 06:04:00.570682 containerd[1584]: time="2025-07-07T06:04:00.570622373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\""
Jul 7 06:04:00.612091 containerd[1584]: time="2025-07-07T06:04:00.612024292Z" level=info msg="CreateContainer within sandbox \"ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 06:04:00.652686 containerd[1584]: time="2025-07-07T06:04:00.651957201Z" level=info msg="Container 44547d166008948fafab2172da6fbcc5e511b577d57035c3304b151f3b433a0e: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:00.664211 containerd[1584]: time="2025-07-07T06:04:00.664135019Z" level=info msg="CreateContainer within sandbox \"ed0589da18efdc0617e9ddc20cd4fc558b9d9f6dca5280e6e0515ca6d579c328\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"44547d166008948fafab2172da6fbcc5e511b577d57035c3304b151f3b433a0e\""
Jul 7 06:04:00.664978 containerd[1584]: time="2025-07-07T06:04:00.664927334Z" level=info msg="StartContainer for \"44547d166008948fafab2172da6fbcc5e511b577d57035c3304b151f3b433a0e\""
Jul 7 06:04:00.666606 containerd[1584]: time="2025-07-07T06:04:00.666562470Z" level=info msg="connecting to shim 44547d166008948fafab2172da6fbcc5e511b577d57035c3304b151f3b433a0e" address="unix:///run/containerd/s/d8cb5b65f74de0b2bda824081fcaae1e0edd1962a974551cc0b1f1dc9a66aaf6" protocol=ttrpc version=3
Jul 7 06:04:00.701152 systemd[1]: Started cri-containerd-44547d166008948fafab2172da6fbcc5e511b577d57035c3304b151f3b433a0e.scope - libcontainer container 44547d166008948fafab2172da6fbcc5e511b577d57035c3304b151f3b433a0e.
Jul 7 06:04:00.762940 containerd[1584]: time="2025-07-07T06:04:00.762376587Z" level=info msg="StartContainer for \"44547d166008948fafab2172da6fbcc5e511b577d57035c3304b151f3b433a0e\" returns successfully"
Jul 7 06:04:02.467834 kubelet[2777]: I0707 06:04:02.466901 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86dc8db98-4cxtw" podStartSLOduration=42.446874296 podStartE2EDuration="1m3.466715599s" podCreationTimestamp="2025-07-07 06:02:59 +0000 UTC" firstStartedPulling="2025-07-07 06:03:39.550550912 +0000 UTC m=+54.578401984" lastFinishedPulling="2025-07-07 06:04:00.570392215 +0000 UTC m=+75.598243287" observedRunningTime="2025-07-07 06:04:01.633145834 +0000 UTC m=+76.660996916" watchObservedRunningTime="2025-07-07 06:04:02.466715599 +0000 UTC m=+77.494566671"
Jul 7 06:04:02.974980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3437637036.mount: Deactivated successfully.
Jul 7 06:04:03.000259 containerd[1584]: time="2025-07-07T06:04:03.000185333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:03.001296 containerd[1584]: time="2025-07-07T06:04:03.001210769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477"
Jul 7 06:04:03.002832 containerd[1584]: time="2025-07-07T06:04:03.002714213Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:03.005267 containerd[1584]: time="2025-07-07T06:04:03.005213935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:03.006250 containerd[1584]: time="2025-07-07T06:04:03.006144932Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.435466754s"
Jul 7 06:04:03.006250 containerd[1584]: time="2025-07-07T06:04:03.006178446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\""
Jul 7 06:04:03.008388 containerd[1584]: time="2025-07-07T06:04:03.008296154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 7 06:04:03.013336 containerd[1584]: time="2025-07-07T06:04:03.013274711Z" level=info msg="CreateContainer within sandbox \"8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jul 7 06:04:03.026997 containerd[1584]: time="2025-07-07T06:04:03.026148791Z" level=info msg="Container 940d81c87c62ef5e3c390b82376f5d926be23c8ae06843f048452da431c6311f: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:03.053535 containerd[1584]: time="2025-07-07T06:04:03.053467279Z" level=info msg="CreateContainer within sandbox \"8c55decebdd9659f726482503646fdd4428a3faf71b0c3212d49686960ea7593\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"940d81c87c62ef5e3c390b82376f5d926be23c8ae06843f048452da431c6311f\""
Jul 7 06:04:03.054143 containerd[1584]: time="2025-07-07T06:04:03.054101512Z" level=info msg="StartContainer for \"940d81c87c62ef5e3c390b82376f5d926be23c8ae06843f048452da431c6311f\""
Jul 7 06:04:03.055461 containerd[1584]: time="2025-07-07T06:04:03.055420755Z" level=info msg="connecting to shim 940d81c87c62ef5e3c390b82376f5d926be23c8ae06843f048452da431c6311f" address="unix:///run/containerd/s/65ea97dc0c5a46ae68c659cb438db177bfe7a3e5dc98580257367ffd7cf7a917" protocol=ttrpc version=3
Jul 7 06:04:03.078967 systemd[1]: Started cri-containerd-940d81c87c62ef5e3c390b82376f5d926be23c8ae06843f048452da431c6311f.scope - libcontainer container 940d81c87c62ef5e3c390b82376f5d926be23c8ae06843f048452da431c6311f.
Jul 7 06:04:03.473273 containerd[1584]: time="2025-07-07T06:04:03.473198780Z" level=info msg="StartContainer for \"940d81c87c62ef5e3c390b82376f5d926be23c8ae06843f048452da431c6311f\" returns successfully"
Jul 7 06:04:03.782922 containerd[1584]: time="2025-07-07T06:04:03.782772362Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:03.802234 containerd[1584]: time="2025-07-07T06:04:03.802162144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Jul 7 06:04:03.804575 containerd[1584]: time="2025-07-07T06:04:03.804536170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 796.192386ms"
Jul 7 06:04:03.804575 containerd[1584]: time="2025-07-07T06:04:03.804575464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 7 06:04:03.815501 containerd[1584]: time="2025-07-07T06:04:03.815441803Z" level=info msg="CreateContainer within sandbox \"013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 06:04:03.826734 containerd[1584]: time="2025-07-07T06:04:03.826666723Z" level=info msg="Container 8a0c0526cf7ed8c64d53d1dbbc9027fc649c877ce3be2eb6e7513f1253346cf4: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:04:03.854815 containerd[1584]: time="2025-07-07T06:04:03.854752066Z" level=info msg="CreateContainer within sandbox \"013b9a93e59508c12bb12facab2438f1f06ac18b2ecb0a226c3cc54b9f8510a0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8a0c0526cf7ed8c64d53d1dbbc9027fc649c877ce3be2eb6e7513f1253346cf4\""
Jul 7 06:04:03.855525 containerd[1584]: time="2025-07-07T06:04:03.855485348Z" level=info msg="StartContainer for \"8a0c0526cf7ed8c64d53d1dbbc9027fc649c877ce3be2eb6e7513f1253346cf4\""
Jul 7 06:04:03.856652 containerd[1584]: time="2025-07-07T06:04:03.856593070Z" level=info msg="connecting to shim 8a0c0526cf7ed8c64d53d1dbbc9027fc649c877ce3be2eb6e7513f1253346cf4" address="unix:///run/containerd/s/41430e76d376c44a0ecd4428efb14aadd1751f79bb5558e9e0892dd5a6599fda" protocol=ttrpc version=3
Jul 7 06:04:03.899059 systemd[1]: Started cri-containerd-8a0c0526cf7ed8c64d53d1dbbc9027fc649c877ce3be2eb6e7513f1253346cf4.scope - libcontainer container 8a0c0526cf7ed8c64d53d1dbbc9027fc649c877ce3be2eb6e7513f1253346cf4.
Jul 7 06:04:03.915048 containerd[1584]: time="2025-07-07T06:04:03.914969291Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74e73032d5e92579119eab4a262ccf8183b724836bd85be9c9d849f0e1fd2afb\" id:\"409e38e02590ee1d68d373d2eb23d315e7e7c7c61694786d2f7db095b58af566\" pid:5507 exited_at:{seconds:1751868243 nanos:914379032}"
Jul 7 06:04:03.995008 containerd[1584]: time="2025-07-07T06:04:03.994950370Z" level=info msg="StartContainer for \"8a0c0526cf7ed8c64d53d1dbbc9027fc649c877ce3be2eb6e7513f1253346cf4\" returns successfully"
Jul 7 06:04:04.098390 kubelet[2777]: E0707 06:04:04.098069 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:04.506743 kubelet[2777]: I0707 06:04:04.506513 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7cdbf87994-t4x5q" podStartSLOduration=3.628870324 podStartE2EDuration="32.506490396s" podCreationTimestamp="2025-07-07 06:03:32 +0000 UTC" firstStartedPulling="2025-07-07 06:03:34.130526578 +0000 UTC m=+49.158377650" lastFinishedPulling="2025-07-07 06:04:03.00814665 +0000 UTC m=+78.035997722" observedRunningTime="2025-07-07 06:04:04.505571874 +0000 UTC m=+79.533422946" watchObservedRunningTime="2025-07-07 06:04:04.506490396 +0000 UTC m=+79.534341469"
Jul 7 06:04:04.733495 kubelet[2777]: I0707 06:04:04.733368 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86dc8db98-qlhjq" podStartSLOduration=46.566180875 podStartE2EDuration="1m5.733342534s" podCreationTimestamp="2025-07-07 06:02:59 +0000 UTC" firstStartedPulling="2025-07-07 06:03:44.638255612 +0000 UTC m=+59.666106684" lastFinishedPulling="2025-07-07 06:04:03.805417271 +0000 UTC m=+78.833268343" observedRunningTime="2025-07-07 06:04:04.731028434 +0000 UTC m=+79.758879506" watchObservedRunningTime="2025-07-07 06:04:04.733342534 +0000 UTC m=+79.761193606"
Jul 7 06:04:04.858480 systemd[1]: Started sshd@11-10.0.0.25:22-10.0.0.1:57172.service - OpenSSH per-connection server daemon (10.0.0.1:57172).
Jul 7 06:04:04.935466 sshd[5562]: Accepted publickey for core from 10.0.0.1 port 57172 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:04:04.938870 sshd-session[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:04.943978 systemd-logind[1565]: New session 12 of user core.
Jul 7 06:04:04.955087 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 06:04:05.097719 kubelet[2777]: E0707 06:04:05.097654 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:05.241784 sshd[5564]: Connection closed by 10.0.0.1 port 57172
Jul 7 06:04:05.244042 sshd-session[5562]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:05.253460 systemd[1]: sshd@11-10.0.0.25:22-10.0.0.1:57172.service: Deactivated successfully.
Jul 7 06:04:05.256123 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 06:04:05.258988 systemd-logind[1565]: Session 12 logged out. Waiting for processes to exit.
Jul 7 06:04:05.263866 systemd[1]: Started sshd@12-10.0.0.25:22-10.0.0.1:57174.service - OpenSSH per-connection server daemon (10.0.0.1:57174).
Jul 7 06:04:05.264886 systemd-logind[1565]: Removed session 12.
Jul 7 06:04:05.343359 sshd[5578]: Accepted publickey for core from 10.0.0.1 port 57174 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:04:05.345331 sshd-session[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:05.352282 systemd-logind[1565]: New session 13 of user core.
Jul 7 06:04:05.357199 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 06:04:05.577414 sshd[5581]: Connection closed by 10.0.0.1 port 57174
Jul 7 06:04:05.578189 sshd-session[5578]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:05.593430 systemd[1]: sshd@12-10.0.0.25:22-10.0.0.1:57174.service: Deactivated successfully.
Jul 7 06:04:05.597392 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 06:04:05.599439 systemd-logind[1565]: Session 13 logged out. Waiting for processes to exit.
Jul 7 06:04:05.604535 systemd[1]: Started sshd@13-10.0.0.25:22-10.0.0.1:57176.service - OpenSSH per-connection server daemon (10.0.0.1:57176).
Jul 7 06:04:05.606779 systemd-logind[1565]: Removed session 13.
Jul 7 06:04:05.667830 sshd[5594]: Accepted publickey for core from 10.0.0.1 port 57176 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:04:05.670351 sshd-session[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:05.679026 systemd-logind[1565]: New session 14 of user core.
Jul 7 06:04:05.687149 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 06:04:05.847494 sshd[5596]: Connection closed by 10.0.0.1 port 57176
Jul 7 06:04:05.847843 sshd-session[5594]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:05.854452 systemd-logind[1565]: Session 14 logged out. Waiting for processes to exit.
Jul 7 06:04:05.855015 systemd[1]: sshd@13-10.0.0.25:22-10.0.0.1:57176.service: Deactivated successfully.
Jul 7 06:04:05.857939 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 06:04:05.861770 systemd-logind[1565]: Removed session 14.
Jul 7 06:04:10.097696 kubelet[2777]: E0707 06:04:10.097620 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:10.863304 systemd[1]: Started sshd@14-10.0.0.25:22-10.0.0.1:42708.service - OpenSSH per-connection server daemon (10.0.0.1:42708).
Jul 7 06:04:10.957938 sshd[5621]: Accepted publickey for core from 10.0.0.1 port 42708 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:04:10.960071 sshd-session[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:10.969019 systemd-logind[1565]: New session 15 of user core.
Jul 7 06:04:10.978939 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 06:04:11.369692 sshd[5623]: Connection closed by 10.0.0.1 port 42708
Jul 7 06:04:11.369284 sshd-session[5621]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:11.376489 systemd[1]: sshd@14-10.0.0.25:22-10.0.0.1:42708.service: Deactivated successfully.
Jul 7 06:04:11.379636 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 06:04:11.382161 systemd-logind[1565]: Session 15 logged out. Waiting for processes to exit.
Jul 7 06:04:11.383527 systemd-logind[1565]: Removed session 15.
Jul 7 06:04:16.394236 systemd[1]: Started sshd@15-10.0.0.25:22-10.0.0.1:42720.service - OpenSSH per-connection server daemon (10.0.0.1:42720).
Jul 7 06:04:16.447361 sshd[5642]: Accepted publickey for core from 10.0.0.1 port 42720 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:04:16.449646 sshd-session[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:16.454629 systemd-logind[1565]: New session 16 of user core.
Jul 7 06:04:16.461946 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 06:04:16.599670 sshd[5644]: Connection closed by 10.0.0.1 port 42720
Jul 7 06:04:16.600019 sshd-session[5642]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:16.605949 systemd[1]: sshd@15-10.0.0.25:22-10.0.0.1:42720.service: Deactivated successfully.
Jul 7 06:04:16.608757 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 06:04:16.609809 systemd-logind[1565]: Session 16 logged out. Waiting for processes to exit.
Jul 7 06:04:16.611450 systemd-logind[1565]: Removed session 16.
Jul 7 06:04:20.098168 kubelet[2777]: E0707 06:04:20.098095 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:20.407996 containerd[1584]: time="2025-07-07T06:04:20.407694924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83af57081af10082befaf8f583a9387a82fae184b9dcefe09ae3f232de86d061\" id:\"01e67f6f862af616a9846bf586f1853fd5847fd35519aa61a49c5fe208e1d29e\" pid:5669 exited_at:{seconds:1751868260 nanos:407431796}"
Jul 7 06:04:21.615499 systemd[1]: Started sshd@16-10.0.0.25:22-10.0.0.1:57226.service - OpenSSH per-connection server daemon (10.0.0.1:57226).
Jul 7 06:04:21.674636 sshd[5682]: Accepted publickey for core from 10.0.0.1 port 57226 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:04:21.676520 sshd-session[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:21.681740 systemd-logind[1565]: New session 17 of user core.
Jul 7 06:04:21.693041 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 06:04:21.814659 sshd[5684]: Connection closed by 10.0.0.1 port 57226
Jul 7 06:04:21.815021 sshd-session[5682]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:21.819219 systemd[1]: sshd@16-10.0.0.25:22-10.0.0.1:57226.service: Deactivated successfully.
Jul 7 06:04:21.821399 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 06:04:21.822345 systemd-logind[1565]: Session 17 logged out. Waiting for processes to exit.
Jul 7 06:04:21.823714 systemd-logind[1565]: Removed session 17.
Jul 7 06:04:25.489069 containerd[1584]: time="2025-07-07T06:04:25.489006891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eaecbc794b3bc892ac62a7820e0a5ed39bca8c8d37398ff57d3538022bddfc38\" id:\"a5ae5709b466ba66d06cebef1425dca4e54f8f1ce7e6109f30939f552ffec113\" pid:5708 exited_at:{seconds:1751868265 nanos:488324021}"
Jul 7 06:04:26.836879 systemd[1]: Started sshd@17-10.0.0.25:22-10.0.0.1:57228.service - OpenSSH per-connection server daemon (10.0.0.1:57228).
Jul 7 06:04:26.902630 sshd[5725]: Accepted publickey for core from 10.0.0.1 port 57228 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:04:26.904572 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:26.912102 systemd-logind[1565]: New session 18 of user core.
Jul 7 06:04:26.922094 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 06:04:27.073233 sshd[5727]: Connection closed by 10.0.0.1 port 57228
Jul 7 06:04:27.073547 sshd-session[5725]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:27.078164 systemd[1]: sshd@17-10.0.0.25:22-10.0.0.1:57228.service: Deactivated successfully.
Jul 7 06:04:27.080474 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 06:04:27.081301 systemd-logind[1565]: Session 18 logged out. Waiting for processes to exit.
Jul 7 06:04:27.082981 systemd-logind[1565]: Removed session 18.
Jul 7 06:04:32.091670 systemd[1]: Started sshd@18-10.0.0.25:22-10.0.0.1:58508.service - OpenSSH per-connection server daemon (10.0.0.1:58508).
Jul 7 06:04:32.147429 sshd[5742]: Accepted publickey for core from 10.0.0.1 port 58508 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:04:32.149555 sshd-session[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:32.155444 systemd-logind[1565]: New session 19 of user core.
Jul 7 06:04:32.164049 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 06:04:32.329498 sshd[5744]: Connection closed by 10.0.0.1 port 58508
Jul 7 06:04:32.329942 sshd-session[5742]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:32.341276 systemd[1]: sshd@18-10.0.0.25:22-10.0.0.1:58508.service: Deactivated successfully.
Jul 7 06:04:32.343659 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 06:04:32.344766 systemd-logind[1565]: Session 19 logged out. Waiting for processes to exit.
Jul 7 06:04:32.348860 systemd[1]: Started sshd@19-10.0.0.25:22-10.0.0.1:58510.service - OpenSSH per-connection server daemon (10.0.0.1:58510).
Jul 7 06:04:32.349578 systemd-logind[1565]: Removed session 19.
Jul 7 06:04:32.413051 sshd[5757]: Accepted publickey for core from 10.0.0.1 port 58510 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:04:32.415110 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:32.420164 systemd-logind[1565]: New session 20 of user core.
Jul 7 06:04:32.430030 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 06:04:32.839704 sshd[5759]: Connection closed by 10.0.0.1 port 58510
Jul 7 06:04:32.840114 sshd-session[5757]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:32.851374 systemd[1]: sshd@19-10.0.0.25:22-10.0.0.1:58510.service: Deactivated successfully.
Jul 7 06:04:32.853881 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 06:04:32.854771 systemd-logind[1565]: Session 20 logged out. Waiting for processes to exit.
Jul 7 06:04:32.858669 systemd[1]: Started sshd@20-10.0.0.25:22-10.0.0.1:58514.service - OpenSSH per-connection server daemon (10.0.0.1:58514).
Jul 7 06:04:32.859681 systemd-logind[1565]: Removed session 20.
Jul 7 06:04:32.930828 sshd[5771]: Accepted publickey for core from 10.0.0.1 port 58514 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:04:32.932698 sshd-session[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:32.938031 systemd-logind[1565]: New session 21 of user core.
Jul 7 06:04:32.951121 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 06:04:34.003118 containerd[1584]: time="2025-07-07T06:04:34.003044710Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74e73032d5e92579119eab4a262ccf8183b724836bd85be9c9d849f0e1fd2afb\" id:\"3d35ae51d950dfde38036b381cda3a3290ffdbf259ef214780ac85f9298f9245\" pid:5814 exited_at:{seconds:1751868274 nanos:1994358}"
Jul 7 06:04:34.097974 kubelet[2777]: E0707 06:04:34.097917 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:34.205145 sshd[5773]: Connection closed by 10.0.0.1 port 58514
Jul 7 06:04:34.205946 sshd-session[5771]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:34.220364 systemd[1]: sshd@20-10.0.0.25:22-10.0.0.1:58514.service: Deactivated successfully.
Jul 7 06:04:34.224341 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 06:04:34.226120 systemd-logind[1565]: Session 21 logged out. Waiting for processes to exit.
Jul 7 06:04:34.232244 systemd[1]: Started sshd@21-10.0.0.25:22-10.0.0.1:58518.service - OpenSSH per-connection server daemon (10.0.0.1:58518).
Jul 7 06:04:34.234557 systemd-logind[1565]: Removed session 21.
Jul 7 06:04:34.295022 sshd[5835]: Accepted publickey for core from 10.0.0.1 port 58518 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:04:34.296933 sshd-session[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:34.302769 systemd-logind[1565]: New session 22 of user core.
Jul 7 06:04:34.321155 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 06:04:34.728096 sshd[5837]: Connection closed by 10.0.0.1 port 58518
Jul 7 06:04:34.728615 sshd-session[5835]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:34.743339 systemd[1]: sshd@21-10.0.0.25:22-10.0.0.1:58518.service: Deactivated successfully.
Jul 7 06:04:34.745957 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 06:04:34.746807 systemd-logind[1565]: Session 22 logged out. Waiting for processes to exit.
Jul 7 06:04:34.751022 systemd[1]: Started sshd@22-10.0.0.25:22-10.0.0.1:58526.service - OpenSSH per-connection server daemon (10.0.0.1:58526).
Jul 7 06:04:34.751882 systemd-logind[1565]: Removed session 22.
Jul 7 06:04:34.808535 sshd[5849]: Accepted publickey for core from 10.0.0.1 port 58526 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:04:34.810469 sshd-session[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:34.815450 systemd-logind[1565]: New session 23 of user core.
Jul 7 06:04:34.823966 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 06:04:34.956852 sshd[5851]: Connection closed by 10.0.0.1 port 58526
Jul 7 06:04:34.957201 sshd-session[5849]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:34.962158 systemd[1]: sshd@22-10.0.0.25:22-10.0.0.1:58526.service: Deactivated successfully.
Jul 7 06:04:34.964536 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 06:04:34.965332 systemd-logind[1565]: Session 23 logged out. Waiting for processes to exit.
Jul 7 06:04:34.966564 systemd-logind[1565]: Removed session 23.
Jul 7 06:04:39.973067 systemd[1]: Started sshd@23-10.0.0.25:22-10.0.0.1:35206.service - OpenSSH per-connection server daemon (10.0.0.1:35206).
Jul 7 06:04:40.029011 sshd[5864]: Accepted publickey for core from 10.0.0.1 port 35206 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:04:40.031123 sshd-session[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:40.036450 systemd-logind[1565]: New session 24 of user core.
Jul 7 06:04:40.042936 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 06:04:40.454880 sshd[5866]: Connection closed by 10.0.0.1 port 35206
Jul 7 06:04:40.455265 sshd-session[5864]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:40.460531 systemd[1]: sshd@23-10.0.0.25:22-10.0.0.1:35206.service: Deactivated successfully.
Jul 7 06:04:40.463581 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 06:04:40.464501 systemd-logind[1565]: Session 24 logged out. Waiting for processes to exit.
Jul 7 06:04:40.466094 systemd-logind[1565]: Removed session 24.
Jul 7 06:04:45.472336 systemd[1]: Started sshd@24-10.0.0.25:22-10.0.0.1:35212.service - OpenSSH per-connection server daemon (10.0.0.1:35212).
Jul 7 06:04:45.536294 sshd[5883]: Accepted publickey for core from 10.0.0.1 port 35212 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:04:45.538905 sshd-session[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:45.545130 systemd-logind[1565]: New session 25 of user core.
Jul 7 06:04:45.557131 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 06:04:45.679743 sshd[5885]: Connection closed by 10.0.0.1 port 35212
Jul 7 06:04:45.680187 sshd-session[5883]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:45.685569 systemd[1]: sshd@24-10.0.0.25:22-10.0.0.1:35212.service: Deactivated successfully.
Jul 7 06:04:45.687950 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 06:04:45.689011 systemd-logind[1565]: Session 25 logged out. Waiting for processes to exit.
Jul 7 06:04:45.690367 systemd-logind[1565]: Removed session 25.
Jul 7 06:04:49.832664 containerd[1584]: time="2025-07-07T06:04:49.832601148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eaecbc794b3bc892ac62a7820e0a5ed39bca8c8d37398ff57d3538022bddfc38\" id:\"b41aad123f2f46d074ec712a94f93a186201df01621b97c8efec11c2e636a3d8\" pid:5911 exited_at:{seconds:1751868289 nanos:832146140}"
Jul 7 06:04:50.407218 containerd[1584]: time="2025-07-07T06:04:50.407079634Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83af57081af10082befaf8f583a9387a82fae184b9dcefe09ae3f232de86d061\" id:\"9d554eeed23da6b0a3400d9d8cf443dd8e187c9452cad6ccf09c8cf359d58dcd\" pid:5935 exited_at:{seconds:1751868290 nanos:406674250}"
Jul 7 06:04:50.694221 systemd[1]: Started sshd@25-10.0.0.25:22-10.0.0.1:45890.service - OpenSSH per-connection server daemon (10.0.0.1:45890).
Jul 7 06:04:50.777374 sshd[5946]: Accepted publickey for core from 10.0.0.1 port 45890 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:04:50.779618 sshd-session[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:04:50.785017 systemd-logind[1565]: New session 26 of user core.
Jul 7 06:04:50.801102 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 7 06:04:51.076263 sshd[5948]: Connection closed by 10.0.0.1 port 45890
Jul 7 06:04:51.077165 sshd-session[5946]: pam_unix(sshd:session): session closed for user core
Jul 7 06:04:51.088894 systemd[1]: sshd@25-10.0.0.25:22-10.0.0.1:45890.service: Deactivated successfully.
Jul 7 06:04:51.095441 systemd[1]: session-26.scope: Deactivated successfully.
Jul 7 06:04:51.099499 systemd-logind[1565]: Session 26 logged out. Waiting for processes to exit.
Jul 7 06:04:51.105954 systemd-logind[1565]: Removed session 26.