Aug 13 00:15:58.838732 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025
Aug 13 00:15:58.838759 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:15:58.838773 kernel: BIOS-provided physical RAM map:
Aug 13 00:15:58.838783 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 00:15:58.838792 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Aug 13 00:15:58.838801 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Aug 13 00:15:58.838812 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Aug 13 00:15:58.838821 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Aug 13 00:15:58.838838 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Aug 13 00:15:58.838847 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Aug 13 00:15:58.838857 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Aug 13 00:15:58.838866 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Aug 13 00:15:58.838875 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Aug 13 00:15:58.838884 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Aug 13 00:15:58.838899 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Aug 13 00:15:58.838909 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Aug 13 00:15:58.838921 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Aug 13 00:15:58.838931 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Aug 13 00:15:58.838941 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Aug 13 00:15:58.838951 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Aug 13 00:15:58.838961 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Aug 13 00:15:58.838995 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Aug 13 00:15:58.839005 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Aug 13 00:15:58.839015 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 00:15:58.839025 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Aug 13 00:15:58.839038 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 00:15:58.839048 kernel: NX (Execute Disable) protection: active
Aug 13 00:15:58.839057 kernel: APIC: Static calls initialized
Aug 13 00:15:58.839067 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Aug 13 00:15:58.839077 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Aug 13 00:15:58.839087 kernel: extended physical RAM map:
Aug 13 00:15:58.839097 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 00:15:58.839107 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Aug 13 00:15:58.839117 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Aug 13 00:15:58.839127 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Aug 13 00:15:58.839137 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Aug 13 00:15:58.839149 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Aug 13 00:15:58.839159 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Aug 13 00:15:58.839169 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Aug 13 00:15:58.839179 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Aug 13 00:15:58.839193 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Aug 13 00:15:58.839203 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Aug 13 00:15:58.839216 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Aug 13 00:15:58.839227 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Aug 13 00:15:58.839237 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Aug 13 00:15:58.839256 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Aug 13 00:15:58.839266 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Aug 13 00:15:58.839277 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Aug 13 00:15:58.839287 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Aug 13 00:15:58.839298 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Aug 13 00:15:58.839308 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Aug 13 00:15:58.839321 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Aug 13 00:15:58.839332 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Aug 13 00:15:58.839342 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Aug 13 00:15:58.839353 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Aug 13 00:15:58.839363 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 00:15:58.839373 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Aug 13 00:15:58.839384 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 00:15:58.839397 kernel: efi: EFI v2.7 by EDK II
Aug 13 00:15:58.839408 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Aug 13 00:15:58.839418 kernel: random: crng init done
Aug 13 00:15:58.839432 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Aug 13 00:15:58.839442 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Aug 13 00:15:58.839457 kernel: secureboot: Secure boot disabled
Aug 13 00:15:58.839468 kernel: SMBIOS 2.8 present.
Aug 13 00:15:58.839478 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Aug 13 00:15:58.839488 kernel: DMI: Memory slots populated: 1/1
Aug 13 00:15:58.839499 kernel: Hypervisor detected: KVM
Aug 13 00:15:58.839509 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 00:15:58.839520 kernel: kvm-clock: using sched offset of 5056927437 cycles
Aug 13 00:15:58.839531 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 00:15:58.839542 kernel: tsc: Detected 2794.750 MHz processor
Aug 13 00:15:58.839552 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:15:58.839566 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:15:58.839576 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Aug 13 00:15:58.839587 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 13 00:15:58.839598 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:15:58.839608 kernel: Using GB pages for direct mapping
Aug 13 00:15:58.839619 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:15:58.839630 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Aug 13 00:15:58.839641 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Aug 13 00:15:58.839651 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:15:58.839665 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:15:58.839675 kernel: ACPI: FACS 0x000000009CBDD000 000040
Aug 13 00:15:58.839686 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:15:58.839697 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:15:58.839708 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:15:58.839719 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:15:58.839729 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Aug 13 00:15:58.839740 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Aug 13 00:15:58.839751 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Aug 13 00:15:58.839764 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Aug 13 00:15:58.839775 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Aug 13 00:15:58.839785 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Aug 13 00:15:58.839796 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Aug 13 00:15:58.839805 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Aug 13 00:15:58.839815 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Aug 13 00:15:58.839824 kernel: No NUMA configuration found
Aug 13 00:15:58.839835 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Aug 13 00:15:58.839845 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Aug 13 00:15:58.839859 kernel: Zone ranges:
Aug 13 00:15:58.839870 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:15:58.839880 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Aug 13 00:15:58.839891 kernel: Normal empty
Aug 13 00:15:58.839902 kernel: Device empty
Aug 13 00:15:58.839912 kernel: Movable zone start for each node
Aug 13 00:15:58.839923 kernel: Early memory node ranges
Aug 13 00:15:58.839934 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 00:15:58.839944 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Aug 13 00:15:58.839958 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Aug 13 00:15:58.839987 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Aug 13 00:15:58.839998 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Aug 13 00:15:58.840008 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Aug 13 00:15:58.840019 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Aug 13 00:15:58.840030 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Aug 13 00:15:58.840040 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Aug 13 00:15:58.840054 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:15:58.840065 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 00:15:58.840087 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Aug 13 00:15:58.840098 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:15:58.840109 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Aug 13 00:15:58.840120 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Aug 13 00:15:58.840133 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Aug 13 00:15:58.840145 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Aug 13 00:15:58.840156 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Aug 13 00:15:58.840167 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 00:15:58.840178 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 00:15:58.840192 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:15:58.840203 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 00:15:58.840215 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 00:15:58.840226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:15:58.840237 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 00:15:58.840257 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 00:15:58.840269 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:15:58.840280 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 00:15:58.840290 kernel: TSC deadline timer available
Aug 13 00:15:58.840305 kernel: CPU topo: Max. logical packages: 1
Aug 13 00:15:58.840316 kernel: CPU topo: Max. logical dies: 1
Aug 13 00:15:58.840327 kernel: CPU topo: Max. dies per package: 1
Aug 13 00:15:58.840338 kernel: CPU topo: Max. threads per core: 1
Aug 13 00:15:58.840349 kernel: CPU topo: Num. cores per package: 4
Aug 13 00:15:58.840360 kernel: CPU topo: Num. threads per package: 4
Aug 13 00:15:58.840371 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Aug 13 00:15:58.840383 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 00:15:58.840394 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 00:15:58.840405 kernel: kvm-guest: setup PV sched yield
Aug 13 00:15:58.840419 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Aug 13 00:15:58.840430 kernel: Booting paravirtualized kernel on KVM
Aug 13 00:15:58.840442 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:15:58.840453 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 13 00:15:58.840465 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Aug 13 00:15:58.840476 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Aug 13 00:15:58.840487 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 13 00:15:58.840498 kernel: kvm-guest: PV spinlocks enabled
Aug 13 00:15:58.840510 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:15:58.840524 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:15:58.840539 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:15:58.840550 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:15:58.840560 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:15:58.840571 kernel: Fallback order for Node 0: 0
Aug 13 00:15:58.840581 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Aug 13 00:15:58.840591 kernel: Policy zone: DMA32
Aug 13 00:15:58.840601 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:15:58.840616 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 00:15:58.840628 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 00:15:58.840638 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 00:15:58.840649 kernel: Dynamic Preempt: voluntary
Aug 13 00:15:58.840659 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:15:58.840669 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:15:58.840680 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 00:15:58.840690 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:15:58.840699 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:15:58.840713 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:15:58.840723 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:15:58.840736 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 00:15:58.840746 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:15:58.840756 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:15:58.840767 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:15:58.840777 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 13 00:15:58.840788 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:15:58.840798 kernel: Console: colour dummy device 80x25
Aug 13 00:15:58.840811 kernel: printk: legacy console [ttyS0] enabled
Aug 13 00:15:58.840821 kernel: ACPI: Core revision 20240827
Aug 13 00:15:58.840832 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 00:15:58.840842 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:15:58.840852 kernel: x2apic enabled
Aug 13 00:15:58.840863 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 00:15:58.840873 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 00:15:58.840883 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 00:15:58.840893 kernel: kvm-guest: setup PV IPIs
Aug 13 00:15:58.840906 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 00:15:58.840917 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Aug 13 00:15:58.840928 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 13 00:15:58.840939 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 00:15:58.840949 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 00:15:58.840960 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 00:15:58.840984 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:15:58.840995 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:15:58.841006 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:15:58.841020 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 13 00:15:58.841030 kernel: RETBleed: Mitigation: untrained return thunk
Aug 13 00:15:58.841041 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 00:15:58.841055 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 00:15:58.841066 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 00:15:58.841077 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 00:15:58.841088 kernel: x86/bugs: return thunk changed
Aug 13 00:15:58.841098 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 00:15:58.841112 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:15:58.841122 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:15:58.841133 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:15:58.841143 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:15:58.841154 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 13 00:15:58.841165 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:15:58.841175 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:15:58.841186 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 00:15:58.841196 kernel: landlock: Up and running.
Aug 13 00:15:58.841209 kernel: SELinux: Initializing.
Aug 13 00:15:58.841220 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:15:58.841230 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:15:58.841251 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 13 00:15:58.841261 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 00:15:58.841272 kernel: ... version: 0
Aug 13 00:15:58.841283 kernel: ... bit width: 48
Aug 13 00:15:58.841293 kernel: ... generic registers: 6
Aug 13 00:15:58.841304 kernel: ... value mask: 0000ffffffffffff
Aug 13 00:15:58.841317 kernel: ... max period: 00007fffffffffff
Aug 13 00:15:58.841327 kernel: ... fixed-purpose events: 0
Aug 13 00:15:58.841338 kernel: ... event mask: 000000000000003f
Aug 13 00:15:58.841348 kernel: signal: max sigframe size: 1776
Aug 13 00:15:58.841359 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:15:58.841370 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:15:58.841384 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 00:15:58.841394 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:15:58.841405 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 00:15:58.841418 kernel: .... node #0, CPUs: #1 #2 #3
Aug 13 00:15:58.841428 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 00:15:58.841439 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 13 00:15:58.841450 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 137196K reserved, 0K cma-reserved)
Aug 13 00:15:58.841461 kernel: devtmpfs: initialized
Aug 13 00:15:58.841472 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:15:58.841482 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Aug 13 00:15:58.841493 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Aug 13 00:15:58.841503 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Aug 13 00:15:58.841516 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Aug 13 00:15:58.841526 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Aug 13 00:15:58.841537 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Aug 13 00:15:58.841547 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:15:58.841558 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 00:15:58.841568 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:15:58.841578 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:15:58.841589 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:15:58.841599 kernel: audit: type=2000 audit(1755044156.808:1): state=initialized audit_enabled=0 res=1
Aug 13 00:15:58.841612 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:15:58.841622 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:15:58.841632 kernel: cpuidle: using governor menu
Aug 13 00:15:58.841643 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:15:58.841654 kernel: dca service started, version 1.12.1
Aug 13 00:15:58.841664 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Aug 13 00:15:58.841675 kernel: PCI: Using configuration type 1 for base access
Aug 13 00:15:58.841685 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:15:58.841696 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:15:58.841709 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:15:58.841720 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:15:58.841730 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:15:58.841740 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:15:58.841751 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:15:58.841761 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:15:58.841771 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:15:58.841782 kernel: ACPI: Interpreter enabled
Aug 13 00:15:58.841792 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 00:15:58.841805 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:15:58.841815 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:15:58.841826 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 00:15:58.841836 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 00:15:58.841847 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:15:58.842076 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:15:58.842224 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 00:15:58.842544 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 00:15:58.842583 kernel: PCI host bridge to bus 0000:00
Aug 13 00:15:58.842729 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:15:58.842844 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:15:58.843008 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:15:58.843123 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Aug 13 00:15:58.843234 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Aug 13 00:15:58.843355 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Aug 13 00:15:58.843492 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:15:58.843677 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Aug 13 00:15:58.843810 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Aug 13 00:15:58.843930 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Aug 13 00:15:58.844092 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Aug 13 00:15:58.844213 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Aug 13 00:15:58.844347 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 00:15:58.844478 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Aug 13 00:15:58.844636 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Aug 13 00:15:58.844761 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Aug 13 00:15:58.844917 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Aug 13 00:15:58.845067 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Aug 13 00:15:58.845190 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Aug 13 00:15:58.845328 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Aug 13 00:15:58.845449 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Aug 13 00:15:58.845586 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 00:15:58.845713 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Aug 13 00:15:58.845833 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Aug 13 00:15:58.845996 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Aug 13 00:15:58.846126 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Aug 13 00:15:58.846273 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Aug 13 00:15:58.846404 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 00:15:58.846534 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Aug 13 00:15:58.846654 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Aug 13 00:15:58.846771 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Aug 13 00:15:58.846898 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Aug 13 00:15:58.847056 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Aug 13 00:15:58.847069 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:15:58.847077 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:15:58.847086 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:15:58.847094 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:15:58.847101 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 00:15:58.847109 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 00:15:58.847118 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 00:15:58.847129 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 00:15:58.847138 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 00:15:58.847146 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 00:15:58.847154 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 00:15:58.847162 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 00:15:58.847170 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 00:15:58.847178 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 00:15:58.847186 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 00:15:58.847193 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 00:15:58.847204 kernel: iommu: Default domain type: Translated
Aug 13 00:15:58.847212 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:15:58.847220 kernel: efivars: Registered efivars operations
Aug 13 00:15:58.847228 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:15:58.847236 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:15:58.847255 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Aug 13 00:15:58.847263 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Aug 13 00:15:58.847271 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Aug 13 00:15:58.847279 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Aug 13 00:15:58.847289 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Aug 13 00:15:58.847297 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Aug 13 00:15:58.847305 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Aug 13 00:15:58.847313 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Aug 13 00:15:58.847435 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 00:15:58.847554 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 00:15:58.847697 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 00:15:58.847718 kernel: vgaarb: loaded
Aug 13 00:15:58.847731 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 00:15:58.847739 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 00:15:58.847747 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:15:58.847755 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:15:58.847763 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:15:58.847771 kernel: pnp: PnP ACPI init
Aug 13 00:15:58.847999 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Aug 13 00:15:58.848029 kernel: pnp: PnP ACPI: found 6 devices
Aug 13 00:15:58.848041 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:15:58.848050 kernel: NET: Registered PF_INET protocol family
Aug 13 00:15:58.848058 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:15:58.848066 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:15:58.848075 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:15:58.848083 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:15:58.848092 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:15:58.848100 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:15:58.848108 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:15:58.848119 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:15:58.848127 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:15:58.848135 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:15:58.848269 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Aug 13 00:15:58.848392 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Aug 13 00:15:58.848503 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:15:58.848611 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:15:58.848719 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:15:58.848831 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Aug 13 00:15:58.848939 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Aug 13 00:15:58.849065 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Aug 13 00:15:58.849077 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:15:58.849085 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Aug 13 00:15:58.849094 kernel: Initialise system trusted keyrings
Aug 13 00:15:58.849102 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:15:58.849110 kernel: Key type asymmetric registered
Aug 13 00:15:58.849118 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:15:58.849130 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:15:58.849138 kernel: io scheduler mq-deadline registered
Aug 13 00:15:58.849149 kernel: io scheduler kyber registered
Aug 13 00:15:58.849166 kernel: io scheduler bfq registered
Aug 13 00:15:58.849185 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:15:58.849199 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 00:15:58.849217 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 00:15:58.849225 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 13 00:15:58.849234 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:15:58.849252 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:15:58.849261 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:15:58.849269 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:15:58.849278 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:15:58.849423 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 13 00:15:58.849441 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 00:15:58.849560 kernel: rtc_cmos 00:04: registered as rtc0
Aug 13 00:15:58.849672 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T00:15:58 UTC (1755044158)
Aug 13 00:15:58.849786 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Aug 13 00:15:58.849797 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 00:15:58.849805 kernel: efifb: probing for efifb
Aug 13 00:15:58.849813 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Aug 13 00:15:58.849822 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Aug 13 00:15:58.849833 kernel: efifb: scrolling: redraw
Aug 13 00:15:58.849842 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 13 00:15:58.849850 kernel: Console: switching to colour frame buffer device 160x50
Aug 13 00:15:58.849858 kernel: fb0: EFI VGA frame buffer device
Aug 13 00:15:58.849867 kernel: pstore: Using crash dump compression: deflate
Aug 13 00:15:58.849875 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 00:15:58.849883 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:15:58.849892 kernel: Segment Routing with IPv6
Aug 13 00:15:58.849900 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:15:58.849911 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:15:58.849919 kernel: Key type dns_resolver registered
Aug 13 00:15:58.849927 kernel: IPI shorthand broadcast: enabled
Aug 13 00:15:58.849935 kernel: sched_clock: Marking stable (2881002359, 157006432)->(3062792064, -24783273)
Aug 13 00:15:58.849944 kernel: registered taskstats version 1
Aug 13 00:15:58.849952 kernel: Loading compiled-in X.509 certificates
Aug 13 00:15:58.849960 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 00:15:58.849984 kernel: Demotion targets for Node 0: null
Aug 13 00:15:58.849992 kernel: Key type .fscrypt registered
Aug 13
00:15:58.850003 kernel: Key type fscrypt-provisioning registered Aug 13 00:15:58.850011 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 00:15:58.850019 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:15:58.850028 kernel: ima: No architecture policies found Aug 13 00:15:58.850036 kernel: clk: Disabling unused clocks Aug 13 00:15:58.850044 kernel: Warning: unable to open an initial console. Aug 13 00:15:58.850053 kernel: Freeing unused kernel image (initmem) memory: 54444K Aug 13 00:15:58.850061 kernel: Write protecting the kernel read-only data: 24576k Aug 13 00:15:58.850069 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Aug 13 00:15:58.850080 kernel: Run /init as init process Aug 13 00:15:58.850088 kernel: with arguments: Aug 13 00:15:58.850097 kernel: /init Aug 13 00:15:58.850105 kernel: with environment: Aug 13 00:15:58.850112 kernel: HOME=/ Aug 13 00:15:58.850120 kernel: TERM=linux Aug 13 00:15:58.850129 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:15:58.850138 systemd[1]: Successfully made /usr/ read-only. Aug 13 00:15:58.850151 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:15:58.850162 systemd[1]: Detected virtualization kvm. Aug 13 00:15:58.850171 systemd[1]: Detected architecture x86-64. Aug 13 00:15:58.850179 systemd[1]: Running in initrd. Aug 13 00:15:58.850188 systemd[1]: No hostname configured, using default hostname. Aug 13 00:15:58.850197 systemd[1]: Hostname set to . Aug 13 00:15:58.850206 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:15:58.850215 systemd[1]: Queued start job for default target initrd.target. 
Aug 13 00:15:58.850225 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:15:58.850234 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:15:58.850254 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 00:15:58.850263 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:15:58.850273 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 00:15:58.850282 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 00:15:58.850293 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 00:15:58.850304 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 00:15:58.850313 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:15:58.850322 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:15:58.850331 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:15:58.850339 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:15:58.850348 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:15:58.850357 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:15:58.850366 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:15:58.850377 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:15:58.850388 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:15:58.850396 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Aug 13 00:15:58.850405 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:15:58.850414 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:15:58.850426 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:15:58.850438 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:15:58.850449 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 00:15:58.850461 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:15:58.850475 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 00:15:58.850484 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 13 00:15:58.850493 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:15:58.850502 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:15:58.850510 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:15:58.850519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:15:58.850528 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 00:15:58.850540 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:15:58.850576 systemd-journald[219]: Collecting audit messages is disabled. Aug 13 00:15:58.850600 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:15:58.850610 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:15:58.850620 systemd-journald[219]: Journal started Aug 13 00:15:58.850640 systemd-journald[219]: Runtime Journal (/run/log/journal/91081bba270148488e3ae561e0cfb4a0) is 6M, max 48.5M, 42.4M free. 
Aug 13 00:15:58.846006 systemd-modules-load[221]: Inserted module 'overlay' Aug 13 00:15:58.854264 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:15:58.857991 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:15:58.860210 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:15:58.863732 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:15:58.867474 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:15:58.871598 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:15:58.877019 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:15:58.879571 systemd-modules-load[221]: Inserted module 'br_netfilter' Aug 13 00:15:58.880746 kernel: Bridge firewalling registered Aug 13 00:15:58.881147 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:15:58.882456 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 13 00:15:58.883116 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:15:58.888080 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:15:58.892181 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:15:58.900233 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:15:58.900533 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:15:58.905036 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Aug 13 00:15:58.907534 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:15:58.929685 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:15:58.963023 systemd-resolved[261]: Positive Trust Anchors: Aug 13 00:15:58.963043 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:15:58.963079 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:15:58.966111 systemd-resolved[261]: Defaulting to hostname 'linux'. Aug 13 00:15:58.967392 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:15:58.972754 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:15:59.121992 kernel: SCSI subsystem initialized Aug 13 00:15:59.130993 kernel: Loading iSCSI transport class v2.0-870. 
Aug 13 00:15:59.144002 kernel: iscsi: registered transport (tcp) Aug 13 00:15:59.172306 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:15:59.172331 kernel: QLogic iSCSI HBA Driver Aug 13 00:15:59.193900 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:15:59.210629 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:15:59.211096 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:15:59.277844 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 00:15:59.282571 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 00:15:59.361039 kernel: raid6: avx2x4 gen() 18451 MB/s Aug 13 00:15:59.378022 kernel: raid6: avx2x2 gen() 24881 MB/s Aug 13 00:15:59.395222 kernel: raid6: avx2x1 gen() 21314 MB/s Aug 13 00:15:59.395277 kernel: raid6: using algorithm avx2x2 gen() 24881 MB/s Aug 13 00:15:59.413283 kernel: raid6: .... xor() 10714 MB/s, rmw enabled Aug 13 00:15:59.413314 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:15:59.437029 kernel: xor: automatically using best checksumming function avx Aug 13 00:15:59.609019 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 00:15:59.618823 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:15:59.622749 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:15:59.653123 systemd-udevd[472]: Using default interface naming scheme 'v255'. Aug 13 00:15:59.658749 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:15:59.659743 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 00:15:59.680612 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Aug 13 00:15:59.710485 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Aug 13 00:15:59.713069 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:15:59.789393 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:15:59.792502 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 00:15:59.828009 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Aug 13 00:15:59.831141 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 13 00:15:59.836147 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:15:59.836187 kernel: GPT:9289727 != 19775487 Aug 13 00:15:59.836199 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:15:59.836209 kernel: GPT:9289727 != 19775487 Aug 13 00:15:59.836225 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:15:59.836236 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:15:59.847992 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Aug 13 00:15:59.856989 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:15:59.866992 kernel: AES CTR mode by8 optimization enabled Aug 13 00:15:59.868010 kernel: libata version 3.00 loaded. Aug 13 00:15:59.872021 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:15:59.872147 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:15:59.876396 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:15:59.882416 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:15:59.890336 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Aug 13 00:15:59.895025 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 00:15:59.896996 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 00:15:59.902026 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Aug 13 00:15:59.902207 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Aug 13 00:15:59.902360 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 00:15:59.911992 kernel: scsi host0: ahci Aug 13 00:15:59.914052 kernel: scsi host1: ahci Aug 13 00:15:59.914240 kernel: scsi host2: ahci Aug 13 00:15:59.915055 kernel: scsi host3: ahci Aug 13 00:15:59.915986 kernel: scsi host4: ahci Aug 13 00:15:59.917031 kernel: scsi host5: ahci Aug 13 00:15:59.917206 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Aug 13 00:15:59.919747 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Aug 13 00:15:59.919768 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Aug 13 00:15:59.919779 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Aug 13 00:15:59.919789 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Aug 13 00:15:59.919800 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Aug 13 00:15:59.927425 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 13 00:15:59.937672 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 13 00:15:59.954882 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 00:15:59.963828 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 13 00:15:59.963931 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Aug 13 00:15:59.970488 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 00:15:59.970565 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:15:59.970616 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:15:59.975453 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:15:59.977452 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:15:59.979232 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:16:00.006635 disk-uuid[634]: Primary Header is updated. Aug 13 00:16:00.006635 disk-uuid[634]: Secondary Entries is updated. Aug 13 00:16:00.006635 disk-uuid[634]: Secondary Header is updated. Aug 13 00:16:00.010997 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:16:00.015001 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:16:00.014959 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 00:16:00.236010 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 00:16:00.236085 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Aug 13 00:16:00.236097 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Aug 13 00:16:00.236108 kernel: ata3.00: applying bridge limits Aug 13 00:16:00.236118 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 00:16:00.237018 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 00:16:00.237993 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 00:16:00.238016 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 00:16:00.239002 kernel: ata3.00: configured for UDMA/100 Aug 13 00:16:00.240019 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Aug 13 00:16:00.283633 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Aug 13 00:16:00.283935 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 00:16:00.304037 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Aug 13 00:16:00.743402 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 00:16:00.744380 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:16:00.746928 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:16:00.747218 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:16:00.748706 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 00:16:00.784650 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:16:01.018012 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:16:01.018338 disk-uuid[637]: The operation has completed successfully. Aug 13 00:16:01.062101 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:16:01.062248 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Aug 13 00:16:01.102383 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 00:16:01.127741 sh[668]: Success Aug 13 00:16:01.155225 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:16:01.155450 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:16:01.155474 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 13 00:16:01.171046 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 13 00:16:01.209331 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 00:16:01.212773 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 00:16:01.235396 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 00:16:01.240533 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 13 00:16:01.240569 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (680) Aug 13 00:16:01.241876 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4 Aug 13 00:16:01.241897 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:16:01.243341 kernel: BTRFS info (device dm-0): using free-space-tree Aug 13 00:16:01.248532 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 00:16:01.251259 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 13 00:16:01.253876 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 00:16:01.257106 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 00:16:01.260235 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Aug 13 00:16:01.286017 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711) Aug 13 00:16:01.286089 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:16:01.286997 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:16:01.288383 kernel: BTRFS info (device vda6): using free-space-tree Aug 13 00:16:01.295989 kernel: BTRFS info (device vda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:16:01.296864 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 00:16:01.300074 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 00:16:01.425887 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:16:01.429581 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:16:01.484199 systemd-networkd[850]: lo: Link UP Aug 13 00:16:01.484210 systemd-networkd[850]: lo: Gained carrier Aug 13 00:16:01.485765 systemd-networkd[850]: Enumeration completed Aug 13 00:16:01.485991 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:16:01.486150 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:16:01.490042 ignition[756]: Ignition 2.21.0 Aug 13 00:16:01.486154 systemd-networkd[850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:16:01.490049 ignition[756]: Stage: fetch-offline Aug 13 00:16:01.486611 systemd-networkd[850]: eth0: Link UP Aug 13 00:16:01.490082 ignition[756]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:16:01.489720 systemd[1]: Reached target network.target - Network. 
Aug 13 00:16:01.490091 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:16:01.491146 systemd-networkd[850]: eth0: Gained carrier Aug 13 00:16:01.490188 ignition[756]: parsed url from cmdline: "" Aug 13 00:16:01.491155 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:16:01.490192 ignition[756]: no config URL provided Aug 13 00:16:01.490197 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:16:01.490206 ignition[756]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:16:01.490233 ignition[756]: op(1): [started] loading QEMU firmware config module Aug 13 00:16:01.511048 systemd-networkd[850]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 00:16:01.490238 ignition[756]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 13 00:16:01.502016 ignition[756]: op(1): [finished] loading QEMU firmware config module Aug 13 00:16:01.550459 ignition[756]: parsing config with SHA512: acad1367add8ed9dee92623db14927e855f1109ee36ebfcdd82d81c02aa413fb69777070ba7bb2e3905dc49f9a688dbbda1c7210af10d4ac4aa96cb2c66c6af3 Aug 13 00:16:01.557673 unknown[756]: fetched base config from "system" Aug 13 00:16:01.557688 unknown[756]: fetched user config from "qemu" Aug 13 00:16:01.558033 ignition[756]: fetch-offline: fetch-offline passed Aug 13 00:16:01.558094 ignition[756]: Ignition finished successfully Aug 13 00:16:01.562258 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:16:01.564720 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 00:16:01.566924 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 13 00:16:01.609472 ignition[863]: Ignition 2.21.0 Aug 13 00:16:01.609486 ignition[863]: Stage: kargs Aug 13 00:16:01.610425 ignition[863]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:16:01.611793 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:16:01.616576 ignition[863]: kargs: kargs passed Aug 13 00:16:01.616674 ignition[863]: Ignition finished successfully Aug 13 00:16:01.622542 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 00:16:01.624494 systemd-resolved[261]: Detected conflict on linux IN A 10.0.0.16 Aug 13 00:16:01.624519 systemd-resolved[261]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Aug 13 00:16:01.625569 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 00:16:01.679179 ignition[871]: Ignition 2.21.0 Aug 13 00:16:01.679195 ignition[871]: Stage: disks Aug 13 00:16:01.679403 ignition[871]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:16:01.679415 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:16:01.682995 ignition[871]: disks: disks passed Aug 13 00:16:01.683255 ignition[871]: Ignition finished successfully Aug 13 00:16:01.687513 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 00:16:01.687786 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 00:16:01.690448 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:16:01.692569 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:16:01.692782 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:16:01.693295 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:16:01.701770 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Aug 13 00:16:01.741891 systemd-fsck[881]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 13 00:16:01.749760 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 00:16:01.753965 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 00:16:01.872998 kernel: EXT4-fs (vda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none. Aug 13 00:16:01.873175 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 00:16:01.873803 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 00:16:01.877145 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:16:01.878133 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 00:16:01.879054 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 00:16:01.879094 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:16:01.879116 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:16:01.898779 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 00:16:01.901234 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 00:16:01.904586 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (889) Aug 13 00:16:01.906694 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:16:01.906723 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:16:01.906737 kernel: BTRFS info (device vda6): using free-space-tree Aug 13 00:16:01.911212 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 00:16:01.944736 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:16:01.948638 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:16:01.953480 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:16:01.958314 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:16:02.057252 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 00:16:02.059413 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 00:16:02.062026 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 00:16:02.089993 kernel: BTRFS info (device vda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:16:02.134873 ignition[1001]: INFO : Ignition 2.21.0 Aug 13 00:16:02.134873 ignition[1001]: INFO : Stage: mount Aug 13 00:16:02.137799 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:16:02.137799 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:16:02.140951 ignition[1001]: INFO : mount: mount passed Aug 13 00:16:02.140951 ignition[1001]: INFO : Ignition finished successfully Aug 13 00:16:02.145772 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:16:02.149641 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:16:02.242210 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 00:16:02.244154 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Aug 13 00:16:02.275999 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011)
Aug 13 00:16:02.278422 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:16:02.278558 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:16:02.278570 kernel: BTRFS info (device vda6): using free-space-tree
Aug 13 00:16:02.286053 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:16:02.343794 ignition[1028]: INFO : Ignition 2.21.0
Aug 13 00:16:02.343794 ignition[1028]: INFO : Stage: files
Aug 13 00:16:02.346198 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:16:02.346198 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:16:02.350673 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:16:02.352406 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:16:02.352406 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:16:02.355348 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:16:02.355348 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:16:02.358322 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:16:02.358322 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Aug 13 00:16:02.358322 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Aug 13 00:16:02.355394 unknown[1028]: wrote ssh authorized keys file for user: core
Aug 13 00:16:02.368586 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:16:02.401471 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:16:02.627821 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Aug 13 00:16:02.629895 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:16:02.631757 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:16:02.631757 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:16:02.631757 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:16:02.631757 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:16:02.631757 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:16:02.631757 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:16:02.631757 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:16:02.643593 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:16:02.643593 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:16:02.643593 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:16:02.649503 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:16:02.649503 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:16:02.653924 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Aug 13 00:16:03.067363 systemd-networkd[850]: eth0: Gained IPv6LL
Aug 13 00:16:03.120142 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 00:16:03.808648 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:16:03.808648 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 00:16:03.812815 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:16:03.817200 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:16:03.817200 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 00:16:03.817200 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 13 00:16:03.821794 ignition[1028]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:16:03.821794 ignition[1028]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:16:03.821794 ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 13 00:16:03.821794 ignition[1028]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:16:03.839748 ignition[1028]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:16:03.844489 ignition[1028]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:16:03.846035 ignition[1028]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:16:03.846035 ignition[1028]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:16:03.846035 ignition[1028]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:16:03.846035 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:16:03.846035 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:16:03.846035 ignition[1028]: INFO : files: files passed
Aug 13 00:16:03.846035 ignition[1028]: INFO : Ignition finished successfully
Aug 13 00:16:03.854434 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:16:03.858264 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:16:03.860817 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:16:03.881406 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:16:03.881564 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:16:03.885755 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 13 00:16:03.889549 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:16:03.892368 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:16:03.892368 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:16:03.893926 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:16:03.894305 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:16:03.900308 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:16:03.967285 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:16:03.967423 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:16:03.968674 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:16:03.970803 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:16:03.972722 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:16:03.973628 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:16:03.992715 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:16:03.995533 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:16:04.017638 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:16:04.017825 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:16:04.021070 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:16:04.023300 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:16:04.023431 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:16:04.027676 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:16:04.027814 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:16:04.030015 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:16:04.033116 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:16:04.034355 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:16:04.034720 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 00:16:04.035316 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:16:04.035710 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:16:04.036302 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:16:04.036691 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:16:04.037054 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:16:04.037536 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:16:04.037651 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:16:04.051351 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:16:04.052457 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:16:04.052727 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 00:16:04.056536 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:16:04.057600 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:16:04.057723 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:16:04.063539 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:16:04.063684 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:16:04.064807 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 00:16:04.068679 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:16:04.073066 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:16:04.074496 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 00:16:04.076083 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 00:16:04.076732 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:16:04.076874 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:16:04.079698 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:16:04.079817 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:16:04.081683 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:16:04.082000 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:16:04.084990 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:16:04.085243 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 00:16:04.091616 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 00:16:04.094242 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 00:16:04.095747 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:16:04.096197 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:16:04.099047 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:16:04.099619 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:16:04.110655 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:16:04.118156 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 00:16:04.149896 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:16:04.195050 ignition[1088]: INFO : Ignition 2.21.0
Aug 13 00:16:04.195050 ignition[1088]: INFO : Stage: umount
Aug 13 00:16:04.197117 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:16:04.197117 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:16:04.200728 ignition[1088]: INFO : umount: umount passed
Aug 13 00:16:04.201707 ignition[1088]: INFO : Ignition finished successfully
Aug 13 00:16:04.204384 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:16:04.204520 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 00:16:04.206613 systemd[1]: Stopped target network.target - Network.
Aug 13 00:16:04.207474 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:16:04.207549 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 00:16:04.207835 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:16:04.207895 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 00:16:04.208348 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:16:04.208416 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 00:16:04.208668 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 00:16:04.208726 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 00:16:04.209162 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 00:16:04.209619 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 00:16:04.232965 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:16:04.233214 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 00:16:04.237553 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 13 00:16:04.237838 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:16:04.238020 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 00:16:04.241937 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 13 00:16:04.242843 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Aug 13 00:16:04.244327 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:16:04.244370 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:16:04.248570 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 00:16:04.249486 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:16:04.249552 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:16:04.251699 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:16:04.251748 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:16:04.255028 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:16:04.255100 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:16:04.255996 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 00:16:04.256051 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:16:04.259925 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:16:04.264421 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 00:16:04.264493 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:16:04.285443 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:16:04.285672 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:16:04.288269 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:16:04.288318 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:16:04.290730 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:16:04.290770 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:16:04.292762 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:16:04.292821 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:16:04.295610 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:16:04.295666 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:16:04.299571 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:16:04.299632 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:16:04.303906 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 00:16:04.308353 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 13 00:16:04.308485 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:16:04.312327 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 00:16:04.312393 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:16:04.316476 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 00:16:04.316529 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:16:04.321071 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:16:04.321132 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:16:04.323611 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:16:04.323659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:16:04.327461 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Aug 13 00:16:04.327521 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Aug 13 00:16:04.327565 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 13 00:16:04.327611 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:16:04.327923 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:16:04.331213 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 00:16:04.340621 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:16:04.340736 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 00:16:04.386772 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:16:04.386941 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 00:16:04.389109 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 00:16:04.390813 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:16:04.390944 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 00:16:04.394156 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 00:16:04.420469 systemd[1]: Switching root.
Aug 13 00:16:04.462518 systemd-journald[219]: Journal stopped
Aug 13 00:16:06.028386 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
Aug 13 00:16:06.028474 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:16:06.028495 kernel: SELinux: policy capability open_perms=1
Aug 13 00:16:06.028510 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:16:06.028538 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:16:06.028555 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:16:06.028580 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:16:06.028596 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:16:06.028611 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:16:06.028640 kernel: SELinux: policy capability userspace_initial_context=0
Aug 13 00:16:06.028661 kernel: audit: type=1403 audit(1755044165.070:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 00:16:06.028678 systemd[1]: Successfully loaded SELinux policy in 51.980ms.
Aug 13 00:16:06.028710 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.340ms.
Aug 13 00:16:06.028735 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:16:06.028764 systemd[1]: Detected virtualization kvm.
Aug 13 00:16:06.028784 systemd[1]: Detected architecture x86-64.
Aug 13 00:16:06.028799 systemd[1]: Detected first boot.
Aug 13 00:16:06.028815 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:16:06.028832 zram_generator::config[1133]: No configuration found.
Aug 13 00:16:06.028849 kernel: Guest personality initialized and is inactive
Aug 13 00:16:06.028863 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 13 00:16:06.028878 kernel: Initialized host personality
Aug 13 00:16:06.028893 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 00:16:06.028923 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:16:06.028943 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 13 00:16:06.028985 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 00:16:06.029004 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 00:16:06.029020 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:16:06.029036 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 00:16:06.029061 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 00:16:06.029077 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 00:16:06.029102 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 00:16:06.029118 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 00:16:06.029134 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 00:16:06.029150 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 00:16:06.029168 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 00:16:06.029183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:16:06.029200 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:16:06.029216 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 00:16:06.029234 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 00:16:06.029261 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 00:16:06.029279 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:16:06.029296 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 00:16:06.029313 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:16:06.029329 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:16:06.029346 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 00:16:06.029362 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 00:16:06.029378 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:16:06.029402 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 00:16:06.029419 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:16:06.029435 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:16:06.029452 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:16:06.029469 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:16:06.029485 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 00:16:06.029500 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 00:16:06.029516 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 13 00:16:06.029533 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:16:06.029557 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:16:06.029573 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:16:06.029594 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 00:16:06.029609 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 00:16:06.029624 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 00:16:06.029640 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 00:16:06.029657 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:16:06.029673 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 00:16:06.029688 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 00:16:06.029713 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 00:16:06.029729 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 00:16:06.029745 systemd[1]: Reached target machines.target - Containers.
Aug 13 00:16:06.029761 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 00:16:06.029777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:16:06.029792 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:16:06.029808 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 00:16:06.029823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:16:06.029847 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:16:06.029864 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:16:06.029880 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 00:16:06.029899 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:16:06.029917 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:16:06.029934 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 00:16:06.029951 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 00:16:06.030008 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 00:16:06.030028 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 00:16:06.030067 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:16:06.030084 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:16:06.030101 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:16:06.030117 kernel: loop: module loaded
Aug 13 00:16:06.030134 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:16:06.030158 kernel: fuse: init (API version 7.41)
Aug 13 00:16:06.030175 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 00:16:06.030192 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 13 00:16:06.030209 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:16:06.030226 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 00:16:06.030250 systemd[1]: Stopped verity-setup.service.
Aug 13 00:16:06.030304 systemd-journald[1204]: Collecting audit messages is disabled.
Aug 13 00:16:06.030372 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:16:06.030404 systemd-journald[1204]: Journal started
Aug 13 00:16:06.030466 systemd-journald[1204]: Runtime Journal (/run/log/journal/91081bba270148488e3ae561e0cfb4a0) is 6M, max 48.5M, 42.4M free.
Aug 13 00:16:05.611957 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:16:05.624315 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 13 00:16:05.624834 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 00:16:06.032138 kernel: ACPI: bus type drm_connector registered
Aug 13 00:16:06.038076 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:16:06.039433 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 00:16:06.040933 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 00:16:06.042277 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 00:16:06.043376 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 00:16:06.044990 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 00:16:06.046202 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 00:16:06.047480 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 00:16:06.048945 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:16:06.050477 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:16:06.050703 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 00:16:06.052199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:16:06.052480 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:16:06.053888 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:16:06.054290 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:16:06.055629 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:16:06.055838 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:16:06.057324 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:16:06.057602 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:16:06.058998 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:16:06.059294 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:16:06.060731 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:16:06.062209 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:16:06.063873 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:16:06.065595 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 00:16:06.084729 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:16:06.087639 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:16:06.089947 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:16:06.091090 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:16:06.091123 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:16:06.093332 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 00:16:06.104123 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Aug 13 00:16:06.105537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:16:06.108097 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:16:06.111188 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:16:06.112365 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:16:06.114113 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:16:06.116120 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:16:06.118100 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:16:06.125546 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:16:06.130105 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:16:06.133182 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:16:06.134580 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:16:06.143390 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:16:06.147592 systemd-journald[1204]: Time spent on flushing to /var/log/journal/91081bba270148488e3ae561e0cfb4a0 is 16.705ms for 1075 entries. Aug 13 00:16:06.147592 systemd-journald[1204]: System Journal (/var/log/journal/91081bba270148488e3ae561e0cfb4a0) is 8M, max 195.6M, 187.6M free. Aug 13 00:16:06.170821 systemd-journald[1204]: Received client request to flush runtime journal. 
Aug 13 00:16:06.170860 kernel: loop0: detected capacity change from 0 to 146240 Aug 13 00:16:06.149898 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:16:06.152953 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:16:06.158446 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 00:16:06.172365 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:16:06.234064 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 00:16:06.239057 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:16:06.246252 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:16:06.256008 kernel: loop1: detected capacity change from 0 to 229808 Aug 13 00:16:06.256922 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Aug 13 00:16:06.256939 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Aug 13 00:16:06.263097 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:16:06.266143 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:16:06.291010 kernel: loop2: detected capacity change from 0 to 113872 Aug 13 00:16:06.324459 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:16:06.327666 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:16:06.330996 kernel: loop3: detected capacity change from 0 to 146240 Aug 13 00:16:06.342009 kernel: loop4: detected capacity change from 0 to 229808 Aug 13 00:16:06.361274 kernel: loop5: detected capacity change from 0 to 113872 Aug 13 00:16:06.407176 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Aug 13 00:16:06.407577 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. 
Aug 13 00:16:06.415420 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:16:06.463414 (sd-merge)[1278]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 13 00:16:06.464046 (sd-merge)[1278]: Merged extensions into '/usr'. Aug 13 00:16:06.471581 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:16:06.471630 systemd[1]: Reloading... Aug 13 00:16:06.533012 zram_generator::config[1303]: No configuration found. Aug 13 00:16:06.683649 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:16:06.793358 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:16:06.794354 systemd[1]: Reloading finished in 322 ms. Aug 13 00:16:06.799884 ldconfig[1247]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:16:06.835886 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:16:06.839759 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:16:06.884046 systemd[1]: Starting ensure-sysext.service... Aug 13 00:16:06.886498 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:16:06.907001 systemd[1]: Reload requested from client PID 1343 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:16:06.907038 systemd[1]: Reloading... Aug 13 00:16:07.033030 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 13 00:16:07.033075 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Aug 13 00:16:07.033406 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:16:07.033675 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:16:07.034746 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:16:07.037086 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Aug 13 00:16:07.037165 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Aug 13 00:16:07.041696 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:16:07.041798 systemd-tmpfiles[1344]: Skipping /boot Aug 13 00:16:07.047088 zram_generator::config[1371]: No configuration found. Aug 13 00:16:07.057229 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:16:07.057385 systemd-tmpfiles[1344]: Skipping /boot Aug 13 00:16:07.172836 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:16:07.260930 systemd[1]: Reloading finished in 353 ms. Aug 13 00:16:07.287445 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:16:07.314043 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:16:07.324633 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:16:07.328040 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:16:07.331533 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:16:07.341177 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Aug 13 00:16:07.346693 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:16:07.350990 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:16:07.356276 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:16:07.356524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:16:07.359363 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:16:07.362458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:16:07.366596 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:16:07.368158 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:16:07.368344 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:16:07.375304 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:16:07.376618 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:16:07.378469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:16:07.378784 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:16:07.381689 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:16:07.386528 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:16:07.386813 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Aug 13 00:16:07.390642 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:16:07.391367 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:16:07.400339 systemd-udevd[1414]: Using default interface naming scheme 'v255'. Aug 13 00:16:07.405711 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:16:07.406162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:16:07.408649 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:16:07.411866 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:16:07.416443 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:16:07.417853 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:16:07.418060 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:16:07.420117 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:16:07.423043 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:16:07.424519 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:16:07.426650 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:16:07.426902 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:16:07.437428 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 13 00:16:07.437667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:16:07.448538 augenrules[1449]: No rules Aug 13 00:16:07.452112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:16:07.457135 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:16:07.459698 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:16:07.461217 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:16:07.461416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:16:07.464901 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:16:07.466553 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:16:07.470290 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:16:07.472201 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:16:07.474250 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:16:07.476305 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:16:07.476834 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:16:07.478565 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:16:07.480277 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:16:07.495363 systemd[1]: Finished ensure-sysext.service. 
Aug 13 00:16:07.497098 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:16:07.498877 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:16:07.499203 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:16:07.500704 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:16:07.500914 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:16:07.522137 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:16:07.523902 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:16:07.524008 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:16:07.527126 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:16:07.528324 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:16:07.580733 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 00:16:07.622033 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:16:07.627332 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 00:16:07.630248 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:16:07.642017 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 00:16:07.649284 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:16:07.669332 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Aug 13 00:16:07.690876 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Aug 13 00:16:07.691264 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 00:16:07.691427 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 00:16:07.758140 systemd-resolved[1413]: Positive Trust Anchors: Aug 13 00:16:07.758171 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:16:07.758213 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:16:07.764285 systemd-networkd[1496]: lo: Link UP Aug 13 00:16:07.765019 systemd-networkd[1496]: lo: Gained carrier Aug 13 00:16:07.769525 systemd-networkd[1496]: Enumeration completed Aug 13 00:16:07.770117 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:16:07.774389 systemd-resolved[1413]: Defaulting to hostname 'linux'. Aug 13 00:16:07.774603 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 00:16:07.775589 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:16:07.775598 systemd-networkd[1496]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 13 00:16:07.778318 systemd-networkd[1496]: eth0: Link UP Aug 13 00:16:07.778818 systemd-networkd[1496]: eth0: Gained carrier Aug 13 00:16:07.779156 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:16:07.791772 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:16:07.795228 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:16:07.803713 systemd[1]: Reached target network.target - Network. Aug 13 00:16:07.805067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:16:07.839301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:16:07.847073 systemd-networkd[1496]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 00:16:07.847869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:16:07.850058 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:16:07.854432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:16:07.868766 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:16:09.089434 systemd-resolved[1413]: Clock change detected. Flushing caches. Aug 13 00:16:09.089827 systemd-timesyncd[1497]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 00:16:09.089895 systemd-timesyncd[1497]: Initial clock synchronization to Wed 2025-08-13 00:16:09.089348 UTC. Aug 13 00:16:09.090305 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:16:09.095910 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Aug 13 00:16:09.137187 kernel: kvm_amd: TSC scaling supported Aug 13 00:16:09.137303 kernel: kvm_amd: Nested Virtualization enabled Aug 13 00:16:09.137322 kernel: kvm_amd: Nested Paging enabled Aug 13 00:16:09.138159 kernel: kvm_amd: LBR virtualization supported Aug 13 00:16:09.138187 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Aug 13 00:16:09.139726 kernel: kvm_amd: Virtual GIF supported Aug 13 00:16:09.166468 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:16:09.168718 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:16:09.170082 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:16:09.171487 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:16:09.172789 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 00:16:09.174294 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:16:09.175603 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:16:09.176986 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:16:09.178335 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:16:09.178383 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:16:09.179407 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:16:09.185144 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:16:09.190608 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:16:09.201984 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Aug 13 00:16:09.203895 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 00:16:09.205209 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 00:16:09.207727 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:16:09.216794 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:16:09.219086 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 00:16:09.222112 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:16:09.225627 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:16:09.227885 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:16:09.230296 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:16:09.230342 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:16:09.232189 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:16:09.234638 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:16:09.236885 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:16:09.244982 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:16:09.248181 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:16:09.249267 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:16:09.250416 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 00:16:09.255828 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:16:09.256224 jq[1546]: false Aug 13 00:16:09.258643 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Aug 13 00:16:09.260969 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:16:09.263139 extend-filesystems[1547]: Found /dev/vda6 Aug 13 00:16:09.264465 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:16:09.269094 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Refreshing passwd entry cache Aug 13 00:16:09.269108 oslogin_cache_refresh[1548]: Refreshing passwd entry cache Aug 13 00:16:09.272207 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:16:09.272796 extend-filesystems[1547]: Found /dev/vda9 Aug 13 00:16:09.275409 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:16:09.276274 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:16:09.277784 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:16:09.284735 extend-filesystems[1547]: Checking size of /dev/vda9 Aug 13 00:16:09.282882 oslogin_cache_refresh[1548]: Failure getting users, quitting Aug 13 00:16:09.286114 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Failure getting users, quitting Aug 13 00:16:09.286114 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:16:09.286114 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Refreshing group entry cache Aug 13 00:16:09.281717 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:16:09.282915 oslogin_cache_refresh[1548]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Aug 13 00:16:09.282987 oslogin_cache_refresh[1548]: Refreshing group entry cache Aug 13 00:16:09.286754 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:16:09.288718 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:16:09.290830 oslogin_cache_refresh[1548]: Failure getting groups, quitting Aug 13 00:16:09.288963 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:16:09.290952 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Failure getting groups, quitting Aug 13 00:16:09.290952 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:16:09.290847 oslogin_cache_refresh[1548]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:16:09.291006 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:16:09.291318 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:16:09.292915 jq[1562]: true Aug 13 00:16:09.300322 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 00:16:09.303266 extend-filesystems[1547]: Resized partition /dev/vda9 Aug 13 00:16:09.306341 update_engine[1561]: I20250813 00:16:09.306254 1561 main.cc:92] Flatcar Update Engine starting Aug 13 00:16:09.306978 extend-filesystems[1578]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 00:16:09.307637 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 00:16:09.311109 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:16:09.311402 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Aug 13 00:16:09.312476 jq[1573]: true Aug 13 00:16:09.318094 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 00:16:09.329745 (ntainerd)[1579]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:16:09.340414 tar[1569]: linux-amd64/LICENSE Aug 13 00:16:09.341770 tar[1569]: linux-amd64/helm Aug 13 00:16:09.348696 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 00:16:09.364447 dbus-daemon[1544]: [system] SELinux support is enabled Aug 13 00:16:09.365018 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:16:09.411915 update_engine[1561]: I20250813 00:16:09.380847 1561 update_check_scheduler.cc:74] Next update check in 9m41s Aug 13 00:16:09.411957 extend-filesystems[1578]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 00:16:09.411957 extend-filesystems[1578]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:16:09.411957 extend-filesystems[1578]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 13 00:16:09.371223 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:16:09.418837 extend-filesystems[1547]: Resized filesystem in /dev/vda9 Aug 13 00:16:09.371247 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:16:09.372809 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:16:09.372844 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:16:09.380697 systemd[1]: Started update-engine.service - Update Engine. 
Aug 13 00:16:09.384254 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:16:09.414477 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:16:09.414808 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:16:09.460255 bash[1606]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:16:09.500027 systemd-logind[1557]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 00:16:09.500068 systemd-logind[1557]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:16:09.506738 systemd-logind[1557]: New seat seat0. Aug 13 00:16:09.619955 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:16:09.623349 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:16:09.631933 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 13 00:16:09.717484 locksmithd[1601]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:16:09.782017 sshd_keygen[1577]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:16:09.851325 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:16:09.855970 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:16:09.903634 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:16:09.904167 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:16:09.908075 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Aug 13 00:16:09.978950 containerd[1579]: time="2025-08-13T00:16:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 00:16:09.981045 containerd[1579]: time="2025-08-13T00:16:09.980997297Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 00:16:09.995718 containerd[1579]: time="2025-08-13T00:16:09.995020220Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.942µs" Aug 13 00:16:09.995718 containerd[1579]: time="2025-08-13T00:16:09.995057359Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 00:16:09.995718 containerd[1579]: time="2025-08-13T00:16:09.995079911Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 00:16:09.995718 containerd[1579]: time="2025-08-13T00:16:09.995270018Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 00:16:09.995718 containerd[1579]: time="2025-08-13T00:16:09.995283994Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 00:16:09.995718 containerd[1579]: time="2025-08-13T00:16:09.995314581Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:16:09.995718 containerd[1579]: time="2025-08-13T00:16:09.995391275Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:16:09.995718 containerd[1579]: time="2025-08-13T00:16:09.995416272Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 
00:16:09.995995 containerd[1579]: time="2025-08-13T00:16:09.995974890Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:16:09.996047 containerd[1579]: time="2025-08-13T00:16:09.996034561Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:16:09.996105 containerd[1579]: time="2025-08-13T00:16:09.996084635Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:16:09.996193 containerd[1579]: time="2025-08-13T00:16:09.996172390Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 00:16:09.996410 containerd[1579]: time="2025-08-13T00:16:09.996378586Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 00:16:09.996778 containerd[1579]: time="2025-08-13T00:16:09.996754752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:16:09.996891 containerd[1579]: time="2025-08-13T00:16:09.996873915Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:16:09.996960 containerd[1579]: time="2025-08-13T00:16:09.996946271Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 00:16:09.997035 containerd[1579]: time="2025-08-13T00:16:09.997022594Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 00:16:09.997646 
containerd[1579]: time="2025-08-13T00:16:09.997574880Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 00:16:09.997870 containerd[1579]: time="2025-08-13T00:16:09.997838103Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:16:10.004536 containerd[1579]: time="2025-08-13T00:16:10.004458897Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 00:16:10.004582 containerd[1579]: time="2025-08-13T00:16:10.004561700Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 00:16:10.004604 containerd[1579]: time="2025-08-13T00:16:10.004584493Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 00:16:10.004624 containerd[1579]: time="2025-08-13T00:16:10.004607045Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 00:16:10.004644 containerd[1579]: time="2025-08-13T00:16:10.004629277Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 00:16:10.004691 containerd[1579]: time="2025-08-13T00:16:10.004647521Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 00:16:10.004712 containerd[1579]: time="2025-08-13T00:16:10.004690992Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 00:16:10.004748 containerd[1579]: time="2025-08-13T00:16:10.004710038Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 00:16:10.004748 containerd[1579]: time="2025-08-13T00:16:10.004727070Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 00:16:10.004748 containerd[1579]: 
time="2025-08-13T00:16:10.004740555Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 00:16:10.004803 containerd[1579]: time="2025-08-13T00:16:10.004754531Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 00:16:10.004803 containerd[1579]: time="2025-08-13T00:16:10.004772064Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 00:16:10.005015 containerd[1579]: time="2025-08-13T00:16:10.004991856Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 00:16:10.005047 containerd[1579]: time="2025-08-13T00:16:10.005028846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 00:16:10.005068 containerd[1579]: time="2025-08-13T00:16:10.005049555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 00:16:10.005087 containerd[1579]: time="2025-08-13T00:16:10.005066025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 13 00:16:10.005202 containerd[1579]: time="2025-08-13T00:16:10.005141437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 00:16:10.005233 containerd[1579]: time="2025-08-13T00:16:10.005210627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 00:16:10.005280 containerd[1579]: time="2025-08-13T00:16:10.005233109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 00:16:10.005280 containerd[1579]: time="2025-08-13T00:16:10.005263466Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 13 00:16:10.005341 containerd[1579]: time="2025-08-13T00:16:10.005290476Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 00:16:10.005341 containerd[1579]: time="2025-08-13T00:16:10.005325041Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 00:16:10.005379 containerd[1579]: time="2025-08-13T00:16:10.005351551Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 00:16:10.005481 containerd[1579]: time="2025-08-13T00:16:10.005458822Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 00:16:10.005505 containerd[1579]: time="2025-08-13T00:16:10.005486424Z" level=info msg="Start snapshots syncer" Aug 13 00:16:10.005526 containerd[1579]: time="2025-08-13T00:16:10.005516320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 00:16:10.005946 containerd[1579]: time="2025-08-13T00:16:10.005875794Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 00:16:10.006068 containerd[1579]: time="2025-08-13T00:16:10.005960603Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 00:16:10.006143 containerd[1579]: time="2025-08-13T00:16:10.006094494Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 00:16:10.006336 containerd[1579]: time="2025-08-13T00:16:10.006253101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 00:16:10.006336 containerd[1579]: time="2025-08-13T00:16:10.006288598Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 00:16:10.006336 containerd[1579]: time="2025-08-13T00:16:10.006312352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 00:16:10.006462 containerd[1579]: time="2025-08-13T00:16:10.006336928Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 00:16:10.006462 containerd[1579]: time="2025-08-13T00:16:10.006382514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 00:16:10.006462 containerd[1579]: time="2025-08-13T00:16:10.006427288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 00:16:10.006462 containerd[1579]: time="2025-08-13T00:16:10.006456883Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 00:16:10.006595 containerd[1579]: time="2025-08-13T00:16:10.006524049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 00:16:10.006595 containerd[1579]: time="2025-08-13T00:16:10.006560357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 00:16:10.006595 containerd[1579]: time="2025-08-13T00:16:10.006589462Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 00:16:10.035182 containerd[1579]: time="2025-08-13T00:16:10.034870349Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:16:10.035321 containerd[1579]: time="2025-08-13T00:16:10.035259127Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:16:10.035321 containerd[1579]: time="2025-08-13T00:16:10.035279075Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:16:10.035321 containerd[1579]: time="2025-08-13T00:16:10.035297309Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:16:10.035321 containerd[1579]: time="2025-08-13T00:16:10.035317828Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 00:16:10.035503 containerd[1579]: time="2025-08-13T00:16:10.035345349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 00:16:10.035503 containerd[1579]: time="2025-08-13T00:16:10.035378822Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 00:16:10.035503 containerd[1579]: time="2025-08-13T00:16:10.035426912Z" level=info msg="runtime interface created" Aug 13 00:16:10.035503 containerd[1579]: time="2025-08-13T00:16:10.035453121Z" level=info msg="created NRI interface" Aug 13 00:16:10.035503 containerd[1579]: time="2025-08-13T00:16:10.035493287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 00:16:10.035768 containerd[1579]: time="2025-08-13T00:16:10.035556134Z" level=info msg="Connect containerd service" Aug 13 00:16:10.035873 containerd[1579]: time="2025-08-13T00:16:10.035822163Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:16:10.038842 
containerd[1579]: time="2025-08-13T00:16:10.038753369Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:16:10.111906 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:16:10.120881 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:16:10.125385 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:16:10.127308 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:16:10.333371 containerd[1579]: time="2025-08-13T00:16:10.333085031Z" level=info msg="Start subscribing containerd event" Aug 13 00:16:10.333371 containerd[1579]: time="2025-08-13T00:16:10.333301908Z" level=info msg="Start recovering state" Aug 13 00:16:10.333755 containerd[1579]: time="2025-08-13T00:16:10.333581572Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:16:10.333755 containerd[1579]: time="2025-08-13T00:16:10.333696377Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:16:10.334593 containerd[1579]: time="2025-08-13T00:16:10.334570747Z" level=info msg="Start event monitor" Aug 13 00:16:10.334880 containerd[1579]: time="2025-08-13T00:16:10.334620640Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:16:10.334880 containerd[1579]: time="2025-08-13T00:16:10.334634156Z" level=info msg="Start streaming server" Aug 13 00:16:10.334880 containerd[1579]: time="2025-08-13T00:16:10.334705219Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 00:16:10.334880 containerd[1579]: time="2025-08-13T00:16:10.334744322Z" level=info msg="runtime interface starting up..." Aug 13 00:16:10.334880 containerd[1579]: time="2025-08-13T00:16:10.334760172Z" level=info msg="starting plugins..." 
Aug 13 00:16:10.334880 containerd[1579]: time="2025-08-13T00:16:10.334805457Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 00:16:10.335340 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:16:10.335540 containerd[1579]: time="2025-08-13T00:16:10.335485692Z" level=info msg="containerd successfully booted in 0.359896s" Aug 13 00:16:10.412915 tar[1569]: linux-amd64/README.md Aug 13 00:16:10.455717 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:16:10.621892 systemd-networkd[1496]: eth0: Gained IPv6LL Aug 13 00:16:10.625604 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:16:10.627610 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:16:10.630814 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 13 00:16:10.633502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:16:10.636256 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:16:10.665935 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:16:10.667758 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 13 00:16:10.668023 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 13 00:16:10.670542 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:16:12.840944 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:16:12.845895 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:54764.service - OpenSSH per-connection server daemon (10.0.0.1:54764). 
Aug 13 00:16:12.922842 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 54764 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA Aug 13 00:16:12.925250 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:16:12.940327 systemd-logind[1557]: New session 1 of user core. Aug 13 00:16:12.941898 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:16:12.944669 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:16:12.987779 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:16:12.993375 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:16:13.014031 (systemd)[1679]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:13.017170 systemd-logind[1557]: New session c1 of user core. Aug 13 00:16:13.058187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:16:13.060387 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:16:13.134269 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:16:13.241207 systemd[1679]: Queued start job for default target default.target. Aug 13 00:16:13.310822 systemd[1679]: Created slice app.slice - User Application Slice. Aug 13 00:16:13.310867 systemd[1679]: Reached target paths.target - Paths. Aug 13 00:16:13.310929 systemd[1679]: Reached target timers.target - Timers. Aug 13 00:16:13.313184 systemd[1679]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:16:13.328901 systemd[1679]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:16:13.329098 systemd[1679]: Reached target sockets.target - Sockets. Aug 13 00:16:13.329371 systemd[1679]: Reached target basic.target - Basic System. 
Aug 13 00:16:13.329458 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:16:13.329625 systemd[1679]: Reached target default.target - Main User Target. Aug 13 00:16:13.329681 systemd[1679]: Startup finished in 302ms. Aug 13 00:16:13.341898 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:16:13.343643 systemd[1]: Startup finished in 2.945s (kernel) + 6.435s (initrd) + 7.104s (userspace) = 16.485s. Aug 13 00:16:13.441685 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:54766.service - OpenSSH per-connection server daemon (10.0.0.1:54766). Aug 13 00:16:13.511145 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 54766 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA Aug 13 00:16:13.512821 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:16:13.517571 systemd-logind[1557]: New session 2 of user core. Aug 13 00:16:13.531900 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:16:13.587279 sshd[1707]: Connection closed by 10.0.0.1 port 54766 Aug 13 00:16:13.587602 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:13.601583 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:54766.service: Deactivated successfully. Aug 13 00:16:13.605087 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:16:13.606236 systemd-logind[1557]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:16:13.612925 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:54770.service - OpenSSH per-connection server daemon (10.0.0.1:54770). Aug 13 00:16:13.614337 systemd-logind[1557]: Removed session 2. 
Aug 13 00:16:13.687307 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 54770 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA Aug 13 00:16:13.689704 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:16:13.695166 systemd-logind[1557]: New session 3 of user core. Aug 13 00:16:13.711905 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:16:13.764067 sshd[1716]: Connection closed by 10.0.0.1 port 54770 Aug 13 00:16:13.764349 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:13.776929 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:54770.service: Deactivated successfully. Aug 13 00:16:13.778798 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:16:13.779589 systemd-logind[1557]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:16:13.782954 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:54780.service - OpenSSH per-connection server daemon (10.0.0.1:54780). Aug 13 00:16:13.784251 systemd-logind[1557]: Removed session 3. Aug 13 00:16:13.861978 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 54780 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA Aug 13 00:16:13.863349 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:16:13.867745 systemd-logind[1557]: New session 4 of user core. Aug 13 00:16:13.877816 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:16:13.935507 sshd[1724]: Connection closed by 10.0.0.1 port 54780 Aug 13 00:16:13.936091 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:13.945495 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:54780.service: Deactivated successfully. Aug 13 00:16:13.947375 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:16:13.948615 systemd-logind[1557]: Session 4 logged out. Waiting for processes to exit. 
Aug 13 00:16:13.951492 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:54796.service - OpenSSH per-connection server daemon (10.0.0.1:54796). Aug 13 00:16:13.952274 systemd-logind[1557]: Removed session 4. Aug 13 00:16:13.993084 kubelet[1688]: E0813 00:16:13.993004 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:16:13.997289 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:16:13.997490 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:16:13.997937 systemd[1]: kubelet.service: Consumed 3.092s CPU time, 264.2M memory peak. Aug 13 00:16:14.002231 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 54796 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA Aug 13 00:16:14.003835 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:16:14.008421 systemd-logind[1557]: New session 5 of user core. Aug 13 00:16:14.015779 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:16:14.075135 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:16:14.075440 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:16:14.097503 sudo[1734]: pam_unix(sudo:session): session closed for user root Aug 13 00:16:14.099321 sshd[1733]: Connection closed by 10.0.0.1 port 54796 Aug 13 00:16:14.099689 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:14.113495 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:54796.service: Deactivated successfully. Aug 13 00:16:14.115332 systemd[1]: session-5.scope: Deactivated successfully. 
Aug 13 00:16:14.116142 systemd-logind[1557]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:16:14.119198 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:54806.service - OpenSSH per-connection server daemon (10.0.0.1:54806). Aug 13 00:16:14.119991 systemd-logind[1557]: Removed session 5. Aug 13 00:16:14.173609 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 54806 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA Aug 13 00:16:14.175486 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:16:14.180496 systemd-logind[1557]: New session 6 of user core. Aug 13 00:16:14.191811 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:16:14.245229 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:16:14.245597 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:16:14.285572 sudo[1744]: pam_unix(sudo:session): session closed for user root Aug 13 00:16:14.294821 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 00:16:14.295311 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:16:14.306785 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:16:14.361096 augenrules[1766]: No rules Aug 13 00:16:14.363124 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:16:14.363430 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:16:14.364790 sudo[1743]: pam_unix(sudo:session): session closed for user root Aug 13 00:16:14.366510 sshd[1742]: Connection closed by 10.0.0.1 port 54806 Aug 13 00:16:14.366899 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:14.379042 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:54806.service: Deactivated successfully. 
Aug 13 00:16:14.381226 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:16:14.382019 systemd-logind[1557]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:16:14.385135 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:54814.service - OpenSSH per-connection server daemon (10.0.0.1:54814). Aug 13 00:16:14.385916 systemd-logind[1557]: Removed session 6. Aug 13 00:16:14.438762 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 54814 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA Aug 13 00:16:14.440206 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:16:14.444978 systemd-logind[1557]: New session 7 of user core. Aug 13 00:16:14.455922 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:16:14.511124 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:16:14.511503 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:16:15.199996 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:16:15.233756 (dockerd)[1799]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:16:15.800295 dockerd[1799]: time="2025-08-13T00:16:15.800188178Z" level=info msg="Starting up" Aug 13 00:16:15.801644 dockerd[1799]: time="2025-08-13T00:16:15.801612068Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 00:16:16.990195 dockerd[1799]: time="2025-08-13T00:16:16.990111604Z" level=info msg="Loading containers: start." Aug 13 00:16:17.012708 kernel: Initializing XFRM netlink socket Aug 13 00:16:17.327224 systemd-networkd[1496]: docker0: Link UP Aug 13 00:16:17.333073 dockerd[1799]: time="2025-08-13T00:16:17.333017833Z" level=info msg="Loading containers: done." 
Aug 13 00:16:17.355836 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2421081446-merged.mount: Deactivated successfully. Aug 13 00:16:17.357426 dockerd[1799]: time="2025-08-13T00:16:17.357371626Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:16:17.357524 dockerd[1799]: time="2025-08-13T00:16:17.357505678Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 00:16:17.357697 dockerd[1799]: time="2025-08-13T00:16:17.357673653Z" level=info msg="Initializing buildkit" Aug 13 00:16:17.394297 dockerd[1799]: time="2025-08-13T00:16:17.394228734Z" level=info msg="Completed buildkit initialization" Aug 13 00:16:17.402475 dockerd[1799]: time="2025-08-13T00:16:17.402400796Z" level=info msg="Daemon has completed initialization" Aug 13 00:16:17.402642 dockerd[1799]: time="2025-08-13T00:16:17.402511744Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:16:17.402710 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:16:18.170187 containerd[1579]: time="2025-08-13T00:16:18.170136761Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 00:16:19.021006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4127096311.mount: Deactivated successfully. 
Aug 13 00:16:21.667874 containerd[1579]: time="2025-08-13T00:16:21.667761086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:21.756968 containerd[1579]: time="2025-08-13T00:16:21.756878300Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=30078237" Aug 13 00:16:21.820798 containerd[1579]: time="2025-08-13T00:16:21.820717208Z" level=info msg="ImageCreate event name:\"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:21.967865 containerd[1579]: time="2025-08-13T00:16:21.967683841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:21.969018 containerd[1579]: time="2025-08-13T00:16:21.968984700Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"30075037\" in 3.798791002s" Aug 13 00:16:21.969085 containerd[1579]: time="2025-08-13T00:16:21.969027971Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\"" Aug 13 00:16:21.970286 containerd[1579]: time="2025-08-13T00:16:21.970247788Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 00:16:24.090891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Aug 13 00:16:24.092474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:16:24.361999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:16:24.383158 (kubelet)[2072]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:16:24.811601 kubelet[2072]: E0813 00:16:24.811440 2072 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:16:24.817559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:16:24.817832 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:16:24.818347 systemd[1]: kubelet.service: Consumed 312ms CPU time, 111.8M memory peak.
Aug 13 00:16:27.847474 containerd[1579]: time="2025-08-13T00:16:27.847409505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:27.848328 containerd[1579]: time="2025-08-13T00:16:27.848296568Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=26019361"
Aug 13 00:16:27.849543 containerd[1579]: time="2025-08-13T00:16:27.849507689Z" level=info msg="ImageCreate event name:\"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:27.852170 containerd[1579]: time="2025-08-13T00:16:27.852120588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:27.853161 containerd[1579]: time="2025-08-13T00:16:27.853009845Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"27646922\" in 5.882717163s"
Aug 13 00:16:27.853220 containerd[1579]: time="2025-08-13T00:16:27.853170567Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\""
Aug 13 00:16:27.853731 containerd[1579]: time="2025-08-13T00:16:27.853708415Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\""
Aug 13 00:16:34.840977 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 00:16:34.842737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:16:35.052790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:16:35.056991 (kubelet)[2096]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:16:35.113317 kubelet[2096]: E0813 00:16:35.113166 2096 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:16:35.116616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:16:35.116830 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:16:35.117238 systemd[1]: kubelet.service: Consumed 235ms CPU time, 108.9M memory peak.
Aug 13 00:16:35.319212 containerd[1579]: time="2025-08-13T00:16:35.319144396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:35.320028 containerd[1579]: time="2025-08-13T00:16:35.320007143Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=20155013"
Aug 13 00:16:35.321458 containerd[1579]: time="2025-08-13T00:16:35.321433287Z" level=info msg="ImageCreate event name:\"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:35.323909 containerd[1579]: time="2025-08-13T00:16:35.323886597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:35.325052 containerd[1579]: time="2025-08-13T00:16:35.325006327Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"21782592\" in 7.471262806s"
Aug 13 00:16:35.325052 containerd[1579]: time="2025-08-13T00:16:35.325051391Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\""
Aug 13 00:16:35.325609 containerd[1579]: time="2025-08-13T00:16:35.325580734Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\""
Aug 13 00:16:36.450276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2864243717.mount: Deactivated successfully.
Aug 13 00:16:37.654620 containerd[1579]: time="2025-08-13T00:16:37.654534882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:37.655925 containerd[1579]: time="2025-08-13T00:16:37.655852162Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31892666"
Aug 13 00:16:37.657028 containerd[1579]: time="2025-08-13T00:16:37.656969457Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:37.659006 containerd[1579]: time="2025-08-13T00:16:37.658932578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:37.659840 containerd[1579]: time="2025-08-13T00:16:37.659773535Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 2.334164408s"
Aug 13 00:16:37.659840 containerd[1579]: time="2025-08-13T00:16:37.659825432Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\""
Aug 13 00:16:37.660448 containerd[1579]: time="2025-08-13T00:16:37.660402344Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Aug 13 00:16:38.576705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1516092561.mount: Deactivated successfully.
Aug 13 00:16:39.722764 containerd[1579]: time="2025-08-13T00:16:39.722680184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:39.724690 containerd[1579]: time="2025-08-13T00:16:39.724173334Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Aug 13 00:16:39.726698 containerd[1579]: time="2025-08-13T00:16:39.726602258Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:39.730853 containerd[1579]: time="2025-08-13T00:16:39.730774381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:39.731876 containerd[1579]: time="2025-08-13T00:16:39.731797669Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.071349219s"
Aug 13 00:16:39.731876 containerd[1579]: time="2025-08-13T00:16:39.731863403Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Aug 13 00:16:39.732542 containerd[1579]: time="2025-08-13T00:16:39.732477654Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 00:16:40.551718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount401448181.mount: Deactivated successfully.
Aug 13 00:16:40.577641 containerd[1579]: time="2025-08-13T00:16:40.577561396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:16:40.579627 containerd[1579]: time="2025-08-13T00:16:40.579423027Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Aug 13 00:16:40.580955 containerd[1579]: time="2025-08-13T00:16:40.580874839Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:16:40.594546 containerd[1579]: time="2025-08-13T00:16:40.593057652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:16:40.594546 containerd[1579]: time="2025-08-13T00:16:40.594136885Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 861.587707ms"
Aug 13 00:16:40.594546 containerd[1579]: time="2025-08-13T00:16:40.594185286Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 00:16:40.595783 containerd[1579]: time="2025-08-13T00:16:40.595696099Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Aug 13 00:16:41.610122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3570627965.mount: Deactivated successfully.
Aug 13 00:16:45.341037 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 13 00:16:45.343699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:16:45.551305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:16:45.566009 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:16:45.634587 kubelet[2229]: E0813 00:16:45.634423 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:16:45.640240 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:16:45.640527 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:16:45.641326 systemd[1]: kubelet.service: Consumed 256ms CPU time, 110.1M memory peak.
Aug 13 00:16:48.608602 containerd[1579]: time="2025-08-13T00:16:48.608536721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:48.671845 containerd[1579]: time="2025-08-13T00:16:48.671780009Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175"
Aug 13 00:16:48.741345 containerd[1579]: time="2025-08-13T00:16:48.741265161Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:48.824950 containerd[1579]: time="2025-08-13T00:16:48.824893890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:16:48.825946 containerd[1579]: time="2025-08-13T00:16:48.825902107Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 8.230159951s"
Aug 13 00:16:48.825946 containerd[1579]: time="2025-08-13T00:16:48.825942504Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Aug 13 00:16:51.815578 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:16:51.815756 systemd[1]: kubelet.service: Consumed 256ms CPU time, 110.1M memory peak.
Aug 13 00:16:51.817957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:16:51.845789 systemd[1]: Reload requested from client PID 2271 ('systemctl') (unit session-7.scope)...
Aug 13 00:16:51.845805 systemd[1]: Reloading...
Aug 13 00:16:51.942005 zram_generator::config[2318]: No configuration found.
Aug 13 00:16:52.380780 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:16:52.515914 systemd[1]: Reloading finished in 669 ms.
Aug 13 00:16:52.586336 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 00:16:52.586434 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 00:16:52.586737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:16:52.586788 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.2M memory peak.
Aug 13 00:16:52.588412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:16:52.796148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:16:52.800213 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 00:16:52.839871 kubelet[2363]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:16:52.839871 kubelet[2363]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 00:16:52.839871 kubelet[2363]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:16:52.839871 kubelet[2363]: I0813 00:16:52.839555 2363 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 00:16:53.886619 kubelet[2363]: I0813 00:16:53.886561 2363 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Aug 13 00:16:53.886619 kubelet[2363]: I0813 00:16:53.886594 2363 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 00:16:53.887208 kubelet[2363]: I0813 00:16:53.886901 2363 server.go:956] "Client rotation is on, will bootstrap in background"
Aug 13 00:16:53.913744 kubelet[2363]: I0813 00:16:53.913699 2363 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:16:53.914394 kubelet[2363]: E0813 00:16:53.914357 2363 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Aug 13 00:16:53.923941 kubelet[2363]: I0813 00:16:53.923907 2363 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 13 00:16:53.929919 kubelet[2363]: I0813 00:16:53.929868 2363 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:16:53.930234 kubelet[2363]: I0813 00:16:53.930181 2363 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 00:16:53.930436 kubelet[2363]: I0813 00:16:53.930211 2363 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 00:16:53.930571 kubelet[2363]: I0813 00:16:53.930446 2363 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:16:53.930571 kubelet[2363]: I0813 00:16:53.930459 2363 container_manager_linux.go:303] "Creating device plugin manager"
Aug 13 00:16:53.930707 kubelet[2363]: I0813 00:16:53.930684 2363 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:16:53.934722 kubelet[2363]: I0813 00:16:53.934684 2363 kubelet.go:480] "Attempting to sync node with API server"
Aug 13 00:16:53.934791 kubelet[2363]: I0813 00:16:53.934732 2363 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:16:53.934831 kubelet[2363]: I0813 00:16:53.934796 2363 kubelet.go:386] "Adding apiserver pod source"
Aug 13 00:16:53.934831 kubelet[2363]: I0813 00:16:53.934823 2363 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:16:53.941471 kubelet[2363]: I0813 00:16:53.941197 2363 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Aug 13 00:16:53.941471 kubelet[2363]: E0813 00:16:53.941303 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Aug 13 00:16:53.941471 kubelet[2363]: E0813 00:16:53.941387 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Aug 13 00:16:53.941841 kubelet[2363]: I0813 00:16:53.941808 2363 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Aug 13 00:16:53.942468 kubelet[2363]: W0813 00:16:53.942437 2363 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 00:16:53.945925 kubelet[2363]: I0813 00:16:53.945893 2363 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 00:16:53.945992 kubelet[2363]: I0813 00:16:53.945944 2363 server.go:1289] "Started kubelet"
Aug 13 00:16:53.948332 kubelet[2363]: I0813 00:16:53.947667 2363 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:16:53.950272 kubelet[2363]: I0813 00:16:53.949561 2363 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:16:53.950272 kubelet[2363]: I0813 00:16:53.949587 2363 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:16:53.950647 kubelet[2363]: I0813 00:16:53.950624 2363 server.go:317] "Adding debug handlers to kubelet server"
Aug 13 00:16:53.952041 kubelet[2363]: I0813 00:16:53.951368 2363 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 00:16:53.952041 kubelet[2363]: I0813 00:16:53.951468 2363 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:16:53.952041 kubelet[2363]: E0813 00:16:53.951849 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:16:53.952041 kubelet[2363]: E0813 00:16:53.951939 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="200ms"
Aug 13 00:16:53.955564 kubelet[2363]: I0813 00:16:53.949563 2363 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:16:53.955696 kubelet[2363]: E0813 00:16:53.951886 2363 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2b69f959d64a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:16:53.945914954 +0000 UTC m=+1.141766990,LastTimestamp:2025-08-13 00:16:53.945914954 +0000 UTC m=+1.141766990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 13 00:16:53.956362 kubelet[2363]: I0813 00:16:53.956342 2363 factory.go:223] Registration of the systemd container factory successfully
Aug 13 00:16:53.956449 kubelet[2363]: I0813 00:16:53.956427 2363 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 00:16:53.956671 kubelet[2363]: E0813 00:16:53.956635 2363 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 00:16:53.957400 kubelet[2363]: I0813 00:16:53.957361 2363 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 00:16:53.957899 kubelet[2363]: I0813 00:16:53.957881 2363 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 00:16:53.957996 kubelet[2363]: I0813 00:16:53.957975 2363 factory.go:223] Registration of the containerd container factory successfully
Aug 13 00:16:53.958372 kubelet[2363]: E0813 00:16:53.958339 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Aug 13 00:16:53.968913 kubelet[2363]: I0813 00:16:53.968882 2363 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 00:16:53.968913 kubelet[2363]: I0813 00:16:53.968900 2363 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 00:16:53.968913 kubelet[2363]: I0813 00:16:53.968918 2363 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:16:53.972269 kubelet[2363]: I0813 00:16:53.972231 2363 policy_none.go:49] "None policy: Start"
Aug 13 00:16:53.972269 kubelet[2363]: I0813 00:16:53.972262 2363 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 00:16:53.972350 kubelet[2363]: I0813 00:16:53.972284 2363 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 00:16:53.978745 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 13 00:16:53.980098 kubelet[2363]: I0813 00:16:53.980041 2363 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Aug 13 00:16:53.981943 kubelet[2363]: I0813 00:16:53.981853 2363 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Aug 13 00:16:53.981943 kubelet[2363]: I0813 00:16:53.981908 2363 status_manager.go:230] "Starting to sync pod status with apiserver"
Aug 13 00:16:53.982301 kubelet[2363]: I0813 00:16:53.982274 2363 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 00:16:53.982301 kubelet[2363]: I0813 00:16:53.982302 2363 kubelet.go:2436] "Starting kubelet main sync loop"
Aug 13 00:16:53.982450 kubelet[2363]: E0813 00:16:53.982351 2363 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 00:16:53.984468 kubelet[2363]: E0813 00:16:53.984406 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Aug 13 00:16:53.990856 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 13 00:16:53.994392 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 13 00:16:54.004543 kubelet[2363]: E0813 00:16:54.004515 2363 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Aug 13 00:16:54.004826 kubelet[2363]: I0813 00:16:54.004801 2363 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 00:16:54.004888 kubelet[2363]: I0813 00:16:54.004821 2363 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 00:16:54.005247 kubelet[2363]: I0813 00:16:54.005213 2363 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 00:16:54.006043 kubelet[2363]: E0813 00:16:54.006012 2363 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 13 00:16:54.006104 kubelet[2363]: E0813 00:16:54.006084 2363 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 13 00:16:54.094348 systemd[1]: Created slice kubepods-burstable-pod98f966b8f92114dec57c74a489af7ff0.slice - libcontainer container kubepods-burstable-pod98f966b8f92114dec57c74a489af7ff0.slice.
Aug 13 00:16:54.106219 kubelet[2363]: I0813 00:16:54.106165 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 13 00:16:54.106567 kubelet[2363]: E0813 00:16:54.106533 2363 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost"
Aug 13 00:16:54.112272 kubelet[2363]: E0813 00:16:54.112230 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 13 00:16:54.115497 systemd[1]: Created slice kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice - libcontainer container kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice.
Aug 13 00:16:54.123917 kubelet[2363]: E0813 00:16:54.123878 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 13 00:16:54.126791 systemd[1]: Created slice kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice - libcontainer container kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice.
Aug 13 00:16:54.128640 kubelet[2363]: E0813 00:16:54.128603 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 13 00:16:54.153162 kubelet[2363]: E0813 00:16:54.153066 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="400ms"
Aug 13 00:16:54.159528 kubelet[2363]: I0813 00:16:54.159487 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:16:54.159528 kubelet[2363]: I0813 00:16:54.159520 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost"
Aug 13 00:16:54.159620 kubelet[2363]: I0813 00:16:54.159537 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98f966b8f92114dec57c74a489af7ff0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"98f966b8f92114dec57c74a489af7ff0\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:16:54.159620 kubelet[2363]: I0813 00:16:54.159563 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:16:54.159705 kubelet[2363]: I0813 00:16:54.159612 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:16:54.159705 kubelet[2363]: I0813 00:16:54.159692 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:16:54.159796 kubelet[2363]: I0813 00:16:54.159710 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98f966b8f92114dec57c74a489af7ff0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"98f966b8f92114dec57c74a489af7ff0\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:16:54.159796 kubelet[2363]: I0813 00:16:54.159775 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98f966b8f92114dec57c74a489af7ff0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"98f966b8f92114dec57c74a489af7ff0\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:16:54.159856 kubelet[2363]: I0813 00:16:54.159801 2363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:16:54.308544 kubelet[2363]: I0813 00:16:54.308497 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 13 00:16:54.308971 kubelet[2363]: E0813 00:16:54.308939 2363 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost"
Aug 13 00:16:54.413374 kubelet[2363]: E0813 00:16:54.413248 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:16:54.414057 containerd[1579]: time="2025-08-13T00:16:54.414017446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:98f966b8f92114dec57c74a489af7ff0,Namespace:kube-system,Attempt:0,}"
Aug 13 00:16:54.425363 kubelet[2363]: E0813 00:16:54.425338 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:16:54.425971 containerd[1579]: time="2025-08-13T00:16:54.425919813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,}"
Aug 13 00:16:54.429320 kubelet[2363]: E0813 00:16:54.429278 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:16:54.429649 containerd[1579]: time="2025-08-13T00:16:54.429617781Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,}" Aug 13 00:16:54.456898 containerd[1579]: time="2025-08-13T00:16:54.456813952Z" level=info msg="connecting to shim be1a54e00970d33eb892b1e002fa7cd43a625f56ad9bd4a27741d874f2c2b798" address="unix:///run/containerd/s/818a32c6f5bed0f434c7df742461b9124f95bd00b352835998ca08aff203cbd0" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:16:54.471003 containerd[1579]: time="2025-08-13T00:16:54.470923696Z" level=info msg="connecting to shim 586b76001550dbe67de9565671545c83f4dd9b1517f04929d9d809dbe6a09360" address="unix:///run/containerd/s/9950847e34f2f0582d9546aec8c53b7c70d92dd854c710302cb7fabff055153b" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:16:54.491771 containerd[1579]: time="2025-08-13T00:16:54.491685579Z" level=info msg="connecting to shim b1809508c39d8bb9437dea53e8ce0a7f70d529a37747ad9a6596617cc3002aff" address="unix:///run/containerd/s/0269465aa285ed445c35d790ac602617035501443fa139305cd190be6a3fc07c" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:16:54.515165 systemd[1]: Started cri-containerd-be1a54e00970d33eb892b1e002fa7cd43a625f56ad9bd4a27741d874f2c2b798.scope - libcontainer container be1a54e00970d33eb892b1e002fa7cd43a625f56ad9bd4a27741d874f2c2b798. Aug 13 00:16:54.553844 kubelet[2363]: E0813 00:16:54.553783 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms" Aug 13 00:16:54.619961 systemd[1]: Started cri-containerd-586b76001550dbe67de9565671545c83f4dd9b1517f04929d9d809dbe6a09360.scope - libcontainer container 586b76001550dbe67de9565671545c83f4dd9b1517f04929d9d809dbe6a09360. 
Aug 13 00:16:54.627242 systemd[1]: Started cri-containerd-b1809508c39d8bb9437dea53e8ce0a7f70d529a37747ad9a6596617cc3002aff.scope - libcontainer container b1809508c39d8bb9437dea53e8ce0a7f70d529a37747ad9a6596617cc3002aff. Aug 13 00:16:54.677574 containerd[1579]: time="2025-08-13T00:16:54.677457836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:98f966b8f92114dec57c74a489af7ff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"be1a54e00970d33eb892b1e002fa7cd43a625f56ad9bd4a27741d874f2c2b798\"" Aug 13 00:16:54.685513 kubelet[2363]: E0813 00:16:54.685445 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:54.687236 containerd[1579]: time="2025-08-13T00:16:54.687180407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"586b76001550dbe67de9565671545c83f4dd9b1517f04929d9d809dbe6a09360\"" Aug 13 00:16:54.687935 kubelet[2363]: E0813 00:16:54.687913 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:54.694855 containerd[1579]: time="2025-08-13T00:16:54.694808244Z" level=info msg="CreateContainer within sandbox \"be1a54e00970d33eb892b1e002fa7cd43a625f56ad9bd4a27741d874f2c2b798\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:16:54.696861 containerd[1579]: time="2025-08-13T00:16:54.696834247Z" level=info msg="CreateContainer within sandbox \"586b76001550dbe67de9565671545c83f4dd9b1517f04929d9d809dbe6a09360\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:16:54.697644 containerd[1579]: time="2025-08-13T00:16:54.697614504Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1809508c39d8bb9437dea53e8ce0a7f70d529a37747ad9a6596617cc3002aff\"" Aug 13 00:16:54.698194 kubelet[2363]: E0813 00:16:54.698155 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:54.702347 containerd[1579]: time="2025-08-13T00:16:54.702321292Z" level=info msg="CreateContainer within sandbox \"b1809508c39d8bb9437dea53e8ce0a7f70d529a37747ad9a6596617cc3002aff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:16:54.710993 kubelet[2363]: I0813 00:16:54.710966 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:16:54.712151 containerd[1579]: time="2025-08-13T00:16:54.711151654Z" level=info msg="Container 0ad1cdc10cc44abd10581d461cfcd9777cb92f598df8712cf736412fdf14e62d: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:16:54.712224 kubelet[2363]: E0813 00:16:54.711294 2363 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Aug 13 00:16:54.714836 containerd[1579]: time="2025-08-13T00:16:54.714794159Z" level=info msg="Container 7b0c468d053347b9260e43aee0512bd1d3a463907cb2a0cc4e607b065e226b0d: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:16:54.717739 containerd[1579]: time="2025-08-13T00:16:54.717711209Z" level=info msg="Container 2d413fdc8c663fad36476d199eeb766f40a9f185ba02092280307802bcf0769b: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:16:54.724712 containerd[1579]: time="2025-08-13T00:16:54.724681541Z" level=info msg="CreateContainer within sandbox \"be1a54e00970d33eb892b1e002fa7cd43a625f56ad9bd4a27741d874f2c2b798\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} 
returns container id \"0ad1cdc10cc44abd10581d461cfcd9777cb92f598df8712cf736412fdf14e62d\"" Aug 13 00:16:54.725252 containerd[1579]: time="2025-08-13T00:16:54.725222595Z" level=info msg="StartContainer for \"0ad1cdc10cc44abd10581d461cfcd9777cb92f598df8712cf736412fdf14e62d\"" Aug 13 00:16:54.726412 containerd[1579]: time="2025-08-13T00:16:54.726373072Z" level=info msg="connecting to shim 0ad1cdc10cc44abd10581d461cfcd9777cb92f598df8712cf736412fdf14e62d" address="unix:///run/containerd/s/818a32c6f5bed0f434c7df742461b9124f95bd00b352835998ca08aff203cbd0" protocol=ttrpc version=3 Aug 13 00:16:54.727688 containerd[1579]: time="2025-08-13T00:16:54.727639880Z" level=info msg="CreateContainer within sandbox \"b1809508c39d8bb9437dea53e8ce0a7f70d529a37747ad9a6596617cc3002aff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2d413fdc8c663fad36476d199eeb766f40a9f185ba02092280307802bcf0769b\"" Aug 13 00:16:54.728815 containerd[1579]: time="2025-08-13T00:16:54.728787752Z" level=info msg="StartContainer for \"2d413fdc8c663fad36476d199eeb766f40a9f185ba02092280307802bcf0769b\"" Aug 13 00:16:54.729716 containerd[1579]: time="2025-08-13T00:16:54.729688607Z" level=info msg="connecting to shim 2d413fdc8c663fad36476d199eeb766f40a9f185ba02092280307802bcf0769b" address="unix:///run/containerd/s/0269465aa285ed445c35d790ac602617035501443fa139305cd190be6a3fc07c" protocol=ttrpc version=3 Aug 13 00:16:54.730674 containerd[1579]: time="2025-08-13T00:16:54.730629958Z" level=info msg="CreateContainer within sandbox \"586b76001550dbe67de9565671545c83f4dd9b1517f04929d9d809dbe6a09360\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7b0c468d053347b9260e43aee0512bd1d3a463907cb2a0cc4e607b065e226b0d\"" Aug 13 00:16:54.731350 containerd[1579]: time="2025-08-13T00:16:54.731330194Z" level=info msg="StartContainer for \"7b0c468d053347b9260e43aee0512bd1d3a463907cb2a0cc4e607b065e226b0d\"" Aug 13 00:16:54.732601 containerd[1579]: 
time="2025-08-13T00:16:54.732579719Z" level=info msg="connecting to shim 7b0c468d053347b9260e43aee0512bd1d3a463907cb2a0cc4e607b065e226b0d" address="unix:///run/containerd/s/9950847e34f2f0582d9546aec8c53b7c70d92dd854c710302cb7fabff055153b" protocol=ttrpc version=3 Aug 13 00:16:54.753792 systemd[1]: Started cri-containerd-0ad1cdc10cc44abd10581d461cfcd9777cb92f598df8712cf736412fdf14e62d.scope - libcontainer container 0ad1cdc10cc44abd10581d461cfcd9777cb92f598df8712cf736412fdf14e62d. Aug 13 00:16:54.755063 kubelet[2363]: E0813 00:16:54.755002 2363 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:16:54.755059 systemd[1]: Started cri-containerd-2d413fdc8c663fad36476d199eeb766f40a9f185ba02092280307802bcf0769b.scope - libcontainer container 2d413fdc8c663fad36476d199eeb766f40a9f185ba02092280307802bcf0769b. Aug 13 00:16:54.759738 systemd[1]: Started cri-containerd-7b0c468d053347b9260e43aee0512bd1d3a463907cb2a0cc4e607b065e226b0d.scope - libcontainer container 7b0c468d053347b9260e43aee0512bd1d3a463907cb2a0cc4e607b065e226b0d. 
Aug 13 00:16:54.817696 containerd[1579]: time="2025-08-13T00:16:54.817614694Z" level=info msg="StartContainer for \"2d413fdc8c663fad36476d199eeb766f40a9f185ba02092280307802bcf0769b\" returns successfully" Aug 13 00:16:54.818684 containerd[1579]: time="2025-08-13T00:16:54.818639513Z" level=info msg="StartContainer for \"0ad1cdc10cc44abd10581d461cfcd9777cb92f598df8712cf736412fdf14e62d\" returns successfully" Aug 13 00:16:54.825051 containerd[1579]: time="2025-08-13T00:16:54.825004920Z" level=info msg="StartContainer for \"7b0c468d053347b9260e43aee0512bd1d3a463907cb2a0cc4e607b065e226b0d\" returns successfully" Aug 13 00:16:54.859185 update_engine[1561]: I20250813 00:16:54.857710 1561 update_attempter.cc:509] Updating boot flags... Aug 13 00:16:55.002556 kubelet[2363]: E0813 00:16:55.000975 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:16:55.002556 kubelet[2363]: E0813 00:16:55.001092 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:55.017234 kubelet[2363]: E0813 00:16:55.017090 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:16:55.019979 kubelet[2363]: E0813 00:16:55.019724 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:55.022951 kubelet[2363]: E0813 00:16:55.022932 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:16:55.023327 kubelet[2363]: E0813 00:16:55.023292 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:55.513068 kubelet[2363]: I0813 00:16:55.512944 2363 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:16:56.018324 kubelet[2363]: E0813 00:16:56.018155 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:16:56.018324 kubelet[2363]: E0813 00:16:56.018320 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:56.019326 kubelet[2363]: E0813 00:16:56.019307 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:16:56.019442 kubelet[2363]: E0813 00:16:56.019408 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:56.019524 kubelet[2363]: E0813 00:16:56.019498 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:16:56.019648 kubelet[2363]: E0813 00:16:56.019603 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:56.808691 kubelet[2363]: E0813 00:16:56.808288 2363 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 00:16:56.966717 kubelet[2363]: I0813 00:16:56.966672 2363 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 00:16:56.966717 kubelet[2363]: E0813 
00:16:56.966719 2363 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 00:16:56.976671 kubelet[2363]: E0813 00:16:56.976616 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:16:57.020461 kubelet[2363]: E0813 00:16:57.020414 2363 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:16:57.020916 kubelet[2363]: E0813 00:16:57.020565 2363 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:57.077055 kubelet[2363]: E0813 00:16:57.076911 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:16:57.177127 kubelet[2363]: E0813 00:16:57.177051 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:16:57.277933 kubelet[2363]: E0813 00:16:57.277863 2363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:16:57.455790 kubelet[2363]: I0813 00:16:57.455725 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:16:57.462079 kubelet[2363]: E0813 00:16:57.462032 2363 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 13 00:16:57.462079 kubelet[2363]: I0813 00:16:57.462063 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:16:57.463626 kubelet[2363]: E0813 00:16:57.463598 2363 kubelet.go:3311] "Failed creating a 
mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 13 00:16:57.463626 kubelet[2363]: I0813 00:16:57.463623 2363 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:16:57.465054 kubelet[2363]: E0813 00:16:57.465002 2363 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:16:57.947855 kubelet[2363]: I0813 00:16:57.947790 2363 apiserver.go:52] "Watching apiserver" Aug 13 00:16:57.957813 kubelet[2363]: I0813 00:16:57.957785 2363 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:16:58.761489 systemd[1]: Reload requested from client PID 2669 ('systemctl') (unit session-7.scope)... Aug 13 00:16:58.761505 systemd[1]: Reloading... Aug 13 00:16:58.863745 zram_generator::config[2711]: No configuration found. Aug 13 00:16:59.046231 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:16:59.177746 systemd[1]: Reloading finished in 415 ms. Aug 13 00:16:59.209081 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:16:59.235946 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:16:59.236239 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:16:59.236298 systemd[1]: kubelet.service: Consumed 1.666s CPU time, 131.7M memory peak. Aug 13 00:16:59.238328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:16:59.465131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:16:59.470716 (kubelet)[2757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:16:59.516941 kubelet[2757]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:16:59.516941 kubelet[2757]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:16:59.516941 kubelet[2757]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:16:59.517372 kubelet[2757]: I0813 00:16:59.517010 2757 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:16:59.525449 kubelet[2757]: I0813 00:16:59.525403 2757 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:16:59.525449 kubelet[2757]: I0813 00:16:59.525436 2757 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:16:59.525713 kubelet[2757]: I0813 00:16:59.525639 2757 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:16:59.526883 kubelet[2757]: I0813 00:16:59.526851 2757 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 00:16:59.529084 kubelet[2757]: I0813 00:16:59.529048 2757 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:16:59.533292 kubelet[2757]: I0813 00:16:59.533263 2757 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Aug 13 00:16:59.539996 kubelet[2757]: I0813 00:16:59.539961 2757 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:16:59.540272 kubelet[2757]: I0813 00:16:59.540227 2757 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:16:59.540459 kubelet[2757]: I0813 00:16:59.540261 2757 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVers
ion":2} Aug 13 00:16:59.540549 kubelet[2757]: I0813 00:16:59.540463 2757 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:16:59.540549 kubelet[2757]: I0813 00:16:59.540473 2757 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:16:59.540549 kubelet[2757]: I0813 00:16:59.540518 2757 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:16:59.540760 kubelet[2757]: I0813 00:16:59.540736 2757 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:16:59.540760 kubelet[2757]: I0813 00:16:59.540761 2757 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:16:59.540832 kubelet[2757]: I0813 00:16:59.540786 2757 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:16:59.540907 kubelet[2757]: I0813 00:16:59.540873 2757 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:16:59.543682 kubelet[2757]: I0813 00:16:59.543476 2757 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:16:59.544194 kubelet[2757]: I0813 00:16:59.544177 2757 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:16:59.548673 kubelet[2757]: I0813 00:16:59.548380 2757 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:16:59.548673 kubelet[2757]: I0813 00:16:59.548462 2757 server.go:1289] "Started kubelet" Aug 13 00:16:59.551946 kubelet[2757]: I0813 00:16:59.551798 2757 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:16:59.553044 kubelet[2757]: I0813 00:16:59.553004 2757 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:16:59.555933 kubelet[2757]: I0813 00:16:59.555419 2757 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:16:59.556492 kubelet[2757]: I0813 00:16:59.556447 2757 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:16:59.558235 kubelet[2757]: I0813 00:16:59.558170 2757 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:16:59.558498 kubelet[2757]: I0813 00:16:59.558467 2757 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:16:59.559616 kubelet[2757]: E0813 00:16:59.558945 2757 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:16:59.559616 kubelet[2757]: I0813 00:16:59.559012 2757 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:16:59.559616 kubelet[2757]: I0813 00:16:59.559269 2757 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:16:59.559616 kubelet[2757]: I0813 00:16:59.559389 2757 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:16:59.564229 kubelet[2757]: I0813 00:16:59.564211 2757 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:16:59.564292 kubelet[2757]: I0813 00:16:59.564283 2757 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:16:59.564410 kubelet[2757]: I0813 00:16:59.564392 2757 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:16:59.564527 kubelet[2757]: E0813 00:16:59.564488 2757 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:16:59.577101 kubelet[2757]: I0813 00:16:59.577041 2757 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Aug 13 00:16:59.579270 kubelet[2757]: I0813 00:16:59.579245 2757 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 00:16:59.579326 kubelet[2757]: I0813 00:16:59.579277 2757 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:16:59.579326 kubelet[2757]: I0813 00:16:59.579305 2757 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:16:59.579326 kubelet[2757]: I0813 00:16:59.579312 2757 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:16:59.579402 kubelet[2757]: E0813 00:16:59.579355 2757 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:16:59.603457 kubelet[2757]: I0813 00:16:59.603417 2757 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:16:59.603457 kubelet[2757]: I0813 00:16:59.603441 2757 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:16:59.603457 kubelet[2757]: I0813 00:16:59.603468 2757 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:16:59.603671 kubelet[2757]: I0813 00:16:59.603630 2757 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:16:59.603717 kubelet[2757]: I0813 00:16:59.603672 2757 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:16:59.603717 kubelet[2757]: I0813 00:16:59.603705 2757 policy_none.go:49] "None policy: Start" Aug 13 00:16:59.603717 kubelet[2757]: I0813 00:16:59.603716 2757 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:16:59.603790 kubelet[2757]: I0813 00:16:59.603729 2757 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:16:59.603861 kubelet[2757]: I0813 00:16:59.603842 2757 state_mem.go:75] "Updated machine memory state" Aug 13 00:16:59.608676 kubelet[2757]: E0813 00:16:59.608632 2757 manager.go:517] "Failed to read 
data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:16:59.608942 kubelet[2757]: I0813 00:16:59.608923 2757 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:16:59.608984 kubelet[2757]: I0813 00:16:59.608939 2757 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:16:59.609288 kubelet[2757]: I0813 00:16:59.609148 2757 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:16:59.610771 kubelet[2757]: E0813 00:16:59.610746 2757 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:16:59.680255 kubelet[2757]: I0813 00:16:59.680202 2757 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:16:59.680533 kubelet[2757]: I0813 00:16:59.680285 2757 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:16:59.680761 kubelet[2757]: I0813 00:16:59.680359 2757 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:16:59.714778 kubelet[2757]: I0813 00:16:59.714736 2757 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:16:59.721226 kubelet[2757]: I0813 00:16:59.721095 2757 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Aug 13 00:16:59.721226 kubelet[2757]: I0813 00:16:59.721191 2757 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 00:16:59.761316 kubelet[2757]: I0813 00:16:59.761259 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:16:59.761316 kubelet[2757]: I0813 00:16:59.761297 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:16:59.761511 kubelet[2757]: I0813 00:16:59.761386 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98f966b8f92114dec57c74a489af7ff0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"98f966b8f92114dec57c74a489af7ff0\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:16:59.761511 kubelet[2757]: I0813 00:16:59.761424 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98f966b8f92114dec57c74a489af7ff0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"98f966b8f92114dec57c74a489af7ff0\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:16:59.761511 kubelet[2757]: I0813 00:16:59.761462 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:16:59.761511 kubelet[2757]: I0813 00:16:59.761484 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:16:59.761607 kubelet[2757]: I0813 00:16:59.761512 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:16:59.761607 kubelet[2757]: I0813 00:16:59.761541 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98f966b8f92114dec57c74a489af7ff0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"98f966b8f92114dec57c74a489af7ff0\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:16:59.761607 kubelet[2757]: I0813 00:16:59.761559 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:16:59.985573 kubelet[2757]: E0813 00:16:59.985414 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:59.988596 kubelet[2757]: E0813 00:16:59.988563 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:59.988800 kubelet[2757]: E0813 00:16:59.988633 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:00.542386 kubelet[2757]: I0813 00:17:00.542292 2757 apiserver.go:52] "Watching apiserver" Aug 13 00:17:00.559808 kubelet[2757]: I0813 00:17:00.559769 2757 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:17:00.597524 kubelet[2757]: E0813 00:17:00.597478 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:00.597875 kubelet[2757]: I0813 00:17:00.597851 2757 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:17:00.598115 kubelet[2757]: I0813 00:17:00.598097 2757 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:17:00.605532 kubelet[2757]: E0813 00:17:00.605488 2757 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 00:17:00.605962 kubelet[2757]: E0813 00:17:00.605694 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:00.605962 kubelet[2757]: E0813 00:17:00.605497 2757 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 00:17:00.605962 kubelet[2757]: E0813 00:17:00.605835 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:00.621568 kubelet[2757]: I0813 00:17:00.621485 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6214616259999999 
podStartE2EDuration="1.621461626s" podCreationTimestamp="2025-08-13 00:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:17:00.615487242 +0000 UTC m=+1.140037099" watchObservedRunningTime="2025-08-13 00:17:00.621461626 +0000 UTC m=+1.146011483" Aug 13 00:17:00.621786 kubelet[2757]: I0813 00:17:00.621620 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.62161288 podStartE2EDuration="1.62161288s" podCreationTimestamp="2025-08-13 00:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:17:00.621613592 +0000 UTC m=+1.146163449" watchObservedRunningTime="2025-08-13 00:17:00.62161288 +0000 UTC m=+1.146162737" Aug 13 00:17:00.627905 kubelet[2757]: I0813 00:17:00.627859 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.62784478 podStartE2EDuration="1.62784478s" podCreationTimestamp="2025-08-13 00:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:17:00.62772306 +0000 UTC m=+1.152272917" watchObservedRunningTime="2025-08-13 00:17:00.62784478 +0000 UTC m=+1.152394637" Aug 13 00:17:01.599694 kubelet[2757]: E0813 00:17:01.599299 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:01.600124 kubelet[2757]: E0813 00:17:01.599915 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:02.949722 kubelet[2757]: E0813 00:17:02.949648 
2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:04.229540 kubelet[2757]: I0813 00:17:04.229492 2757 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:17:04.230076 containerd[1579]: time="2025-08-13T00:17:04.229889444Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:17:04.230340 kubelet[2757]: I0813 00:17:04.230083 2757 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:17:04.439463 kubelet[2757]: E0813 00:17:04.439419 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:04.604498 kubelet[2757]: E0813 00:17:04.604445 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:05.275724 systemd[1]: Created slice kubepods-besteffort-podac3e62f7_752d_46d4_85fc_5589d7c3224b.slice - libcontainer container kubepods-besteffort-podac3e62f7_752d_46d4_85fc_5589d7c3224b.slice. 
Aug 13 00:17:05.295216 kubelet[2757]: I0813 00:17:05.295179 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac3e62f7-752d-46d4-85fc-5589d7c3224b-kube-proxy\") pod \"kube-proxy-mfkjk\" (UID: \"ac3e62f7-752d-46d4-85fc-5589d7c3224b\") " pod="kube-system/kube-proxy-mfkjk" Aug 13 00:17:05.295216 kubelet[2757]: I0813 00:17:05.295214 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac3e62f7-752d-46d4-85fc-5589d7c3224b-xtables-lock\") pod \"kube-proxy-mfkjk\" (UID: \"ac3e62f7-752d-46d4-85fc-5589d7c3224b\") " pod="kube-system/kube-proxy-mfkjk" Aug 13 00:17:05.295216 kubelet[2757]: I0813 00:17:05.295231 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac3e62f7-752d-46d4-85fc-5589d7c3224b-lib-modules\") pod \"kube-proxy-mfkjk\" (UID: \"ac3e62f7-752d-46d4-85fc-5589d7c3224b\") " pod="kube-system/kube-proxy-mfkjk" Aug 13 00:17:05.295637 kubelet[2757]: I0813 00:17:05.295245 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shd74\" (UniqueName: \"kubernetes.io/projected/ac3e62f7-752d-46d4-85fc-5589d7c3224b-kube-api-access-shd74\") pod \"kube-proxy-mfkjk\" (UID: \"ac3e62f7-752d-46d4-85fc-5589d7c3224b\") " pod="kube-system/kube-proxy-mfkjk" Aug 13 00:17:05.345418 systemd[1]: Created slice kubepods-besteffort-podcc9be4bf_9eb7_4d21_964b_146b77b8a213.slice - libcontainer container kubepods-besteffort-podcc9be4bf_9eb7_4d21_964b_146b77b8a213.slice. 
Aug 13 00:17:05.395871 kubelet[2757]: I0813 00:17:05.395818 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cc9be4bf-9eb7-4d21-964b-146b77b8a213-var-lib-calico\") pod \"tigera-operator-747864d56d-lvv6h\" (UID: \"cc9be4bf-9eb7-4d21-964b-146b77b8a213\") " pod="tigera-operator/tigera-operator-747864d56d-lvv6h" Aug 13 00:17:05.396028 kubelet[2757]: I0813 00:17:05.395894 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ljh4\" (UniqueName: \"kubernetes.io/projected/cc9be4bf-9eb7-4d21-964b-146b77b8a213-kube-api-access-6ljh4\") pod \"tigera-operator-747864d56d-lvv6h\" (UID: \"cc9be4bf-9eb7-4d21-964b-146b77b8a213\") " pod="tigera-operator/tigera-operator-747864d56d-lvv6h" Aug 13 00:17:05.589072 kubelet[2757]: E0813 00:17:05.588939 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:05.589637 containerd[1579]: time="2025-08-13T00:17:05.589575827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mfkjk,Uid:ac3e62f7-752d-46d4-85fc-5589d7c3224b,Namespace:kube-system,Attempt:0,}" Aug 13 00:17:05.609420 containerd[1579]: time="2025-08-13T00:17:05.609364091Z" level=info msg="connecting to shim 80ffeafcee1a0f921af39fb3cd79a1d6ad3c8933e9b4b19b6982c4a56f01daf6" address="unix:///run/containerd/s/6fabf474a194aebc94dea9375dc10193ec697c652a734b2a4dc2ff12991d7964" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:17:05.641849 systemd[1]: Started cri-containerd-80ffeafcee1a0f921af39fb3cd79a1d6ad3c8933e9b4b19b6982c4a56f01daf6.scope - libcontainer container 80ffeafcee1a0f921af39fb3cd79a1d6ad3c8933e9b4b19b6982c4a56f01daf6. 
Aug 13 00:17:05.650214 containerd[1579]: time="2025-08-13T00:17:05.650157721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-lvv6h,Uid:cc9be4bf-9eb7-4d21-964b-146b77b8a213,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:17:05.672795 containerd[1579]: time="2025-08-13T00:17:05.672685876Z" level=info msg="connecting to shim 77ef96c528923d9cdd49a0c30beaf4491251d62c0f163b4d243e1c7baf00e7da" address="unix:///run/containerd/s/f6d1f1dcf48ff68bebebf8cfb539d275da520c8e7b0d4cf13b0663ff9f9d008f" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:17:05.677083 containerd[1579]: time="2025-08-13T00:17:05.677036912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mfkjk,Uid:ac3e62f7-752d-46d4-85fc-5589d7c3224b,Namespace:kube-system,Attempt:0,} returns sandbox id \"80ffeafcee1a0f921af39fb3cd79a1d6ad3c8933e9b4b19b6982c4a56f01daf6\"" Aug 13 00:17:05.679780 kubelet[2757]: E0813 00:17:05.679759 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:05.703811 systemd[1]: Started cri-containerd-77ef96c528923d9cdd49a0c30beaf4491251d62c0f163b4d243e1c7baf00e7da.scope - libcontainer container 77ef96c528923d9cdd49a0c30beaf4491251d62c0f163b4d243e1c7baf00e7da. 
Aug 13 00:17:05.729544 containerd[1579]: time="2025-08-13T00:17:05.729140436Z" level=info msg="CreateContainer within sandbox \"80ffeafcee1a0f921af39fb3cd79a1d6ad3c8933e9b4b19b6982c4a56f01daf6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:17:05.796499 containerd[1579]: time="2025-08-13T00:17:05.796394619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-lvv6h,Uid:cc9be4bf-9eb7-4d21-964b-146b77b8a213,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"77ef96c528923d9cdd49a0c30beaf4491251d62c0f163b4d243e1c7baf00e7da\"" Aug 13 00:17:05.798127 containerd[1579]: time="2025-08-13T00:17:05.798081135Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:17:05.807546 containerd[1579]: time="2025-08-13T00:17:05.807489267Z" level=info msg="Container 2ab40140e636c04f1dcd4e57c8057d9410e666630f22e2bf81d57276403ca2a2: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:17:05.818840 containerd[1579]: time="2025-08-13T00:17:05.818786748Z" level=info msg="CreateContainer within sandbox \"80ffeafcee1a0f921af39fb3cd79a1d6ad3c8933e9b4b19b6982c4a56f01daf6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2ab40140e636c04f1dcd4e57c8057d9410e666630f22e2bf81d57276403ca2a2\"" Aug 13 00:17:05.819576 containerd[1579]: time="2025-08-13T00:17:05.819546498Z" level=info msg="StartContainer for \"2ab40140e636c04f1dcd4e57c8057d9410e666630f22e2bf81d57276403ca2a2\"" Aug 13 00:17:05.821821 containerd[1579]: time="2025-08-13T00:17:05.821347190Z" level=info msg="connecting to shim 2ab40140e636c04f1dcd4e57c8057d9410e666630f22e2bf81d57276403ca2a2" address="unix:///run/containerd/s/6fabf474a194aebc94dea9375dc10193ec697c652a734b2a4dc2ff12991d7964" protocol=ttrpc version=3 Aug 13 00:17:05.850813 systemd[1]: Started cri-containerd-2ab40140e636c04f1dcd4e57c8057d9410e666630f22e2bf81d57276403ca2a2.scope - libcontainer container 2ab40140e636c04f1dcd4e57c8057d9410e666630f22e2bf81d57276403ca2a2. 
Aug 13 00:17:05.896319 containerd[1579]: time="2025-08-13T00:17:05.896269967Z" level=info msg="StartContainer for \"2ab40140e636c04f1dcd4e57c8057d9410e666630f22e2bf81d57276403ca2a2\" returns successfully" Aug 13 00:17:06.612616 kubelet[2757]: E0813 00:17:06.612578 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:06.622451 kubelet[2757]: I0813 00:17:06.622381 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mfkjk" podStartSLOduration=1.622359855 podStartE2EDuration="1.622359855s" podCreationTimestamp="2025-08-13 00:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:17:06.622201395 +0000 UTC m=+7.146751273" watchObservedRunningTime="2025-08-13 00:17:06.622359855 +0000 UTC m=+7.146909712" Aug 13 00:17:07.592793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1899861483.mount: Deactivated successfully. 
Aug 13 00:17:09.439103 kubelet[2757]: E0813 00:17:09.439025 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:09.494502 containerd[1579]: time="2025-08-13T00:17:09.494443083Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:09.495288 containerd[1579]: time="2025-08-13T00:17:09.495254651Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 00:17:09.496545 containerd[1579]: time="2025-08-13T00:17:09.496519971Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:09.498569 containerd[1579]: time="2025-08-13T00:17:09.498496130Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:09.499126 containerd[1579]: time="2025-08-13T00:17:09.499082483Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.700970249s" Aug 13 00:17:09.499171 containerd[1579]: time="2025-08-13T00:17:09.499124291Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 00:17:09.503384 containerd[1579]: time="2025-08-13T00:17:09.503344283Z" level=info msg="CreateContainer within sandbox 
\"77ef96c528923d9cdd49a0c30beaf4491251d62c0f163b4d243e1c7baf00e7da\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:17:09.543944 containerd[1579]: time="2025-08-13T00:17:09.543878735Z" level=info msg="Container 85bd70260cdb0abade1d9c51cdd6b09c69a73bff17813e58de9c133ac1e1cf6d: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:17:09.569588 containerd[1579]: time="2025-08-13T00:17:09.569545151Z" level=info msg="CreateContainer within sandbox \"77ef96c528923d9cdd49a0c30beaf4491251d62c0f163b4d243e1c7baf00e7da\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"85bd70260cdb0abade1d9c51cdd6b09c69a73bff17813e58de9c133ac1e1cf6d\"" Aug 13 00:17:09.570033 containerd[1579]: time="2025-08-13T00:17:09.570006188Z" level=info msg="StartContainer for \"85bd70260cdb0abade1d9c51cdd6b09c69a73bff17813e58de9c133ac1e1cf6d\"" Aug 13 00:17:09.571242 containerd[1579]: time="2025-08-13T00:17:09.571105437Z" level=info msg="connecting to shim 85bd70260cdb0abade1d9c51cdd6b09c69a73bff17813e58de9c133ac1e1cf6d" address="unix:///run/containerd/s/f6d1f1dcf48ff68bebebf8cfb539d275da520c8e7b0d4cf13b0663ff9f9d008f" protocol=ttrpc version=3 Aug 13 00:17:09.619001 kubelet[2757]: E0813 00:17:09.618963 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:09.626904 systemd[1]: Started cri-containerd-85bd70260cdb0abade1d9c51cdd6b09c69a73bff17813e58de9c133ac1e1cf6d.scope - libcontainer container 85bd70260cdb0abade1d9c51cdd6b09c69a73bff17813e58de9c133ac1e1cf6d. 
Aug 13 00:17:09.663903 containerd[1579]: time="2025-08-13T00:17:09.663861283Z" level=info msg="StartContainer for \"85bd70260cdb0abade1d9c51cdd6b09c69a73bff17813e58de9c133ac1e1cf6d\" returns successfully" Aug 13 00:17:10.625180 kubelet[2757]: E0813 00:17:10.624851 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:10.632771 kubelet[2757]: I0813 00:17:10.632683 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-lvv6h" podStartSLOduration=1.930394466 podStartE2EDuration="5.632644494s" podCreationTimestamp="2025-08-13 00:17:05 +0000 UTC" firstStartedPulling="2025-08-13 00:17:05.797609998 +0000 UTC m=+6.322159855" lastFinishedPulling="2025-08-13 00:17:09.499860026 +0000 UTC m=+10.024409883" observedRunningTime="2025-08-13 00:17:10.632461369 +0000 UTC m=+11.157011227" watchObservedRunningTime="2025-08-13 00:17:10.632644494 +0000 UTC m=+11.157194351" Aug 13 00:17:12.963773 kubelet[2757]: E0813 00:17:12.963732 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:14.767321 sudo[1779]: pam_unix(sudo:session): session closed for user root Aug 13 00:17:14.773955 sshd[1778]: Connection closed by 10.0.0.1 port 54814 Aug 13 00:17:14.772782 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Aug 13 00:17:14.776444 systemd-logind[1557]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:17:14.777503 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:54814.service: Deactivated successfully. Aug 13 00:17:14.782148 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:17:14.782623 systemd[1]: session-7.scope: Consumed 5.916s CPU time, 225.7M memory peak. 
Aug 13 00:17:14.786946 systemd-logind[1557]: Removed session 7. Aug 13 00:17:17.216591 systemd[1]: Created slice kubepods-besteffort-podecbe7677_bbd7_4362_8edb_01b62d6ff508.slice - libcontainer container kubepods-besteffort-podecbe7677_bbd7_4362_8edb_01b62d6ff508.slice. Aug 13 00:17:17.269495 kubelet[2757]: I0813 00:17:17.269318 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecbe7677-bbd7-4362-8edb-01b62d6ff508-tigera-ca-bundle\") pod \"calico-typha-b5db85bd8-wkt4r\" (UID: \"ecbe7677-bbd7-4362-8edb-01b62d6ff508\") " pod="calico-system/calico-typha-b5db85bd8-wkt4r" Aug 13 00:17:17.269495 kubelet[2757]: I0813 00:17:17.269395 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj7pw\" (UniqueName: \"kubernetes.io/projected/ecbe7677-bbd7-4362-8edb-01b62d6ff508-kube-api-access-cj7pw\") pod \"calico-typha-b5db85bd8-wkt4r\" (UID: \"ecbe7677-bbd7-4362-8edb-01b62d6ff508\") " pod="calico-system/calico-typha-b5db85bd8-wkt4r" Aug 13 00:17:17.269495 kubelet[2757]: I0813 00:17:17.269413 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ecbe7677-bbd7-4362-8edb-01b62d6ff508-typha-certs\") pod \"calico-typha-b5db85bd8-wkt4r\" (UID: \"ecbe7677-bbd7-4362-8edb-01b62d6ff508\") " pod="calico-system/calico-typha-b5db85bd8-wkt4r" Aug 13 00:17:17.425340 systemd[1]: Created slice kubepods-besteffort-podd8c3fd38_22dd_417e_8145_b3c9f95ba6cb.slice - libcontainer container kubepods-besteffort-podd8c3fd38_22dd_417e_8145_b3c9f95ba6cb.slice. 
Aug 13 00:17:17.470538 kubelet[2757]: I0813 00:17:17.470422 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d8c3fd38-22dd-417e-8145-b3c9f95ba6cb-policysync\") pod \"calico-node-km9sc\" (UID: \"d8c3fd38-22dd-417e-8145-b3c9f95ba6cb\") " pod="calico-system/calico-node-km9sc" Aug 13 00:17:17.471060 kubelet[2757]: I0813 00:17:17.470728 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d8c3fd38-22dd-417e-8145-b3c9f95ba6cb-var-lib-calico\") pod \"calico-node-km9sc\" (UID: \"d8c3fd38-22dd-417e-8145-b3c9f95ba6cb\") " pod="calico-system/calico-node-km9sc" Aug 13 00:17:17.471060 kubelet[2757]: I0813 00:17:17.470751 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d8c3fd38-22dd-417e-8145-b3c9f95ba6cb-cni-bin-dir\") pod \"calico-node-km9sc\" (UID: \"d8c3fd38-22dd-417e-8145-b3c9f95ba6cb\") " pod="calico-system/calico-node-km9sc" Aug 13 00:17:17.471060 kubelet[2757]: I0813 00:17:17.470791 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d8c3fd38-22dd-417e-8145-b3c9f95ba6cb-var-run-calico\") pod \"calico-node-km9sc\" (UID: \"d8c3fd38-22dd-417e-8145-b3c9f95ba6cb\") " pod="calico-system/calico-node-km9sc" Aug 13 00:17:17.471060 kubelet[2757]: I0813 00:17:17.470804 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8c3fd38-22dd-417e-8145-b3c9f95ba6cb-xtables-lock\") pod \"calico-node-km9sc\" (UID: \"d8c3fd38-22dd-417e-8145-b3c9f95ba6cb\") " pod="calico-system/calico-node-km9sc" Aug 13 00:17:17.471060 kubelet[2757]: I0813 00:17:17.470824 2757 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltfvz\" (UniqueName: \"kubernetes.io/projected/d8c3fd38-22dd-417e-8145-b3c9f95ba6cb-kube-api-access-ltfvz\") pod \"calico-node-km9sc\" (UID: \"d8c3fd38-22dd-417e-8145-b3c9f95ba6cb\") " pod="calico-system/calico-node-km9sc" Aug 13 00:17:17.471218 kubelet[2757]: I0813 00:17:17.470845 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d8c3fd38-22dd-417e-8145-b3c9f95ba6cb-cni-net-dir\") pod \"calico-node-km9sc\" (UID: \"d8c3fd38-22dd-417e-8145-b3c9f95ba6cb\") " pod="calico-system/calico-node-km9sc" Aug 13 00:17:17.471218 kubelet[2757]: I0813 00:17:17.470898 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d8c3fd38-22dd-417e-8145-b3c9f95ba6cb-flexvol-driver-host\") pod \"calico-node-km9sc\" (UID: \"d8c3fd38-22dd-417e-8145-b3c9f95ba6cb\") " pod="calico-system/calico-node-km9sc" Aug 13 00:17:17.471218 kubelet[2757]: I0813 00:17:17.470913 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d8c3fd38-22dd-417e-8145-b3c9f95ba6cb-cni-log-dir\") pod \"calico-node-km9sc\" (UID: \"d8c3fd38-22dd-417e-8145-b3c9f95ba6cb\") " pod="calico-system/calico-node-km9sc" Aug 13 00:17:17.471218 kubelet[2757]: I0813 00:17:17.470955 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8c3fd38-22dd-417e-8145-b3c9f95ba6cb-lib-modules\") pod \"calico-node-km9sc\" (UID: \"d8c3fd38-22dd-417e-8145-b3c9f95ba6cb\") " pod="calico-system/calico-node-km9sc" Aug 13 00:17:17.471218 kubelet[2757]: I0813 00:17:17.470972 2757 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d8c3fd38-22dd-417e-8145-b3c9f95ba6cb-node-certs\") pod \"calico-node-km9sc\" (UID: \"d8c3fd38-22dd-417e-8145-b3c9f95ba6cb\") " pod="calico-system/calico-node-km9sc" Aug 13 00:17:17.471341 kubelet[2757]: I0813 00:17:17.471003 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8c3fd38-22dd-417e-8145-b3c9f95ba6cb-tigera-ca-bundle\") pod \"calico-node-km9sc\" (UID: \"d8c3fd38-22dd-417e-8145-b3c9f95ba6cb\") " pod="calico-system/calico-node-km9sc" Aug 13 00:17:17.521558 kubelet[2757]: E0813 00:17:17.521471 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:17.522257 containerd[1579]: time="2025-08-13T00:17:17.522218517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b5db85bd8-wkt4r,Uid:ecbe7677-bbd7-4362-8edb-01b62d6ff508,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:17.567483 containerd[1579]: time="2025-08-13T00:17:17.567411724Z" level=info msg="connecting to shim 2b199d26b7e1cd960933ec0f882ea8a9f762486b393540805c653df4f656c974" address="unix:///run/containerd/s/722317106342c8a58a9dc81350bcd7b7b59b8af3cd4e329da589ddb2abb84cf1" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:17:17.584527 kubelet[2757]: E0813 00:17:17.583879 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.584527 kubelet[2757]: W0813 00:17:17.584110 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.585315 kubelet[2757]: E0813 00:17:17.585276 2757 
plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:17.585845 kubelet[2757]: E0813 00:17:17.585818 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.585845 kubelet[2757]: W0813 00:17:17.585835 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.585845 kubelet[2757]: E0813 00:17:17.585846 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:17.588981 kubelet[2757]: E0813 00:17:17.588963 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.588981 kubelet[2757]: W0813 00:17:17.588976 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.589071 kubelet[2757]: E0813 00:17:17.588988 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:17.606798 systemd[1]: Started cri-containerd-2b199d26b7e1cd960933ec0f882ea8a9f762486b393540805c653df4f656c974.scope - libcontainer container 2b199d26b7e1cd960933ec0f882ea8a9f762486b393540805c653df4f656c974. 
Aug 13 00:17:17.697175 containerd[1579]: time="2025-08-13T00:17:17.697124292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b5db85bd8-wkt4r,Uid:ecbe7677-bbd7-4362-8edb-01b62d6ff508,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b199d26b7e1cd960933ec0f882ea8a9f762486b393540805c653df4f656c974\""
Aug 13 00:17:17.697938 kubelet[2757]: E0813 00:17:17.697912 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:17.698757 containerd[1579]: time="2025-08-13T00:17:17.698717446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Aug 13 00:17:17.729419 containerd[1579]: time="2025-08-13T00:17:17.729172280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-km9sc,Uid:d8c3fd38-22dd-417e-8145-b3c9f95ba6cb,Namespace:calico-system,Attempt:0,}"
Aug 13 00:17:17.730649 kubelet[2757]: E0813 00:17:17.730508 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkrpf" podUID="3471d1d5-f5a3-4dec-8445-1557d86e0087"
Aug 13 00:17:17.763762 containerd[1579]: time="2025-08-13T00:17:17.763681632Z" level=info msg="connecting to shim 172dc71b33ca06dd2905b8d379d72da2aa6029099ae5caaa3369408787f6e506" address="unix:///run/containerd/s/c0d390c841f5bfff6f7ce16d3bba2cd6e8d9fc8bd5c330b52a616653f924e87b" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:17:17.765646 kubelet[2757]: E0813 00:17:17.765600 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.765739 kubelet[2757]: W0813 00:17:17.765640 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.765739 kubelet[2757]: E0813 00:17:17.765696 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.765961 kubelet[2757]: E0813 00:17:17.765936 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.765961 kubelet[2757]: W0813 00:17:17.765950 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.765961 kubelet[2757]: E0813 00:17:17.765961 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.766165 kubelet[2757]: E0813 00:17:17.766151 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.766165 kubelet[2757]: W0813 00:17:17.766160 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.766234 kubelet[2757]: E0813 00:17:17.766169 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.766729 kubelet[2757]: E0813 00:17:17.766710 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.766729 kubelet[2757]: W0813 00:17:17.766722 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.766729 kubelet[2757]: E0813 00:17:17.766732 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.767013 kubelet[2757]: E0813 00:17:17.766984 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.767013 kubelet[2757]: W0813 00:17:17.766995 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.767013 kubelet[2757]: E0813 00:17:17.767003 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.767237 kubelet[2757]: E0813 00:17:17.767208 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.767355 kubelet[2757]: W0813 00:17:17.767294 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.767492 kubelet[2757]: E0813 00:17:17.767317 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.767894 kubelet[2757]: E0813 00:17:17.767801 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.767894 kubelet[2757]: W0813 00:17:17.767818 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.767894 kubelet[2757]: E0813 00:17:17.767830 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.768190 kubelet[2757]: E0813 00:17:17.768151 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.768263 kubelet[2757]: W0813 00:17:17.768234 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.768263 kubelet[2757]: E0813 00:17:17.768259 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.768789 kubelet[2757]: E0813 00:17:17.768620 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.768789 kubelet[2757]: W0813 00:17:17.768637 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.768789 kubelet[2757]: E0813 00:17:17.768649 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.768995 kubelet[2757]: E0813 00:17:17.768973 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.769051 kubelet[2757]: W0813 00:17:17.769025 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.769051 kubelet[2757]: E0813 00:17:17.769037 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.769366 kubelet[2757]: E0813 00:17:17.769314 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.769366 kubelet[2757]: W0813 00:17:17.769350 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.769366 kubelet[2757]: E0813 00:17:17.769362 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.769644 kubelet[2757]: E0813 00:17:17.769618 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.769644 kubelet[2757]: W0813 00:17:17.769631 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.769644 kubelet[2757]: E0813 00:17:17.769641 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.770091 kubelet[2757]: E0813 00:17:17.770066 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.770091 kubelet[2757]: W0813 00:17:17.770079 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.770091 kubelet[2757]: E0813 00:17:17.770088 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.770948 kubelet[2757]: E0813 00:17:17.770923 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.770948 kubelet[2757]: W0813 00:17:17.770940 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.770948 kubelet[2757]: E0813 00:17:17.770952 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.771226 kubelet[2757]: E0813 00:17:17.771198 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.771226 kubelet[2757]: W0813 00:17:17.771218 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.771298 kubelet[2757]: E0813 00:17:17.771231 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.772709 kubelet[2757]: E0813 00:17:17.771760 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.772709 kubelet[2757]: W0813 00:17:17.771777 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.772709 kubelet[2757]: E0813 00:17:17.771788 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.772709 kubelet[2757]: E0813 00:17:17.772220 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.772709 kubelet[2757]: W0813 00:17:17.772232 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.772709 kubelet[2757]: E0813 00:17:17.772244 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.772709 kubelet[2757]: E0813 00:17:17.772482 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.772709 kubelet[2757]: W0813 00:17:17.772491 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.772709 kubelet[2757]: E0813 00:17:17.772501 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.772961 kubelet[2757]: E0813 00:17:17.772798 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.772961 kubelet[2757]: W0813 00:17:17.772808 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.772961 kubelet[2757]: E0813 00:17:17.772817 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.773830 kubelet[2757]: E0813 00:17:17.773790 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.773830 kubelet[2757]: W0813 00:17:17.773807 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.773830 kubelet[2757]: E0813 00:17:17.773817 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.775139 kubelet[2757]: E0813 00:17:17.775102 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.775139 kubelet[2757]: W0813 00:17:17.775117 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.775139 kubelet[2757]: E0813 00:17:17.775127 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.775285 kubelet[2757]: I0813 00:17:17.775154 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5565\" (UniqueName: \"kubernetes.io/projected/3471d1d5-f5a3-4dec-8445-1557d86e0087-kube-api-access-j5565\") pod \"csi-node-driver-wkrpf\" (UID: \"3471d1d5-f5a3-4dec-8445-1557d86e0087\") " pod="calico-system/csi-node-driver-wkrpf"
Aug 13 00:17:17.775524 kubelet[2757]: E0813 00:17:17.775478 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.775524 kubelet[2757]: W0813 00:17:17.775496 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.775524 kubelet[2757]: E0813 00:17:17.775507 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.775607 kubelet[2757]: I0813 00:17:17.775540 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3471d1d5-f5a3-4dec-8445-1557d86e0087-socket-dir\") pod \"csi-node-driver-wkrpf\" (UID: \"3471d1d5-f5a3-4dec-8445-1557d86e0087\") " pod="calico-system/csi-node-driver-wkrpf"
Aug 13 00:17:17.776006 kubelet[2757]: E0813 00:17:17.775940 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.776058 kubelet[2757]: W0813 00:17:17.775999 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.776058 kubelet[2757]: E0813 00:17:17.776025 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.776101 kubelet[2757]: I0813 00:17:17.776081 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3471d1d5-f5a3-4dec-8445-1557d86e0087-kubelet-dir\") pod \"csi-node-driver-wkrpf\" (UID: \"3471d1d5-f5a3-4dec-8445-1557d86e0087\") " pod="calico-system/csi-node-driver-wkrpf"
Aug 13 00:17:17.776560 kubelet[2757]: E0813 00:17:17.776540 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.776560 kubelet[2757]: W0813 00:17:17.776555 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.776618 kubelet[2757]: E0813 00:17:17.776565 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.776761 kubelet[2757]: I0813 00:17:17.776742 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3471d1d5-f5a3-4dec-8445-1557d86e0087-registration-dir\") pod \"csi-node-driver-wkrpf\" (UID: \"3471d1d5-f5a3-4dec-8445-1557d86e0087\") " pod="calico-system/csi-node-driver-wkrpf"
Aug 13 00:17:17.776995 kubelet[2757]: E0813 00:17:17.776972 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.776995 kubelet[2757]: W0813 00:17:17.776986 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.776995 kubelet[2757]: E0813 00:17:17.776997 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.777301 kubelet[2757]: E0813 00:17:17.777278 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.777301 kubelet[2757]: W0813 00:17:17.777294 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.777392 kubelet[2757]: E0813 00:17:17.777307 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.777634 kubelet[2757]: E0813 00:17:17.777614 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.777634 kubelet[2757]: W0813 00:17:17.777627 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.777634 kubelet[2757]: E0813 00:17:17.777636 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.777968 kubelet[2757]: E0813 00:17:17.777949 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.777968 kubelet[2757]: W0813 00:17:17.777962 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.778032 kubelet[2757]: E0813 00:17:17.777973 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.778590 kubelet[2757]: E0813 00:17:17.778370 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.778590 kubelet[2757]: W0813 00:17:17.778388 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.778590 kubelet[2757]: E0813 00:17:17.778400 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.778800 kubelet[2757]: E0813 00:17:17.778768 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.778800 kubelet[2757]: W0813 00:17:17.778795 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.778862 kubelet[2757]: E0813 00:17:17.778809 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.778958 kubelet[2757]: I0813 00:17:17.778935 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3471d1d5-f5a3-4dec-8445-1557d86e0087-varrun\") pod \"csi-node-driver-wkrpf\" (UID: \"3471d1d5-f5a3-4dec-8445-1557d86e0087\") " pod="calico-system/csi-node-driver-wkrpf"
Aug 13 00:17:17.779191 kubelet[2757]: E0813 00:17:17.779168 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.779191 kubelet[2757]: W0813 00:17:17.779190 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.779261 kubelet[2757]: E0813 00:17:17.779202 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.779584 kubelet[2757]: E0813 00:17:17.779566 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.779584 kubelet[2757]: W0813 00:17:17.779579 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.779646 kubelet[2757]: E0813 00:17:17.779611 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.779973 kubelet[2757]: E0813 00:17:17.779881 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.779973 kubelet[2757]: W0813 00:17:17.779897 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.779973 kubelet[2757]: E0813 00:17:17.779930 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.780305 kubelet[2757]: E0813 00:17:17.780285 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.780305 kubelet[2757]: W0813 00:17:17.780298 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.780305 kubelet[2757]: E0813 00:17:17.780308 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.780629 kubelet[2757]: E0813 00:17:17.780609 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.780629 kubelet[2757]: W0813 00:17:17.780622 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.780629 kubelet[2757]: E0813 00:17:17.780632 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.798841 systemd[1]: Started cri-containerd-172dc71b33ca06dd2905b8d379d72da2aa6029099ae5caaa3369408787f6e506.scope - libcontainer container 172dc71b33ca06dd2905b8d379d72da2aa6029099ae5caaa3369408787f6e506.
Aug 13 00:17:17.831935 containerd[1579]: time="2025-08-13T00:17:17.831878042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-km9sc,Uid:d8c3fd38-22dd-417e-8145-b3c9f95ba6cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"172dc71b33ca06dd2905b8d379d72da2aa6029099ae5caaa3369408787f6e506\""
Aug 13 00:17:17.881639 kubelet[2757]: E0813 00:17:17.881585 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.881639 kubelet[2757]: W0813 00:17:17.881611 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.881639 kubelet[2757]: E0813 00:17:17.881635 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.881881 kubelet[2757]: E0813 00:17:17.881846 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.881881 kubelet[2757]: W0813 00:17:17.881857 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.881881 kubelet[2757]: E0813 00:17:17.881878 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.882137 kubelet[2757]: E0813 00:17:17.882118 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.882172 kubelet[2757]: W0813 00:17:17.882145 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.882172 kubelet[2757]: E0813 00:17:17.882154 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.882442 kubelet[2757]: E0813 00:17:17.882422 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.882442 kubelet[2757]: W0813 00:17:17.882440 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.882518 kubelet[2757]: E0813 00:17:17.882452 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.882728 kubelet[2757]: E0813 00:17:17.882709 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.882776 kubelet[2757]: W0813 00:17:17.882724 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.882776 kubelet[2757]: E0813 00:17:17.882743 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.882946 kubelet[2757]: E0813 00:17:17.882921 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.882946 kubelet[2757]: W0813 00:17:17.882939 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.882946 kubelet[2757]: E0813 00:17:17.882947 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.883211 kubelet[2757]: E0813 00:17:17.883170 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.883211 kubelet[2757]: W0813 00:17:17.883197 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.883211 kubelet[2757]: E0813 00:17:17.883224 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.883783 kubelet[2757]: E0813 00:17:17.883761 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.883783 kubelet[2757]: W0813 00:17:17.883776 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.883850 kubelet[2757]: E0813 00:17:17.883786 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.884020 kubelet[2757]: E0813 00:17:17.883987 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.884020 kubelet[2757]: W0813 00:17:17.883999 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.884020 kubelet[2757]: E0813 00:17:17.884016 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.884270 kubelet[2757]: E0813 00:17:17.884255 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.884270 kubelet[2757]: W0813 00:17:17.884265 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.884345 kubelet[2757]: E0813 00:17:17.884273 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:17:17.884485 kubelet[2757]: E0813 00:17:17.884470 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:17:17.884485 kubelet[2757]: W0813 00:17:17.884480 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:17:17.884485 kubelet[2757]: E0813 00:17:17.884491 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:17.884780 kubelet[2757]: E0813 00:17:17.884749 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.884780 kubelet[2757]: W0813 00:17:17.884776 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.884867 kubelet[2757]: E0813 00:17:17.884804 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:17.885113 kubelet[2757]: E0813 00:17:17.885097 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.885113 kubelet[2757]: W0813 00:17:17.885108 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.885166 kubelet[2757]: E0813 00:17:17.885117 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:17.885347 kubelet[2757]: E0813 00:17:17.885328 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.885347 kubelet[2757]: W0813 00:17:17.885341 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.885412 kubelet[2757]: E0813 00:17:17.885352 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:17.885550 kubelet[2757]: E0813 00:17:17.885535 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.885550 kubelet[2757]: W0813 00:17:17.885545 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.885595 kubelet[2757]: E0813 00:17:17.885553 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:17.885750 kubelet[2757]: E0813 00:17:17.885735 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.885750 kubelet[2757]: W0813 00:17:17.885747 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.885804 kubelet[2757]: E0813 00:17:17.885755 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:17.885933 kubelet[2757]: E0813 00:17:17.885918 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.885933 kubelet[2757]: W0813 00:17:17.885928 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.885974 kubelet[2757]: E0813 00:17:17.885936 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:17.886104 kubelet[2757]: E0813 00:17:17.886090 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.886104 kubelet[2757]: W0813 00:17:17.886100 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.886154 kubelet[2757]: E0813 00:17:17.886107 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:17.886312 kubelet[2757]: E0813 00:17:17.886297 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.886312 kubelet[2757]: W0813 00:17:17.886307 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.886365 kubelet[2757]: E0813 00:17:17.886325 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:17.886535 kubelet[2757]: E0813 00:17:17.886521 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.886535 kubelet[2757]: W0813 00:17:17.886531 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.886583 kubelet[2757]: E0813 00:17:17.886539 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:17.886753 kubelet[2757]: E0813 00:17:17.886737 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.886753 kubelet[2757]: W0813 00:17:17.886747 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.886753 kubelet[2757]: E0813 00:17:17.886755 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:17.886972 kubelet[2757]: E0813 00:17:17.886957 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.886972 kubelet[2757]: W0813 00:17:17.886970 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.887038 kubelet[2757]: E0813 00:17:17.886980 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:17.887207 kubelet[2757]: E0813 00:17:17.887195 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.887207 kubelet[2757]: W0813 00:17:17.887205 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.887253 kubelet[2757]: E0813 00:17:17.887214 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:17.887431 kubelet[2757]: E0813 00:17:17.887419 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.887431 kubelet[2757]: W0813 00:17:17.887429 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.887487 kubelet[2757]: E0813 00:17:17.887437 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:17.887765 kubelet[2757]: E0813 00:17:17.887746 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.887765 kubelet[2757]: W0813 00:17:17.887760 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.887832 kubelet[2757]: E0813 00:17:17.887770 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:17.895498 kubelet[2757]: E0813 00:17:17.895470 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:17.895498 kubelet[2757]: W0813 00:17:17.895484 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:17.895498 kubelet[2757]: E0813 00:17:17.895494 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:19.134170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount765484295.mount: Deactivated successfully. Aug 13 00:17:19.580796 kubelet[2757]: E0813 00:17:19.580630 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkrpf" podUID="3471d1d5-f5a3-4dec-8445-1557d86e0087" Aug 13 00:17:20.014310 containerd[1579]: time="2025-08-13T00:17:20.014228160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:20.015009 containerd[1579]: time="2025-08-13T00:17:20.014977827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 00:17:20.016174 containerd[1579]: time="2025-08-13T00:17:20.016136494Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:20.018280 containerd[1579]: time="2025-08-13T00:17:20.018208506Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:20.018692 containerd[1579]: time="2025-08-13T00:17:20.018637641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.319882666s" Aug 13 00:17:20.018729 containerd[1579]: time="2025-08-13T00:17:20.018693606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 00:17:20.019663 containerd[1579]: time="2025-08-13T00:17:20.019576144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:17:20.034367 containerd[1579]: time="2025-08-13T00:17:20.034313896Z" level=info msg="CreateContainer within sandbox \"2b199d26b7e1cd960933ec0f882ea8a9f762486b393540805c653df4f656c974\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:17:20.042879 containerd[1579]: time="2025-08-13T00:17:20.042829040Z" level=info msg="Container b3c745a72423b1c522875e02c54e620b89a5b96e1ab711e526ad3b298cae488d: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:17:20.051959 containerd[1579]: time="2025-08-13T00:17:20.051925616Z" level=info msg="CreateContainer within sandbox \"2b199d26b7e1cd960933ec0f882ea8a9f762486b393540805c653df4f656c974\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b3c745a72423b1c522875e02c54e620b89a5b96e1ab711e526ad3b298cae488d\"" Aug 13 00:17:20.052400 containerd[1579]: time="2025-08-13T00:17:20.052379809Z" level=info msg="StartContainer for 
\"b3c745a72423b1c522875e02c54e620b89a5b96e1ab711e526ad3b298cae488d\"" Aug 13 00:17:20.054736 containerd[1579]: time="2025-08-13T00:17:20.054700057Z" level=info msg="connecting to shim b3c745a72423b1c522875e02c54e620b89a5b96e1ab711e526ad3b298cae488d" address="unix:///run/containerd/s/722317106342c8a58a9dc81350bcd7b7b59b8af3cd4e329da589ddb2abb84cf1" protocol=ttrpc version=3 Aug 13 00:17:20.080783 systemd[1]: Started cri-containerd-b3c745a72423b1c522875e02c54e620b89a5b96e1ab711e526ad3b298cae488d.scope - libcontainer container b3c745a72423b1c522875e02c54e620b89a5b96e1ab711e526ad3b298cae488d. Aug 13 00:17:20.132081 containerd[1579]: time="2025-08-13T00:17:20.132044391Z" level=info msg="StartContainer for \"b3c745a72423b1c522875e02c54e620b89a5b96e1ab711e526ad3b298cae488d\" returns successfully" Aug 13 00:17:20.654640 kubelet[2757]: E0813 00:17:20.654597 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:20.692864 kubelet[2757]: E0813 00:17:20.692799 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.692864 kubelet[2757]: W0813 00:17:20.692839 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.692864 kubelet[2757]: E0813 00:17:20.692873 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.693193 kubelet[2757]: E0813 00:17:20.693162 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.693193 kubelet[2757]: W0813 00:17:20.693188 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.693291 kubelet[2757]: E0813 00:17:20.693215 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.693475 kubelet[2757]: E0813 00:17:20.693448 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.693475 kubelet[2757]: W0813 00:17:20.693458 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.693475 kubelet[2757]: E0813 00:17:20.693466 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.693813 kubelet[2757]: E0813 00:17:20.693785 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.693813 kubelet[2757]: W0813 00:17:20.693795 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.693813 kubelet[2757]: E0813 00:17:20.693804 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.694031 kubelet[2757]: E0813 00:17:20.694008 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.694031 kubelet[2757]: W0813 00:17:20.694019 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.694031 kubelet[2757]: E0813 00:17:20.694028 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.694230 kubelet[2757]: E0813 00:17:20.694215 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.694230 kubelet[2757]: W0813 00:17:20.694224 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.694300 kubelet[2757]: E0813 00:17:20.694232 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.694430 kubelet[2757]: E0813 00:17:20.694413 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.694430 kubelet[2757]: W0813 00:17:20.694422 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.694430 kubelet[2757]: E0813 00:17:20.694430 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.694601 kubelet[2757]: E0813 00:17:20.694582 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.694601 kubelet[2757]: W0813 00:17:20.694591 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.694601 kubelet[2757]: E0813 00:17:20.694599 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.694803 kubelet[2757]: E0813 00:17:20.694785 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.694803 kubelet[2757]: W0813 00:17:20.694794 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.694803 kubelet[2757]: E0813 00:17:20.694802 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.695006 kubelet[2757]: E0813 00:17:20.694988 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.695006 kubelet[2757]: W0813 00:17:20.694999 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.695006 kubelet[2757]: E0813 00:17:20.695006 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.695180 kubelet[2757]: E0813 00:17:20.695164 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.695180 kubelet[2757]: W0813 00:17:20.695172 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.695180 kubelet[2757]: E0813 00:17:20.695180 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.695361 kubelet[2757]: E0813 00:17:20.695343 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.695361 kubelet[2757]: W0813 00:17:20.695352 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.695361 kubelet[2757]: E0813 00:17:20.695361 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.695551 kubelet[2757]: E0813 00:17:20.695536 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.695551 kubelet[2757]: W0813 00:17:20.695545 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.695600 kubelet[2757]: E0813 00:17:20.695553 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.695750 kubelet[2757]: E0813 00:17:20.695734 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.695750 kubelet[2757]: W0813 00:17:20.695742 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.695750 kubelet[2757]: E0813 00:17:20.695750 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.695921 kubelet[2757]: E0813 00:17:20.695903 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.695921 kubelet[2757]: W0813 00:17:20.695912 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.695921 kubelet[2757]: E0813 00:17:20.695919 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.705215 kubelet[2757]: E0813 00:17:20.705175 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.705215 kubelet[2757]: W0813 00:17:20.705209 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.705359 kubelet[2757]: E0813 00:17:20.705237 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.705540 kubelet[2757]: E0813 00:17:20.705502 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.705540 kubelet[2757]: W0813 00:17:20.705518 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.705615 kubelet[2757]: E0813 00:17:20.705563 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.705865 kubelet[2757]: E0813 00:17:20.705844 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.705865 kubelet[2757]: W0813 00:17:20.705859 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.705969 kubelet[2757]: E0813 00:17:20.705873 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.706130 kubelet[2757]: E0813 00:17:20.706110 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.706130 kubelet[2757]: W0813 00:17:20.706125 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.706220 kubelet[2757]: E0813 00:17:20.706138 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.706341 kubelet[2757]: E0813 00:17:20.706324 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.706341 kubelet[2757]: W0813 00:17:20.706334 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.706409 kubelet[2757]: E0813 00:17:20.706351 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.706537 kubelet[2757]: E0813 00:17:20.706518 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.706537 kubelet[2757]: W0813 00:17:20.706527 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.706537 kubelet[2757]: E0813 00:17:20.706535 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.706771 kubelet[2757]: E0813 00:17:20.706752 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.706771 kubelet[2757]: W0813 00:17:20.706764 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.706844 kubelet[2757]: E0813 00:17:20.706775 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.707190 kubelet[2757]: E0813 00:17:20.707158 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.707190 kubelet[2757]: W0813 00:17:20.707184 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.707293 kubelet[2757]: E0813 00:17:20.707200 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.707450 kubelet[2757]: E0813 00:17:20.707420 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.707450 kubelet[2757]: W0813 00:17:20.707433 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.707450 kubelet[2757]: E0813 00:17:20.707444 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.707720 kubelet[2757]: E0813 00:17:20.707694 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.707720 kubelet[2757]: W0813 00:17:20.707706 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.707720 kubelet[2757]: E0813 00:17:20.707718 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.707933 kubelet[2757]: E0813 00:17:20.707909 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.707933 kubelet[2757]: W0813 00:17:20.707921 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.707933 kubelet[2757]: E0813 00:17:20.707932 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.708274 kubelet[2757]: E0813 00:17:20.708240 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.708274 kubelet[2757]: W0813 00:17:20.708264 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.708359 kubelet[2757]: E0813 00:17:20.708275 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.708540 kubelet[2757]: E0813 00:17:20.708518 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.708540 kubelet[2757]: W0813 00:17:20.708531 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.708540 kubelet[2757]: E0813 00:17:20.708541 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.708928 kubelet[2757]: E0813 00:17:20.708891 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.708928 kubelet[2757]: W0813 00:17:20.708915 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.709006 kubelet[2757]: E0813 00:17:20.708932 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.709186 kubelet[2757]: E0813 00:17:20.709155 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.709186 kubelet[2757]: W0813 00:17:20.709172 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.709186 kubelet[2757]: E0813 00:17:20.709186 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.709492 kubelet[2757]: E0813 00:17:20.709471 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.709492 kubelet[2757]: W0813 00:17:20.709486 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.709564 kubelet[2757]: E0813 00:17:20.709499 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:20.709781 kubelet[2757]: E0813 00:17:20.709764 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.709781 kubelet[2757]: W0813 00:17:20.709776 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.709847 kubelet[2757]: E0813 00:17:20.709785 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:20.710315 kubelet[2757]: E0813 00:17:20.710282 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:20.710315 kubelet[2757]: W0813 00:17:20.710298 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:20.710315 kubelet[2757]: E0813 00:17:20.710311 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.583209 kubelet[2757]: E0813 00:17:21.583144 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkrpf" podUID="3471d1d5-f5a3-4dec-8445-1557d86e0087" Aug 13 00:17:21.656128 kubelet[2757]: I0813 00:17:21.656086 2757 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:17:21.656563 kubelet[2757]: E0813 00:17:21.656448 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:21.700678 kubelet[2757]: E0813 00:17:21.700606 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.700797 kubelet[2757]: W0813 00:17:21.700644 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.700797 kubelet[2757]: E0813 00:17:21.700726 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.700988 kubelet[2757]: E0813 00:17:21.700970 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.700988 kubelet[2757]: W0813 00:17:21.700985 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.701044 kubelet[2757]: E0813 00:17:21.700996 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.701208 kubelet[2757]: E0813 00:17:21.701192 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.701208 kubelet[2757]: W0813 00:17:21.701204 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.701272 kubelet[2757]: E0813 00:17:21.701217 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.701481 kubelet[2757]: E0813 00:17:21.701462 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.701481 kubelet[2757]: W0813 00:17:21.701476 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.701544 kubelet[2757]: E0813 00:17:21.701489 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.701738 kubelet[2757]: E0813 00:17:21.701719 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.701738 kubelet[2757]: W0813 00:17:21.701731 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.701795 kubelet[2757]: E0813 00:17:21.701744 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.701965 kubelet[2757]: E0813 00:17:21.701947 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.701965 kubelet[2757]: W0813 00:17:21.701960 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.702025 kubelet[2757]: E0813 00:17:21.701971 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.702183 kubelet[2757]: E0813 00:17:21.702167 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.702183 kubelet[2757]: W0813 00:17:21.702179 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.702242 kubelet[2757]: E0813 00:17:21.702189 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.702408 kubelet[2757]: E0813 00:17:21.702392 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.702408 kubelet[2757]: W0813 00:17:21.702405 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.702464 kubelet[2757]: E0813 00:17:21.702415 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.702627 kubelet[2757]: E0813 00:17:21.702611 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.702627 kubelet[2757]: W0813 00:17:21.702623 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.702697 kubelet[2757]: E0813 00:17:21.702634 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.702859 kubelet[2757]: E0813 00:17:21.702843 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.702859 kubelet[2757]: W0813 00:17:21.702855 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.702915 kubelet[2757]: E0813 00:17:21.702867 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.703065 kubelet[2757]: E0813 00:17:21.703049 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.703065 kubelet[2757]: W0813 00:17:21.703061 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.703122 kubelet[2757]: E0813 00:17:21.703072 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.703289 kubelet[2757]: E0813 00:17:21.703272 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.703289 kubelet[2757]: W0813 00:17:21.703285 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.703346 kubelet[2757]: E0813 00:17:21.703295 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.703525 kubelet[2757]: E0813 00:17:21.703509 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.703525 kubelet[2757]: W0813 00:17:21.703520 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.703580 kubelet[2757]: E0813 00:17:21.703531 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.703757 kubelet[2757]: E0813 00:17:21.703741 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.703757 kubelet[2757]: W0813 00:17:21.703754 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.703817 kubelet[2757]: E0813 00:17:21.703767 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.703987 kubelet[2757]: E0813 00:17:21.703971 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.703987 kubelet[2757]: W0813 00:17:21.703984 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.704043 kubelet[2757]: E0813 00:17:21.703994 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.711345 kubelet[2757]: E0813 00:17:21.711317 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.711345 kubelet[2757]: W0813 00:17:21.711336 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.711416 kubelet[2757]: E0813 00:17:21.711357 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.711621 kubelet[2757]: E0813 00:17:21.711599 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.711621 kubelet[2757]: W0813 00:17:21.711609 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.711621 kubelet[2757]: E0813 00:17:21.711617 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.711920 kubelet[2757]: E0813 00:17:21.711906 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.711920 kubelet[2757]: W0813 00:17:21.711914 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.711920 kubelet[2757]: E0813 00:17:21.711923 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.712158 kubelet[2757]: E0813 00:17:21.712134 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.712158 kubelet[2757]: W0813 00:17:21.712147 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.712210 kubelet[2757]: E0813 00:17:21.712158 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.712373 kubelet[2757]: E0813 00:17:21.712358 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.712405 kubelet[2757]: W0813 00:17:21.712369 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.712405 kubelet[2757]: E0813 00:17:21.712389 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.712565 kubelet[2757]: E0813 00:17:21.712550 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.712565 kubelet[2757]: W0813 00:17:21.712560 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.712620 kubelet[2757]: E0813 00:17:21.712567 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.712764 kubelet[2757]: E0813 00:17:21.712750 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.712764 kubelet[2757]: W0813 00:17:21.712759 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.712816 kubelet[2757]: E0813 00:17:21.712767 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.713164 kubelet[2757]: E0813 00:17:21.713133 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.713201 kubelet[2757]: W0813 00:17:21.713164 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.713236 kubelet[2757]: E0813 00:17:21.713183 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.713426 kubelet[2757]: E0813 00:17:21.713409 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.713426 kubelet[2757]: W0813 00:17:21.713422 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.713481 kubelet[2757]: E0813 00:17:21.713432 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.713715 kubelet[2757]: E0813 00:17:21.713699 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.713715 kubelet[2757]: W0813 00:17:21.713712 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.713767 kubelet[2757]: E0813 00:17:21.713723 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.713933 kubelet[2757]: E0813 00:17:21.713917 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.713933 kubelet[2757]: W0813 00:17:21.713929 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.713982 kubelet[2757]: E0813 00:17:21.713939 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.714163 kubelet[2757]: E0813 00:17:21.714147 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.714163 kubelet[2757]: W0813 00:17:21.714160 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.714219 kubelet[2757]: E0813 00:17:21.714170 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.714413 kubelet[2757]: E0813 00:17:21.714397 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.714448 kubelet[2757]: W0813 00:17:21.714426 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.714448 kubelet[2757]: E0813 00:17:21.714438 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.714707 kubelet[2757]: E0813 00:17:21.714692 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.714707 kubelet[2757]: W0813 00:17:21.714704 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.714765 kubelet[2757]: E0813 00:17:21.714714 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.714897 kubelet[2757]: E0813 00:17:21.714884 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.714897 kubelet[2757]: W0813 00:17:21.714893 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.714941 kubelet[2757]: E0813 00:17:21.714901 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.715083 kubelet[2757]: E0813 00:17:21.715071 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.715083 kubelet[2757]: W0813 00:17:21.715080 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.715127 kubelet[2757]: E0813 00:17:21.715088 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.715366 kubelet[2757]: E0813 00:17:21.715345 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.715366 kubelet[2757]: W0813 00:17:21.715357 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.715366 kubelet[2757]: E0813 00:17:21.715367 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:21.715557 kubelet[2757]: E0813 00:17:21.715545 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:21.715557 kubelet[2757]: W0813 00:17:21.715555 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:21.715599 kubelet[2757]: E0813 00:17:21.715563 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:21.805737 containerd[1579]: time="2025-08-13T00:17:21.805685173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:21.806620 containerd[1579]: time="2025-08-13T00:17:21.806585615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 00:17:21.808074 containerd[1579]: time="2025-08-13T00:17:21.808042902Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:21.812593 containerd[1579]: time="2025-08-13T00:17:21.812536031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:21.813172 containerd[1579]: time="2025-08-13T00:17:21.813138331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.793495441s" Aug 13 00:17:21.813216 containerd[1579]: time="2025-08-13T00:17:21.813175261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 00:17:21.818044 containerd[1579]: time="2025-08-13T00:17:21.817992928Z" level=info msg="CreateContainer within sandbox \"172dc71b33ca06dd2905b8d379d72da2aa6029099ae5caaa3369408787f6e506\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:17:21.828669 containerd[1579]: time="2025-08-13T00:17:21.828589289Z" level=info msg="Container 621fdfecb69e385f4ba652d52a81255632e56b481714ffdb9eefa4b90efccc25: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:17:21.838466 containerd[1579]: time="2025-08-13T00:17:21.838361452Z" level=info msg="CreateContainer within sandbox \"172dc71b33ca06dd2905b8d379d72da2aa6029099ae5caaa3369408787f6e506\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"621fdfecb69e385f4ba652d52a81255632e56b481714ffdb9eefa4b90efccc25\"" Aug 13 00:17:21.839080 containerd[1579]: time="2025-08-13T00:17:21.838882201Z" level=info msg="StartContainer for \"621fdfecb69e385f4ba652d52a81255632e56b481714ffdb9eefa4b90efccc25\"" Aug 13 00:17:21.841705 containerd[1579]: time="2025-08-13T00:17:21.841639969Z" level=info msg="connecting to shim 621fdfecb69e385f4ba652d52a81255632e56b481714ffdb9eefa4b90efccc25" address="unix:///run/containerd/s/c0d390c841f5bfff6f7ce16d3bba2cd6e8d9fc8bd5c330b52a616653f924e87b" protocol=ttrpc version=3 Aug 13 00:17:21.871824 systemd[1]: Started cri-containerd-621fdfecb69e385f4ba652d52a81255632e56b481714ffdb9eefa4b90efccc25.scope - libcontainer container 621fdfecb69e385f4ba652d52a81255632e56b481714ffdb9eefa4b90efccc25. Aug 13 00:17:21.917923 containerd[1579]: time="2025-08-13T00:17:21.917872159Z" level=info msg="StartContainer for \"621fdfecb69e385f4ba652d52a81255632e56b481714ffdb9eefa4b90efccc25\" returns successfully" Aug 13 00:17:21.930442 systemd[1]: cri-containerd-621fdfecb69e385f4ba652d52a81255632e56b481714ffdb9eefa4b90efccc25.scope: Deactivated successfully. 
Aug 13 00:17:21.932244 containerd[1579]: time="2025-08-13T00:17:21.932190611Z" level=info msg="received exit event container_id:\"621fdfecb69e385f4ba652d52a81255632e56b481714ffdb9eefa4b90efccc25\" id:\"621fdfecb69e385f4ba652d52a81255632e56b481714ffdb9eefa4b90efccc25\" pid:3478 exited_at:{seconds:1755044241 nanos:931759090}" Aug 13 00:17:21.932376 containerd[1579]: time="2025-08-13T00:17:21.932353637Z" level=info msg="TaskExit event in podsandbox handler container_id:\"621fdfecb69e385f4ba652d52a81255632e56b481714ffdb9eefa4b90efccc25\" id:\"621fdfecb69e385f4ba652d52a81255632e56b481714ffdb9eefa4b90efccc25\" pid:3478 exited_at:{seconds:1755044241 nanos:931759090}" Aug 13 00:17:21.955170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-621fdfecb69e385f4ba652d52a81255632e56b481714ffdb9eefa4b90efccc25-rootfs.mount: Deactivated successfully. Aug 13 00:17:22.663371 containerd[1579]: time="2025-08-13T00:17:22.663240101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:17:22.678787 kubelet[2757]: I0813 00:17:22.677508 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b5db85bd8-wkt4r" podStartSLOduration=3.35652895 podStartE2EDuration="5.677484751s" podCreationTimestamp="2025-08-13 00:17:17 +0000 UTC" firstStartedPulling="2025-08-13 00:17:17.698422051 +0000 UTC m=+18.222971908" lastFinishedPulling="2025-08-13 00:17:20.019377852 +0000 UTC m=+20.543927709" observedRunningTime="2025-08-13 00:17:20.665990392 +0000 UTC m=+21.190540249" watchObservedRunningTime="2025-08-13 00:17:22.677484751 +0000 UTC m=+23.202034608" Aug 13 00:17:23.580633 kubelet[2757]: E0813 00:17:23.580560 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkrpf" 
podUID="3471d1d5-f5a3-4dec-8445-1557d86e0087" Aug 13 00:17:25.580475 kubelet[2757]: E0813 00:17:25.580394 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkrpf" podUID="3471d1d5-f5a3-4dec-8445-1557d86e0087" Aug 13 00:17:26.768006 containerd[1579]: time="2025-08-13T00:17:26.767952955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:26.769317 containerd[1579]: time="2025-08-13T00:17:26.769267232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 00:17:26.770426 containerd[1579]: time="2025-08-13T00:17:26.770383518Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:26.772911 containerd[1579]: time="2025-08-13T00:17:26.772876878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:26.773603 containerd[1579]: time="2025-08-13T00:17:26.773560342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.110237556s" Aug 13 00:17:26.773603 containerd[1579]: time="2025-08-13T00:17:26.773589096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference 
\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 00:17:26.778472 containerd[1579]: time="2025-08-13T00:17:26.778430494Z" level=info msg="CreateContainer within sandbox \"172dc71b33ca06dd2905b8d379d72da2aa6029099ae5caaa3369408787f6e506\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:17:26.789242 containerd[1579]: time="2025-08-13T00:17:26.789132495Z" level=info msg="Container f56d124de71bcca91eb564a3159b7628b262ce9621b26e6e2928b0bcfad84a31: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:17:26.799534 containerd[1579]: time="2025-08-13T00:17:26.799489327Z" level=info msg="CreateContainer within sandbox \"172dc71b33ca06dd2905b8d379d72da2aa6029099ae5caaa3369408787f6e506\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f56d124de71bcca91eb564a3159b7628b262ce9621b26e6e2928b0bcfad84a31\"" Aug 13 00:17:26.800240 containerd[1579]: time="2025-08-13T00:17:26.800197986Z" level=info msg="StartContainer for \"f56d124de71bcca91eb564a3159b7628b262ce9621b26e6e2928b0bcfad84a31\"" Aug 13 00:17:26.801550 containerd[1579]: time="2025-08-13T00:17:26.801502575Z" level=info msg="connecting to shim f56d124de71bcca91eb564a3159b7628b262ce9621b26e6e2928b0bcfad84a31" address="unix:///run/containerd/s/c0d390c841f5bfff6f7ce16d3bba2cd6e8d9fc8bd5c330b52a616653f924e87b" protocol=ttrpc version=3 Aug 13 00:17:26.837916 systemd[1]: Started cri-containerd-f56d124de71bcca91eb564a3159b7628b262ce9621b26e6e2928b0bcfad84a31.scope - libcontainer container f56d124de71bcca91eb564a3159b7628b262ce9621b26e6e2928b0bcfad84a31. 
Aug 13 00:17:26.897964 containerd[1579]: time="2025-08-13T00:17:26.897913941Z" level=info msg="StartContainer for \"f56d124de71bcca91eb564a3159b7628b262ce9621b26e6e2928b0bcfad84a31\" returns successfully" Aug 13 00:17:27.580461 kubelet[2757]: E0813 00:17:27.580390 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wkrpf" podUID="3471d1d5-f5a3-4dec-8445-1557d86e0087" Aug 13 00:17:28.884204 containerd[1579]: time="2025-08-13T00:17:28.884144192Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:17:28.887542 systemd[1]: cri-containerd-f56d124de71bcca91eb564a3159b7628b262ce9621b26e6e2928b0bcfad84a31.scope: Deactivated successfully. Aug 13 00:17:28.887867 systemd[1]: cri-containerd-f56d124de71bcca91eb564a3159b7628b262ce9621b26e6e2928b0bcfad84a31.scope: Consumed 642ms CPU time, 179M memory peak, 2.7M read from disk, 171.2M written to disk. 
Aug 13 00:17:28.888594 containerd[1579]: time="2025-08-13T00:17:28.888459591Z" level=info msg="received exit event container_id:\"f56d124de71bcca91eb564a3159b7628b262ce9621b26e6e2928b0bcfad84a31\" id:\"f56d124de71bcca91eb564a3159b7628b262ce9621b26e6e2928b0bcfad84a31\" pid:3539 exited_at:{seconds:1755044248 nanos:888150221}" Aug 13 00:17:28.888594 containerd[1579]: time="2025-08-13T00:17:28.888557345Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f56d124de71bcca91eb564a3159b7628b262ce9621b26e6e2928b0bcfad84a31\" id:\"f56d124de71bcca91eb564a3159b7628b262ce9621b26e6e2928b0bcfad84a31\" pid:3539 exited_at:{seconds:1755044248 nanos:888150221}" Aug 13 00:17:28.904866 kubelet[2757]: I0813 00:17:28.904800 2757 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:17:28.911567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f56d124de71bcca91eb564a3159b7628b262ce9621b26e6e2928b0bcfad84a31-rootfs.mount: Deactivated successfully. Aug 13 00:17:29.051828 systemd[1]: Created slice kubepods-burstable-pod08cdaf86_0825_4379_a4b9_e5cd5589ebda.slice - libcontainer container kubepods-burstable-pod08cdaf86_0825_4379_a4b9_e5cd5589ebda.slice. 
Aug 13 00:17:29.060174 kubelet[2757]: I0813 00:17:29.060124 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08cdaf86-0825-4379-a4b9-e5cd5589ebda-config-volume\") pod \"coredns-674b8bbfcf-v8jth\" (UID: \"08cdaf86-0825-4379-a4b9-e5cd5589ebda\") " pod="kube-system/coredns-674b8bbfcf-v8jth" Aug 13 00:17:29.060483 kubelet[2757]: I0813 00:17:29.060182 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4x88\" (UniqueName: \"kubernetes.io/projected/08cdaf86-0825-4379-a4b9-e5cd5589ebda-kube-api-access-l4x88\") pod \"coredns-674b8bbfcf-v8jth\" (UID: \"08cdaf86-0825-4379-a4b9-e5cd5589ebda\") " pod="kube-system/coredns-674b8bbfcf-v8jth" Aug 13 00:17:29.070382 systemd[1]: Created slice kubepods-besteffort-pod6c2d4e97_fe00_43c2_a642_bfd9b285ffd6.slice - libcontainer container kubepods-besteffort-pod6c2d4e97_fe00_43c2_a642_bfd9b285ffd6.slice. Aug 13 00:17:29.079468 systemd[1]: Created slice kubepods-burstable-pod80719b13_001f_4dd0_926a_1edc3ca41591.slice - libcontainer container kubepods-burstable-pod80719b13_001f_4dd0_926a_1edc3ca41591.slice. Aug 13 00:17:29.087208 systemd[1]: Created slice kubepods-besteffort-poddecc63a8_ef98_4b0e_8f42_4be53f0c43dd.slice - libcontainer container kubepods-besteffort-poddecc63a8_ef98_4b0e_8f42_4be53f0c43dd.slice. Aug 13 00:17:29.095627 systemd[1]: Created slice kubepods-besteffort-pod2820e684_a527_4c3d_a8a0_b491fc1ad579.slice - libcontainer container kubepods-besteffort-pod2820e684_a527_4c3d_a8a0_b491fc1ad579.slice. Aug 13 00:17:29.104068 systemd[1]: Created slice kubepods-besteffort-pod5bcc2c49_47f9_4583_9324_281976f6433c.slice - libcontainer container kubepods-besteffort-pod5bcc2c49_47f9_4583_9324_281976f6433c.slice. 
Aug 13 00:17:29.114418 systemd[1]: Created slice kubepods-besteffort-pod6c80d32a_9cf7_4576_9c40_eec184d75b4d.slice - libcontainer container kubepods-besteffort-pod6c80d32a_9cf7_4576_9c40_eec184d75b4d.slice. Aug 13 00:17:29.162784 kubelet[2757]: I0813 00:17:29.161736 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94xgb\" (UniqueName: \"kubernetes.io/projected/80719b13-001f-4dd0-926a-1edc3ca41591-kube-api-access-94xgb\") pod \"coredns-674b8bbfcf-dfrcc\" (UID: \"80719b13-001f-4dd0-926a-1edc3ca41591\") " pod="kube-system/coredns-674b8bbfcf-dfrcc" Aug 13 00:17:29.162784 kubelet[2757]: I0813 00:17:29.161813 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5bcc2c49-47f9-4583-9324-281976f6433c-goldmane-key-pair\") pod \"goldmane-768f4c5c69-94mpd\" (UID: \"5bcc2c49-47f9-4583-9324-281976f6433c\") " pod="calico-system/goldmane-768f4c5c69-94mpd" Aug 13 00:17:29.162784 kubelet[2757]: I0813 00:17:29.161842 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z67qv\" (UniqueName: \"kubernetes.io/projected/5bcc2c49-47f9-4583-9324-281976f6433c-kube-api-access-z67qv\") pod \"goldmane-768f4c5c69-94mpd\" (UID: \"5bcc2c49-47f9-4583-9324-281976f6433c\") " pod="calico-system/goldmane-768f4c5c69-94mpd" Aug 13 00:17:29.162784 kubelet[2757]: I0813 00:17:29.161870 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bcc2c49-47f9-4583-9324-281976f6433c-config\") pod \"goldmane-768f4c5c69-94mpd\" (UID: \"5bcc2c49-47f9-4583-9324-281976f6433c\") " pod="calico-system/goldmane-768f4c5c69-94mpd" Aug 13 00:17:29.162784 kubelet[2757]: I0813 00:17:29.161894 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8z9q2\" (UniqueName: \"kubernetes.io/projected/6c2d4e97-fe00-43c2-a642-bfd9b285ffd6-kube-api-access-8z9q2\") pod \"calico-apiserver-84464dd98-24855\" (UID: \"6c2d4e97-fe00-43c2-a642-bfd9b285ffd6\") " pod="calico-apiserver/calico-apiserver-84464dd98-24855" Aug 13 00:17:29.163096 kubelet[2757]: I0813 00:17:29.161929 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2820e684-a527-4c3d-a8a0-b491fc1ad579-calico-apiserver-certs\") pod \"calico-apiserver-84464dd98-fq74k\" (UID: \"2820e684-a527-4c3d-a8a0-b491fc1ad579\") " pod="calico-apiserver/calico-apiserver-84464dd98-fq74k" Aug 13 00:17:29.163096 kubelet[2757]: I0813 00:17:29.161949 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bcc2c49-47f9-4583-9324-281976f6433c-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-94mpd\" (UID: \"5bcc2c49-47f9-4583-9324-281976f6433c\") " pod="calico-system/goldmane-768f4c5c69-94mpd" Aug 13 00:17:29.163096 kubelet[2757]: I0813 00:17:29.161967 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c80d32a-9cf7-4576-9c40-eec184d75b4d-whisker-ca-bundle\") pod \"whisker-8489667c94-t6zfc\" (UID: \"6c80d32a-9cf7-4576-9c40-eec184d75b4d\") " pod="calico-system/whisker-8489667c94-t6zfc" Aug 13 00:17:29.163096 kubelet[2757]: I0813 00:17:29.162005 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7blw\" (UniqueName: \"kubernetes.io/projected/2820e684-a527-4c3d-a8a0-b491fc1ad579-kube-api-access-f7blw\") pod \"calico-apiserver-84464dd98-fq74k\" (UID: \"2820e684-a527-4c3d-a8a0-b491fc1ad579\") " pod="calico-apiserver/calico-apiserver-84464dd98-fq74k" Aug 13 00:17:29.163096 kubelet[2757]: 
I0813 00:17:29.162027 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c80d32a-9cf7-4576-9c40-eec184d75b4d-whisker-backend-key-pair\") pod \"whisker-8489667c94-t6zfc\" (UID: \"6c80d32a-9cf7-4576-9c40-eec184d75b4d\") " pod="calico-system/whisker-8489667c94-t6zfc" Aug 13 00:17:29.163277 kubelet[2757]: I0813 00:17:29.162057 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/decc63a8-ef98-4b0e-8f42-4be53f0c43dd-tigera-ca-bundle\") pod \"calico-kube-controllers-b795fc7dc-4z8w4\" (UID: \"decc63a8-ef98-4b0e-8f42-4be53f0c43dd\") " pod="calico-system/calico-kube-controllers-b795fc7dc-4z8w4" Aug 13 00:17:29.163277 kubelet[2757]: I0813 00:17:29.162076 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80719b13-001f-4dd0-926a-1edc3ca41591-config-volume\") pod \"coredns-674b8bbfcf-dfrcc\" (UID: \"80719b13-001f-4dd0-926a-1edc3ca41591\") " pod="kube-system/coredns-674b8bbfcf-dfrcc" Aug 13 00:17:29.163277 kubelet[2757]: I0813 00:17:29.162096 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6c2d4e97-fe00-43c2-a642-bfd9b285ffd6-calico-apiserver-certs\") pod \"calico-apiserver-84464dd98-24855\" (UID: \"6c2d4e97-fe00-43c2-a642-bfd9b285ffd6\") " pod="calico-apiserver/calico-apiserver-84464dd98-24855" Aug 13 00:17:29.163277 kubelet[2757]: I0813 00:17:29.162115 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xhzw\" (UniqueName: \"kubernetes.io/projected/6c80d32a-9cf7-4576-9c40-eec184d75b4d-kube-api-access-2xhzw\") pod \"whisker-8489667c94-t6zfc\" (UID: 
\"6c80d32a-9cf7-4576-9c40-eec184d75b4d\") " pod="calico-system/whisker-8489667c94-t6zfc" Aug 13 00:17:29.163277 kubelet[2757]: I0813 00:17:29.162134 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q96ck\" (UniqueName: \"kubernetes.io/projected/decc63a8-ef98-4b0e-8f42-4be53f0c43dd-kube-api-access-q96ck\") pod \"calico-kube-controllers-b795fc7dc-4z8w4\" (UID: \"decc63a8-ef98-4b0e-8f42-4be53f0c43dd\") " pod="calico-system/calico-kube-controllers-b795fc7dc-4z8w4" Aug 13 00:17:29.358142 kubelet[2757]: E0813 00:17:29.358066 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:29.358874 containerd[1579]: time="2025-08-13T00:17:29.358813242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v8jth,Uid:08cdaf86-0825-4379-a4b9-e5cd5589ebda,Namespace:kube-system,Attempt:0,}" Aug 13 00:17:29.375902 containerd[1579]: time="2025-08-13T00:17:29.375861609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84464dd98-24855,Uid:6c2d4e97-fe00-43c2-a642-bfd9b285ffd6,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:17:29.384354 kubelet[2757]: E0813 00:17:29.384304 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:29.385195 containerd[1579]: time="2025-08-13T00:17:29.385151795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dfrcc,Uid:80719b13-001f-4dd0-926a-1edc3ca41591,Namespace:kube-system,Attempt:0,}" Aug 13 00:17:29.392978 containerd[1579]: time="2025-08-13T00:17:29.392947596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b795fc7dc-4z8w4,Uid:decc63a8-ef98-4b0e-8f42-4be53f0c43dd,Namespace:calico-system,Attempt:0,}" Aug 
13 00:17:29.399688 containerd[1579]: time="2025-08-13T00:17:29.399606734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84464dd98-fq74k,Uid:2820e684-a527-4c3d-a8a0-b491fc1ad579,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:17:29.411440 containerd[1579]: time="2025-08-13T00:17:29.411390661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-94mpd,Uid:5bcc2c49-47f9-4583-9324-281976f6433c,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:29.419349 containerd[1579]: time="2025-08-13T00:17:29.419219063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8489667c94-t6zfc,Uid:6c80d32a-9cf7-4576-9c40-eec184d75b4d,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:29.428671 containerd[1579]: time="2025-08-13T00:17:29.428615068Z" level=error msg="Failed to destroy network for sandbox \"5e2e99601790304febd8bf86d33d4816c8f3f9278088a282c5471cf05a0918e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.588167 systemd[1]: Created slice kubepods-besteffort-pod3471d1d5_f5a3_4dec_8445_1557d86e0087.slice - libcontainer container kubepods-besteffort-pod3471d1d5_f5a3_4dec_8445_1557d86e0087.slice. 
Aug 13 00:17:29.590678 containerd[1579]: time="2025-08-13T00:17:29.590618994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wkrpf,Uid:3471d1d5-f5a3-4dec-8445-1557d86e0087,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:29.680845 containerd[1579]: time="2025-08-13T00:17:29.680496238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:17:29.807298 containerd[1579]: time="2025-08-13T00:17:29.807211001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v8jth,Uid:08cdaf86-0825-4379-a4b9-e5cd5589ebda,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e2e99601790304febd8bf86d33d4816c8f3f9278088a282c5471cf05a0918e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.807543 kubelet[2757]: E0813 00:17:29.807488 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e2e99601790304febd8bf86d33d4816c8f3f9278088a282c5471cf05a0918e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.807596 kubelet[2757]: E0813 00:17:29.807565 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e2e99601790304febd8bf86d33d4816c8f3f9278088a282c5471cf05a0918e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v8jth" Aug 13 00:17:29.807596 kubelet[2757]: E0813 00:17:29.807592 2757 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e2e99601790304febd8bf86d33d4816c8f3f9278088a282c5471cf05a0918e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v8jth" Aug 13 00:17:29.807745 kubelet[2757]: E0813 00:17:29.807645 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-v8jth_kube-system(08cdaf86-0825-4379-a4b9-e5cd5589ebda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-v8jth_kube-system(08cdaf86-0825-4379-a4b9-e5cd5589ebda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e2e99601790304febd8bf86d33d4816c8f3f9278088a282c5471cf05a0918e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-v8jth" podUID="08cdaf86-0825-4379-a4b9-e5cd5589ebda" Aug 13 00:17:29.930580 containerd[1579]: time="2025-08-13T00:17:29.930466029Z" level=error msg="Failed to destroy network for sandbox \"0a5e56456055c5e486bae0d038d6f686346792cc88274493602976e7b662d515\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.933377 containerd[1579]: time="2025-08-13T00:17:29.933206374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84464dd98-24855,Uid:6c2d4e97-fe00-43c2-a642-bfd9b285ffd6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a5e56456055c5e486bae0d038d6f686346792cc88274493602976e7b662d515\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.934341 systemd[1]: run-netns-cni\x2d4a2d3510\x2d7d01\x2dd50a\x2da2d0\x2dd22dc6ee9fe0.mount: Deactivated successfully. Aug 13 00:17:29.936100 kubelet[2757]: E0813 00:17:29.935829 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a5e56456055c5e486bae0d038d6f686346792cc88274493602976e7b662d515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.936503 kubelet[2757]: E0813 00:17:29.936473 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a5e56456055c5e486bae0d038d6f686346792cc88274493602976e7b662d515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84464dd98-24855" Aug 13 00:17:29.936544 kubelet[2757]: E0813 00:17:29.936510 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a5e56456055c5e486bae0d038d6f686346792cc88274493602976e7b662d515\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84464dd98-24855" Aug 13 00:17:29.936678 kubelet[2757]: E0813 00:17:29.936575 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-84464dd98-24855_calico-apiserver(6c2d4e97-fe00-43c2-a642-bfd9b285ffd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84464dd98-24855_calico-apiserver(6c2d4e97-fe00-43c2-a642-bfd9b285ffd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a5e56456055c5e486bae0d038d6f686346792cc88274493602976e7b662d515\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84464dd98-24855" podUID="6c2d4e97-fe00-43c2-a642-bfd9b285ffd6" Aug 13 00:17:29.953766 containerd[1579]: time="2025-08-13T00:17:29.953135186Z" level=error msg="Failed to destroy network for sandbox \"d4b61c3a3ef5385377b214303cefadb7b0e9616730600af41def76757cbbb124\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.953928 containerd[1579]: time="2025-08-13T00:17:29.953820893Z" level=error msg="Failed to destroy network for sandbox \"f36d99e1d444bbbb9759ac007724d02ad9692d8b2dddb87fa1be7f2d62273e68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.955827 systemd[1]: run-netns-cni\x2d73bec6ab\x2d4db5\x2d01d4\x2d43cb\x2d14468fc6171a.mount: Deactivated successfully. 
Aug 13 00:17:29.958596 containerd[1579]: time="2025-08-13T00:17:29.958547234Z" level=error msg="Failed to destroy network for sandbox \"0cfb54265eef00e268d018012b5546d0511e396973db121984e1377020df9533\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.961369 systemd[1]: run-netns-cni\x2d5a15fce9\x2da7a8\x2d814c\x2db4f9\x2df213892638ee.mount: Deactivated successfully. Aug 13 00:17:29.963945 containerd[1579]: time="2025-08-13T00:17:29.963802447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dfrcc,Uid:80719b13-001f-4dd0-926a-1edc3ca41591,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4b61c3a3ef5385377b214303cefadb7b0e9616730600af41def76757cbbb124\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.964416 kubelet[2757]: E0813 00:17:29.964375 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4b61c3a3ef5385377b214303cefadb7b0e9616730600af41def76757cbbb124\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.964564 kubelet[2757]: E0813 00:17:29.964547 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4b61c3a3ef5385377b214303cefadb7b0e9616730600af41def76757cbbb124\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-dfrcc" Aug 13 00:17:29.964667 kubelet[2757]: E0813 00:17:29.964637 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4b61c3a3ef5385377b214303cefadb7b0e9616730600af41def76757cbbb124\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dfrcc" Aug 13 00:17:29.964803 kubelet[2757]: E0813 00:17:29.964766 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dfrcc_kube-system(80719b13-001f-4dd0-926a-1edc3ca41591)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dfrcc_kube-system(80719b13-001f-4dd0-926a-1edc3ca41591)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4b61c3a3ef5385377b214303cefadb7b0e9616730600af41def76757cbbb124\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dfrcc" podUID="80719b13-001f-4dd0-926a-1edc3ca41591" Aug 13 00:17:29.966490 containerd[1579]: time="2025-08-13T00:17:29.966419930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wkrpf,Uid:3471d1d5-f5a3-4dec-8445-1557d86e0087,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cfb54265eef00e268d018012b5546d0511e396973db121984e1377020df9533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.967187 kubelet[2757]: E0813 00:17:29.967145 2757 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cfb54265eef00e268d018012b5546d0511e396973db121984e1377020df9533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.967339 kubelet[2757]: E0813 00:17:29.967315 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cfb54265eef00e268d018012b5546d0511e396973db121984e1377020df9533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wkrpf" Aug 13 00:17:29.967517 kubelet[2757]: E0813 00:17:29.967414 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cfb54265eef00e268d018012b5546d0511e396973db121984e1377020df9533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wkrpf" Aug 13 00:17:29.967517 kubelet[2757]: E0813 00:17:29.967474 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wkrpf_calico-system(3471d1d5-f5a3-4dec-8445-1557d86e0087)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wkrpf_calico-system(3471d1d5-f5a3-4dec-8445-1557d86e0087)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cfb54265eef00e268d018012b5546d0511e396973db121984e1377020df9533\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wkrpf" podUID="3471d1d5-f5a3-4dec-8445-1557d86e0087" Aug 13 00:17:29.967873 containerd[1579]: time="2025-08-13T00:17:29.967812874Z" level=error msg="Failed to destroy network for sandbox \"80c14c798b32148a8d19d5a4f3ae33bebf62c53a58b036ce47574e7c9b01a296\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.971485 systemd[1]: run-netns-cni\x2d65650f86\x2d69c2\x2d10a7\x2de175\x2d23ec1095257a.mount: Deactivated successfully. Aug 13 00:17:29.972498 containerd[1579]: time="2025-08-13T00:17:29.972388612Z" level=error msg="Failed to destroy network for sandbox \"ce23a08f0340366e4207a26a21ad34f0aca6d007a6dfd5cdafcbc8ab5d46c77b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.974180 containerd[1579]: time="2025-08-13T00:17:29.974098371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-94mpd,Uid:5bcc2c49-47f9-4583-9324-281976f6433c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"80c14c798b32148a8d19d5a4f3ae33bebf62c53a58b036ce47574e7c9b01a296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.974527 kubelet[2757]: E0813 00:17:29.974484 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80c14c798b32148a8d19d5a4f3ae33bebf62c53a58b036ce47574e7c9b01a296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.974600 kubelet[2757]: E0813 00:17:29.974558 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80c14c798b32148a8d19d5a4f3ae33bebf62c53a58b036ce47574e7c9b01a296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-94mpd" Aug 13 00:17:29.974627 kubelet[2757]: E0813 00:17:29.974599 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80c14c798b32148a8d19d5a4f3ae33bebf62c53a58b036ce47574e7c9b01a296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-94mpd" Aug 13 00:17:29.974852 kubelet[2757]: E0813 00:17:29.974719 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-94mpd_calico-system(5bcc2c49-47f9-4583-9324-281976f6433c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-94mpd_calico-system(5bcc2c49-47f9-4583-9324-281976f6433c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80c14c798b32148a8d19d5a4f3ae33bebf62c53a58b036ce47574e7c9b01a296\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-94mpd" podUID="5bcc2c49-47f9-4583-9324-281976f6433c" Aug 13 00:17:29.975355 containerd[1579]: time="2025-08-13T00:17:29.975315164Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-8489667c94-t6zfc,Uid:6c80d32a-9cf7-4576-9c40-eec184d75b4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce23a08f0340366e4207a26a21ad34f0aca6d007a6dfd5cdafcbc8ab5d46c77b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.975794 kubelet[2757]: E0813 00:17:29.975742 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce23a08f0340366e4207a26a21ad34f0aca6d007a6dfd5cdafcbc8ab5d46c77b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.975794 kubelet[2757]: E0813 00:17:29.975807 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce23a08f0340366e4207a26a21ad34f0aca6d007a6dfd5cdafcbc8ab5d46c77b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8489667c94-t6zfc" Aug 13 00:17:29.975794 kubelet[2757]: E0813 00:17:29.975834 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce23a08f0340366e4207a26a21ad34f0aca6d007a6dfd5cdafcbc8ab5d46c77b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8489667c94-t6zfc" Aug 13 00:17:29.976975 kubelet[2757]: E0813 00:17:29.976926 2757 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8489667c94-t6zfc_calico-system(6c80d32a-9cf7-4576-9c40-eec184d75b4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8489667c94-t6zfc_calico-system(6c80d32a-9cf7-4576-9c40-eec184d75b4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce23a08f0340366e4207a26a21ad34f0aca6d007a6dfd5cdafcbc8ab5d46c77b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8489667c94-t6zfc" podUID="6c80d32a-9cf7-4576-9c40-eec184d75b4d" Aug 13 00:17:29.992076 containerd[1579]: time="2025-08-13T00:17:29.991999038Z" level=error msg="Failed to destroy network for sandbox \"a531bcc55766638c41985408660bd5c92056f5f0f6fbd086e3932e16680296d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.993495 containerd[1579]: time="2025-08-13T00:17:29.993445522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84464dd98-fq74k,Uid:2820e684-a527-4c3d-a8a0-b491fc1ad579,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a531bcc55766638c41985408660bd5c92056f5f0f6fbd086e3932e16680296d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.993870 kubelet[2757]: E0813 00:17:29.993799 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a531bcc55766638c41985408660bd5c92056f5f0f6fbd086e3932e16680296d8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:29.993870 kubelet[2757]: E0813 00:17:29.993870 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a531bcc55766638c41985408660bd5c92056f5f0f6fbd086e3932e16680296d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84464dd98-fq74k" Aug 13 00:17:29.994107 kubelet[2757]: E0813 00:17:29.993893 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a531bcc55766638c41985408660bd5c92056f5f0f6fbd086e3932e16680296d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84464dd98-fq74k" Aug 13 00:17:29.994107 kubelet[2757]: E0813 00:17:29.993944 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84464dd98-fq74k_calico-apiserver(2820e684-a527-4c3d-a8a0-b491fc1ad579)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84464dd98-fq74k_calico-apiserver(2820e684-a527-4c3d-a8a0-b491fc1ad579)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a531bcc55766638c41985408660bd5c92056f5f0f6fbd086e3932e16680296d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84464dd98-fq74k" podUID="2820e684-a527-4c3d-a8a0-b491fc1ad579" Aug 13 
00:17:30.045060 containerd[1579]: time="2025-08-13T00:17:30.044960262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b795fc7dc-4z8w4,Uid:decc63a8-ef98-4b0e-8f42-4be53f0c43dd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36d99e1d444bbbb9759ac007724d02ad9692d8b2dddb87fa1be7f2d62273e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:30.045389 kubelet[2757]: E0813 00:17:30.045325 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36d99e1d444bbbb9759ac007724d02ad9692d8b2dddb87fa1be7f2d62273e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:30.045447 kubelet[2757]: E0813 00:17:30.045422 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36d99e1d444bbbb9759ac007724d02ad9692d8b2dddb87fa1be7f2d62273e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b795fc7dc-4z8w4" Aug 13 00:17:30.045486 kubelet[2757]: E0813 00:17:30.045461 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36d99e1d444bbbb9759ac007724d02ad9692d8b2dddb87fa1be7f2d62273e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-b795fc7dc-4z8w4" Aug 13 00:17:30.045581 kubelet[2757]: E0813 00:17:30.045547 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b795fc7dc-4z8w4_calico-system(decc63a8-ef98-4b0e-8f42-4be53f0c43dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b795fc7dc-4z8w4_calico-system(decc63a8-ef98-4b0e-8f42-4be53f0c43dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f36d99e1d444bbbb9759ac007724d02ad9692d8b2dddb87fa1be7f2d62273e68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b795fc7dc-4z8w4" podUID="decc63a8-ef98-4b0e-8f42-4be53f0c43dd" Aug 13 00:17:30.912498 systemd[1]: run-netns-cni\x2df24c0a51\x2db964\x2d971c\x2d55bf\x2d941f9585f9c1.mount: Deactivated successfully. Aug 13 00:17:30.912636 systemd[1]: run-netns-cni\x2dcd7c34f3\x2d369f\x2d1666\x2d2d09\x2d2af4f85340c1.mount: Deactivated successfully. Aug 13 00:17:30.912762 systemd[1]: run-netns-cni\x2db93229a3\x2db243\x2dc8fc\x2d94fc\x2d36a18b80919c.mount: Deactivated successfully. Aug 13 00:17:39.612361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3126900669.mount: Deactivated successfully. 
Aug 13 00:17:41.581841 kubelet[2757]: E0813 00:17:41.581717 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:41.582394 containerd[1579]: time="2025-08-13T00:17:41.581848707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wkrpf,Uid:3471d1d5-f5a3-4dec-8445-1557d86e0087,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:41.582394 containerd[1579]: time="2025-08-13T00:17:41.581904282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84464dd98-24855,Uid:6c2d4e97-fe00-43c2-a642-bfd9b285ffd6,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:17:41.582394 containerd[1579]: time="2025-08-13T00:17:41.582030829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84464dd98-fq74k,Uid:2820e684-a527-4c3d-a8a0-b491fc1ad579,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:17:41.582394 containerd[1579]: time="2025-08-13T00:17:41.582368712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dfrcc,Uid:80719b13-001f-4dd0-926a-1edc3ca41591,Namespace:kube-system,Attempt:0,}" Aug 13 00:17:42.368195 kubelet[2757]: I0813 00:17:42.368123 2757 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:17:42.368529 kubelet[2757]: E0813 00:17:42.368500 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:42.581294 containerd[1579]: time="2025-08-13T00:17:42.581239999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-94mpd,Uid:5bcc2c49-47f9-4583-9324-281976f6433c,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:42.709157 kubelet[2757]: E0813 00:17:42.709039 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:42.865396 containerd[1579]: time="2025-08-13T00:17:42.865324423Z" level=error msg="Failed to destroy network for sandbox \"27016037c6555e91770e18aad5a69cd9e16d3ba880777acf89139eb48c851eee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:42.867882 systemd[1]: run-netns-cni\x2d4b666f0b\x2dd927\x2d410e\x2d9e5a\x2d5e294247d810.mount: Deactivated successfully. Aug 13 00:17:43.001445 containerd[1579]: time="2025-08-13T00:17:43.001287111Z" level=error msg="Failed to destroy network for sandbox \"1c68d1c599c944acb979abdf5af2fcbe67e2d56d5d06ecc95595a19c0ca5ed1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.273488 containerd[1579]: time="2025-08-13T00:17:43.273339101Z" level=error msg="Failed to destroy network for sandbox \"41a98014a03077407a386f5817b9406f66636000404ee0a91324993634165e27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.384335 containerd[1579]: time="2025-08-13T00:17:43.384273164Z" level=error msg="Failed to destroy network for sandbox \"3ec998311664d988f9202ad57806ba10d73479c03b0702c49364ba3eda72b7c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.388001 systemd[1]: run-netns-cni\x2d23e1a386\x2da9e2\x2d5f6f\x2df6ac\x2dfa9b5706fee5.mount: Deactivated successfully. 
Aug 13 00:17:43.388108 systemd[1]: run-netns-cni\x2d0a97e21f\x2d719f\x2d596f\x2d8d8b\x2da7383978857c.mount: Deactivated successfully. Aug 13 00:17:43.388177 systemd[1]: run-netns-cni\x2d1b3b7e3e\x2d524a\x2d6fc8\x2dcca4\x2de351dcac9c6e.mount: Deactivated successfully. Aug 13 00:17:43.436278 containerd[1579]: time="2025-08-13T00:17:43.436227397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:43.445734 containerd[1579]: time="2025-08-13T00:17:43.445612371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wkrpf,Uid:3471d1d5-f5a3-4dec-8445-1557d86e0087,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27016037c6555e91770e18aad5a69cd9e16d3ba880777acf89139eb48c851eee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.453898 kubelet[2757]: E0813 00:17:43.453833 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27016037c6555e91770e18aad5a69cd9e16d3ba880777acf89139eb48c851eee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.453980 kubelet[2757]: E0813 00:17:43.453918 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27016037c6555e91770e18aad5a69cd9e16d3ba880777acf89139eb48c851eee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wkrpf" Aug 13 
00:17:43.453980 kubelet[2757]: E0813 00:17:43.453953 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27016037c6555e91770e18aad5a69cd9e16d3ba880777acf89139eb48c851eee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wkrpf" Aug 13 00:17:43.454058 kubelet[2757]: E0813 00:17:43.454023 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wkrpf_calico-system(3471d1d5-f5a3-4dec-8445-1557d86e0087)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wkrpf_calico-system(3471d1d5-f5a3-4dec-8445-1557d86e0087)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27016037c6555e91770e18aad5a69cd9e16d3ba880777acf89139eb48c851eee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wkrpf" podUID="3471d1d5-f5a3-4dec-8445-1557d86e0087" Aug 13 00:17:43.581527 containerd[1579]: time="2025-08-13T00:17:43.581238180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b795fc7dc-4z8w4,Uid:decc63a8-ef98-4b0e-8f42-4be53f0c43dd,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:43.581948 containerd[1579]: time="2025-08-13T00:17:43.581827005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84464dd98-fq74k,Uid:2820e684-a527-4c3d-a8a0-b491fc1ad579,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c68d1c599c944acb979abdf5af2fcbe67e2d56d5d06ecc95595a19c0ca5ed1c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.582364 kubelet[2757]: E0813 00:17:43.582184 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c68d1c599c944acb979abdf5af2fcbe67e2d56d5d06ecc95595a19c0ca5ed1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.582364 kubelet[2757]: E0813 00:17:43.582263 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c68d1c599c944acb979abdf5af2fcbe67e2d56d5d06ecc95595a19c0ca5ed1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84464dd98-fq74k" Aug 13 00:17:43.582364 kubelet[2757]: E0813 00:17:43.582287 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c68d1c599c944acb979abdf5af2fcbe67e2d56d5d06ecc95595a19c0ca5ed1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84464dd98-fq74k" Aug 13 00:17:43.582567 kubelet[2757]: E0813 00:17:43.582487 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84464dd98-fq74k_calico-apiserver(2820e684-a527-4c3d-a8a0-b491fc1ad579)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84464dd98-fq74k_calico-apiserver(2820e684-a527-4c3d-a8a0-b491fc1ad579)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"1c68d1c599c944acb979abdf5af2fcbe67e2d56d5d06ecc95595a19c0ca5ed1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84464dd98-fq74k" podUID="2820e684-a527-4c3d-a8a0-b491fc1ad579" Aug 13 00:17:43.728358 containerd[1579]: time="2025-08-13T00:17:43.728302556Z" level=error msg="Failed to destroy network for sandbox \"420e7dce549d53b8e66525a9fae161c7dd92c42d3a1cb89e006a24ae2398aef8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.731255 systemd[1]: run-netns-cni\x2d1984167a\x2d41fe\x2d4092\x2d1fb7\x2da04092824de4.mount: Deactivated successfully. Aug 13 00:17:43.751443 containerd[1579]: time="2025-08-13T00:17:43.751378039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84464dd98-24855,Uid:6c2d4e97-fe00-43c2-a642-bfd9b285ffd6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a98014a03077407a386f5817b9406f66636000404ee0a91324993634165e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.751717 kubelet[2757]: E0813 00:17:43.751638 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a98014a03077407a386f5817b9406f66636000404ee0a91324993634165e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 
00:17:43.752090 kubelet[2757]: E0813 00:17:43.751758 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a98014a03077407a386f5817b9406f66636000404ee0a91324993634165e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84464dd98-24855" Aug 13 00:17:43.752090 kubelet[2757]: E0813 00:17:43.751790 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a98014a03077407a386f5817b9406f66636000404ee0a91324993634165e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84464dd98-24855" Aug 13 00:17:43.752090 kubelet[2757]: E0813 00:17:43.751875 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84464dd98-24855_calico-apiserver(6c2d4e97-fe00-43c2-a642-bfd9b285ffd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84464dd98-24855_calico-apiserver(6c2d4e97-fe00-43c2-a642-bfd9b285ffd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41a98014a03077407a386f5817b9406f66636000404ee0a91324993634165e27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84464dd98-24855" podUID="6c2d4e97-fe00-43c2-a642-bfd9b285ffd6" Aug 13 00:17:43.755852 containerd[1579]: time="2025-08-13T00:17:43.755796508Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-dfrcc,Uid:80719b13-001f-4dd0-926a-1edc3ca41591,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec998311664d988f9202ad57806ba10d73479c03b0702c49364ba3eda72b7c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.756068 kubelet[2757]: E0813 00:17:43.756015 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec998311664d988f9202ad57806ba10d73479c03b0702c49364ba3eda72b7c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.756120 kubelet[2757]: E0813 00:17:43.756094 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec998311664d988f9202ad57806ba10d73479c03b0702c49364ba3eda72b7c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dfrcc" Aug 13 00:17:43.756157 kubelet[2757]: E0813 00:17:43.756119 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec998311664d988f9202ad57806ba10d73479c03b0702c49364ba3eda72b7c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dfrcc" Aug 13 00:17:43.756211 kubelet[2757]: E0813 00:17:43.756180 2757 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dfrcc_kube-system(80719b13-001f-4dd0-926a-1edc3ca41591)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dfrcc_kube-system(80719b13-001f-4dd0-926a-1edc3ca41591)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ec998311664d988f9202ad57806ba10d73479c03b0702c49364ba3eda72b7c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dfrcc" podUID="80719b13-001f-4dd0-926a-1edc3ca41591" Aug 13 00:17:43.759130 containerd[1579]: time="2025-08-13T00:17:43.759001880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 00:17:43.766724 containerd[1579]: time="2025-08-13T00:17:43.764360952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-94mpd,Uid:5bcc2c49-47f9-4583-9324-281976f6433c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"420e7dce549d53b8e66525a9fae161c7dd92c42d3a1cb89e006a24ae2398aef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.766891 kubelet[2757]: E0813 00:17:43.764611 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420e7dce549d53b8e66525a9fae161c7dd92c42d3a1cb89e006a24ae2398aef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.766891 kubelet[2757]: E0813 00:17:43.764729 2757 kuberuntime_sandbox.go:70] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420e7dce549d53b8e66525a9fae161c7dd92c42d3a1cb89e006a24ae2398aef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-94mpd" Aug 13 00:17:43.766891 kubelet[2757]: E0813 00:17:43.764758 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420e7dce549d53b8e66525a9fae161c7dd92c42d3a1cb89e006a24ae2398aef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-94mpd" Aug 13 00:17:43.767004 kubelet[2757]: E0813 00:17:43.764821 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-94mpd_calico-system(5bcc2c49-47f9-4583-9324-281976f6433c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-94mpd_calico-system(5bcc2c49-47f9-4583-9324-281976f6433c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"420e7dce549d53b8e66525a9fae161c7dd92c42d3a1cb89e006a24ae2398aef8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-94mpd" podUID="5bcc2c49-47f9-4583-9324-281976f6433c" Aug 13 00:17:43.771589 containerd[1579]: time="2025-08-13T00:17:43.770771387Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:43.809693 containerd[1579]: 
time="2025-08-13T00:17:43.809601692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:43.810556 containerd[1579]: time="2025-08-13T00:17:43.810509655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 14.129969174s" Aug 13 00:17:43.810627 containerd[1579]: time="2025-08-13T00:17:43.810549860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 00:17:43.847428 containerd[1579]: time="2025-08-13T00:17:43.847391306Z" level=info msg="CreateContainer within sandbox \"172dc71b33ca06dd2905b8d379d72da2aa6029099ae5caaa3369408787f6e506\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:17:43.847851 containerd[1579]: time="2025-08-13T00:17:43.847810832Z" level=error msg="Failed to destroy network for sandbox \"2d79102785b0fa52dc6c024e5340959fcc8cf9e74b4db209f57ea97fb13fdf2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.850327 containerd[1579]: time="2025-08-13T00:17:43.850295894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b795fc7dc-4z8w4,Uid:decc63a8-ef98-4b0e-8f42-4be53f0c43dd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d79102785b0fa52dc6c024e5340959fcc8cf9e74b4db209f57ea97fb13fdf2e\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.850524 kubelet[2757]: E0813 00:17:43.850482 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d79102785b0fa52dc6c024e5340959fcc8cf9e74b4db209f57ea97fb13fdf2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:43.850593 kubelet[2757]: E0813 00:17:43.850554 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d79102785b0fa52dc6c024e5340959fcc8cf9e74b4db209f57ea97fb13fdf2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b795fc7dc-4z8w4" Aug 13 00:17:43.850593 kubelet[2757]: E0813 00:17:43.850584 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d79102785b0fa52dc6c024e5340959fcc8cf9e74b4db209f57ea97fb13fdf2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b795fc7dc-4z8w4" Aug 13 00:17:43.850715 kubelet[2757]: E0813 00:17:43.850677 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b795fc7dc-4z8w4_calico-system(decc63a8-ef98-4b0e-8f42-4be53f0c43dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-b795fc7dc-4z8w4_calico-system(decc63a8-ef98-4b0e-8f42-4be53f0c43dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d79102785b0fa52dc6c024e5340959fcc8cf9e74b4db209f57ea97fb13fdf2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b795fc7dc-4z8w4" podUID="decc63a8-ef98-4b0e-8f42-4be53f0c43dd" Aug 13 00:17:43.864122 containerd[1579]: time="2025-08-13T00:17:43.864063608Z" level=info msg="Container e4d655c830e59076316cb1c5b2d02b49da0df003276f9ccc29657d0226ae8b27: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:17:43.877342 containerd[1579]: time="2025-08-13T00:17:43.877290028Z" level=info msg="CreateContainer within sandbox \"172dc71b33ca06dd2905b8d379d72da2aa6029099ae5caaa3369408787f6e506\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e4d655c830e59076316cb1c5b2d02b49da0df003276f9ccc29657d0226ae8b27\"" Aug 13 00:17:43.877845 containerd[1579]: time="2025-08-13T00:17:43.877812417Z" level=info msg="StartContainer for \"e4d655c830e59076316cb1c5b2d02b49da0df003276f9ccc29657d0226ae8b27\"" Aug 13 00:17:43.879144 containerd[1579]: time="2025-08-13T00:17:43.879117165Z" level=info msg="connecting to shim e4d655c830e59076316cb1c5b2d02b49da0df003276f9ccc29657d0226ae8b27" address="unix:///run/containerd/s/c0d390c841f5bfff6f7ce16d3bba2cd6e8d9fc8bd5c330b52a616653f924e87b" protocol=ttrpc version=3 Aug 13 00:17:43.911849 systemd[1]: Started cri-containerd-e4d655c830e59076316cb1c5b2d02b49da0df003276f9ccc29657d0226ae8b27.scope - libcontainer container e4d655c830e59076316cb1c5b2d02b49da0df003276f9ccc29657d0226ae8b27. 
Aug 13 00:17:43.973860 containerd[1579]: time="2025-08-13T00:17:43.973815084Z" level=info msg="StartContainer for \"e4d655c830e59076316cb1c5b2d02b49da0df003276f9ccc29657d0226ae8b27\" returns successfully" Aug 13 00:17:44.078350 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:17:44.079165 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 00:17:44.388509 systemd[1]: run-netns-cni\x2db1b32f3f\x2d1af7\x2dc2b7\x2d2834\x2d848e7b4d58cd.mount: Deactivated successfully. Aug 13 00:17:44.580764 kubelet[2757]: E0813 00:17:44.580677 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:44.581622 containerd[1579]: time="2025-08-13T00:17:44.581250380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v8jth,Uid:08cdaf86-0825-4379-a4b9-e5cd5589ebda,Namespace:kube-system,Attempt:0,}" Aug 13 00:17:44.581622 containerd[1579]: time="2025-08-13T00:17:44.581372069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8489667c94-t6zfc,Uid:6c80d32a-9cf7-4576-9c40-eec184d75b4d,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:44.823942 containerd[1579]: time="2025-08-13T00:17:44.823781054Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4d655c830e59076316cb1c5b2d02b49da0df003276f9ccc29657d0226ae8b27\" id:\"c833b7332d19b2ea5cac61fec0a7d92ac059ea4419c43d089d884e2542d727e0\" pid:4110 exit_status:1 exited_at:{seconds:1755044264 nanos:823397705}" Aug 13 00:17:44.984998 kubelet[2757]: I0813 00:17:44.984887 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-km9sc" podStartSLOduration=2.00742597 podStartE2EDuration="27.984847485s" podCreationTimestamp="2025-08-13 00:17:17 +0000 UTC" firstStartedPulling="2025-08-13 00:17:17.834757853 +0000 UTC m=+18.359307710" 
lastFinishedPulling="2025-08-13 00:17:43.812179368 +0000 UTC m=+44.336729225" observedRunningTime="2025-08-13 00:17:44.983748795 +0000 UTC m=+45.508298672" watchObservedRunningTime="2025-08-13 00:17:44.984847485 +0000 UTC m=+45.509397342" Aug 13 00:17:45.808959 containerd[1579]: time="2025-08-13T00:17:45.808747620Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4d655c830e59076316cb1c5b2d02b49da0df003276f9ccc29657d0226ae8b27\" id:\"2e6729784b296367edd8e74ed4cd81011769086cb0900f453099c55e8167e7cf\" pid:4190 exit_status:1 exited_at:{seconds:1755044265 nanos:808349671}" Aug 13 00:17:46.362034 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:59078.service - OpenSSH per-connection server daemon (10.0.0.1:59078). Aug 13 00:17:46.461137 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 59078 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA Aug 13 00:17:46.462962 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:17:46.469891 systemd-logind[1557]: New session 8 of user core. Aug 13 00:17:46.474832 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:17:46.644936 systemd-networkd[1496]: cali8f492bdc3c0: Link UP Aug 13 00:17:46.646063 systemd-networkd[1496]: cali8f492bdc3c0: Gained carrier Aug 13 00:17:46.668792 sshd[4217]: Connection closed by 10.0.0.1 port 59078 Aug 13 00:17:46.669148 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Aug 13 00:17:46.673575 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:59078.service: Deactivated successfully. Aug 13 00:17:46.675685 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:17:46.676583 systemd-logind[1557]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:17:46.678084 systemd-logind[1557]: Removed session 8. 
Aug 13 00:17:46.803807 containerd[1579]: 2025-08-13 00:17:45.242 [INFO][4134] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:17:46.803807 containerd[1579]: 2025-08-13 00:17:45.303 [INFO][4134] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8489667c94--t6zfc-eth0 whisker-8489667c94- calico-system 6c80d32a-9cf7-4576-9c40-eec184d75b4d 916 0 2025-08-13 00:17:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8489667c94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8489667c94-t6zfc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8f492bdc3c0 [] [] }} ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Namespace="calico-system" Pod="whisker-8489667c94-t6zfc" WorkloadEndpoint="localhost-k8s-whisker--8489667c94--t6zfc-" Aug 13 00:17:46.803807 containerd[1579]: 2025-08-13 00:17:45.304 [INFO][4134] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Namespace="calico-system" Pod="whisker-8489667c94-t6zfc" WorkloadEndpoint="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:46.803807 containerd[1579]: 2025-08-13 00:17:45.408 [INFO][4164] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" HandleID="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Workload="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:46.804116 containerd[1579]: 2025-08-13 00:17:45.410 [INFO][4164] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" 
HandleID="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Workload="localhost-k8s-whisker--8489667c94--t6zfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8489667c94-t6zfc", "timestamp":"2025-08-13 00:17:45.408705271 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:17:46.804116 containerd[1579]: 2025-08-13 00:17:45.411 [INFO][4164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:46.804116 containerd[1579]: 2025-08-13 00:17:45.411 [INFO][4164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:46.804116 containerd[1579]: 2025-08-13 00:17:45.411 [INFO][4164] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:17:46.804116 containerd[1579]: 2025-08-13 00:17:45.490 [INFO][4164] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" host="localhost" Aug 13 00:17:46.804116 containerd[1579]: 2025-08-13 00:17:45.730 [INFO][4164] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:17:46.804116 containerd[1579]: 2025-08-13 00:17:45.964 [INFO][4164] ipam/ipam.go 543: Ran out of existing affine blocks for host host="localhost" Aug 13 00:17:46.804116 containerd[1579]: 2025-08-13 00:17:45.975 [INFO][4164] ipam/ipam.go 560: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="localhost" Aug 13 00:17:46.804116 containerd[1579]: 2025-08-13 00:17:45.977 [INFO][4164] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.88.128/26 Aug 13 00:17:46.804116 containerd[1579]: 2025-08-13 00:17:45.977 [INFO][4164] ipam/ipam.go 572: Found unclaimed block host="localhost" subnet=192.168.88.128/26 Aug 13 00:17:46.804434 containerd[1579]: 2025-08-13 00:17:45.977 [INFO][4164] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 Aug 13 00:17:46.804434 containerd[1579]: 2025-08-13 00:17:45.985 [INFO][4164] ipam/ipam_block_reader_writer.go 186: Block affinity already exists, getting existing affinity host="localhost" subnet=192.168.88.128/26 Aug 13 00:17:46.804434 containerd[1579]: 2025-08-13 00:17:45.988 [INFO][4164] ipam/ipam_block_reader_writer.go 194: Got existing affinity host="localhost" subnet=192.168.88.128/26 Aug 13 00:17:46.804434 containerd[1579]: 2025-08-13 00:17:45.988 [INFO][4164] ipam/ipam_block_reader_writer.go 198: Marking existing affinity with current state pending as pending host="localhost" subnet=192.168.88.128/26 Aug 13 00:17:46.804434 containerd[1579]: 2025-08-13 00:17:45.990 [INFO][4164] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:17:46.804434 containerd[1579]: 2025-08-13 00:17:45.992 [INFO][4164] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.88.128/26 host="localhost" Aug 13 00:17:46.804434 containerd[1579]: 2025-08-13 00:17:45.999 [INFO][4164] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.88.128/26 host="localhost" Aug 13 00:17:46.804434 containerd[1579]: 2025-08-13 00:17:46.002 [INFO][4164] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.88.128/26 host="localhost" Aug 13 00:17:46.804434 containerd[1579]: 2025-08-13 00:17:46.002 [INFO][4164] 
ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="localhost" subnet=192.168.88.128/26 Aug 13 00:17:46.804434 containerd[1579]: 2025-08-13 00:17:46.055 [INFO][4164] ipam/ipam_block_reader_writer.go 231: The block already exists, getting it from data store affinityType="host" host="localhost" subnet=192.168.88.128/26 Aug 13 00:17:46.804434 containerd[1579]: 2025-08-13 00:17:46.060 [INFO][4164] ipam/ipam_block_reader_writer.go 247: Block is already claimed by this host, confirm the affinity affinityType="host" host="localhost" subnet=192.168.88.128/26 Aug 13 00:17:46.804434 containerd[1579]: 2025-08-13 00:17:46.060 [INFO][4164] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="localhost" subnet=192.168.88.128/26 Aug 13 00:17:46.804806 containerd[1579]: 2025-08-13 00:17:46.083 [ERROR][4164] ipam/customresource.go 184: Error updating resource Key=BlockAffinity(localhost-192-168-88-128-26) Name="localhost-192-168-88-128-26" Resource="BlockAffinities" Value=&v3.BlockAffinity{TypeMeta:v1.TypeMeta{Kind:"BlockAffinity", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-192-168-88-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.BlockAffinitySpec{State:"confirmed", Node:"localhost", Type:"host", CIDR:"192.168.88.128/26", Deleted:"false"}} error=Operation cannot be fulfilled on blockaffinities.crd.projectcalico.org "localhost-192-168-88-128-26": the object has been modified; please apply your changes to the latest version and try again Aug 13 00:17:46.804806 containerd[1579]: 2025-08-13 00:17:46.086 [INFO][4164] 
ipam/ipam_block_reader_writer.go 292: Affinity is already confirmed host="localhost" subnet=192.168.88.128/26 Aug 13 00:17:46.804806 containerd[1579]: 2025-08-13 00:17:46.086 [INFO][4164] ipam/ipam.go 607: Block '192.168.88.128/26' has 64 free ips which is more than 1 ips required. host="localhost" subnet=192.168.88.128/26 Aug 13 00:17:46.804806 containerd[1579]: 2025-08-13 00:17:46.086 [INFO][4164] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" host="localhost" Aug 13 00:17:46.804806 containerd[1579]: 2025-08-13 00:17:46.126 [INFO][4164] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2 Aug 13 00:17:46.804806 containerd[1579]: 2025-08-13 00:17:46.145 [INFO][4164] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" host="localhost" Aug 13 00:17:46.805030 containerd[1579]: 2025-08-13 00:17:46.148 [ERROR][4164] ipam/customresource.go 184: Error updating resource Key=IPAMBlock(192-168-88-128-26) Name="192-168-88-128-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-88-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.88.128/26", Affinity:(*string)(0xc0005305e0), Allocations:[]*int{(*int)(0xc00068d498), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), 
(*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc00004f760), AttrSecondary:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8489667c94-t6zfc", "timestamp":"2025-08-13 00:17:45.408705271 +0000 UTC"}}}, SequenceNumber:0x185b2b76176f0978, SequenceNumberForAllocation:map[string]uint64{"0":0x185b2b76176f0977}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-88-128-26": the object has been modified; please apply your changes to the latest version and try again Aug 13 00:17:46.805030 containerd[1579]: 2025-08-13 00:17:46.148 [INFO][4164] ipam/ipam.go 1247: Failed to update block block=192.168.88.128/26 error=update conflict: IPAMBlock(192-168-88-128-26) handle="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" host="localhost" Aug 13 00:17:46.805030 containerd[1579]: 2025-08-13 
00:17:46.360 [INFO][4164] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" host="localhost" Aug 13 00:17:46.805030 containerd[1579]: 2025-08-13 00:17:46.362 [INFO][4164] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2 Aug 13 00:17:46.805030 containerd[1579]: 2025-08-13 00:17:46.549 [INFO][4164] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" host="localhost" Aug 13 00:17:46.805030 containerd[1579]: 2025-08-13 00:17:46.609 [INFO][4164] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" host="localhost" Aug 13 00:17:46.805030 containerd[1579]: 2025-08-13 00:17:46.609 [INFO][4164] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" host="localhost" Aug 13 00:17:46.805030 containerd[1579]: 2025-08-13 00:17:46.609 [INFO][4164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:17:46.805030 containerd[1579]: 2025-08-13 00:17:46.609 [INFO][4164] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" HandleID="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Workload="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:46.805030 containerd[1579]: 2025-08-13 00:17:46.617 [INFO][4134] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Namespace="calico-system" Pod="whisker-8489667c94-t6zfc" WorkloadEndpoint="localhost-k8s-whisker--8489667c94--t6zfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8489667c94--t6zfc-eth0", GenerateName:"whisker-8489667c94-", Namespace:"calico-system", SelfLink:"", UID:"6c80d32a-9cf7-4576-9c40-eec184d75b4d", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8489667c94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8489667c94-t6zfc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8f492bdc3c0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:46.805425 containerd[1579]: 2025-08-13 00:17:46.618 [INFO][4134] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Namespace="calico-system" Pod="whisker-8489667c94-t6zfc" WorkloadEndpoint="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:46.805425 containerd[1579]: 2025-08-13 00:17:46.618 [INFO][4134] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f492bdc3c0 ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Namespace="calico-system" Pod="whisker-8489667c94-t6zfc" WorkloadEndpoint="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:46.805425 containerd[1579]: 2025-08-13 00:17:46.652 [INFO][4134] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Namespace="calico-system" Pod="whisker-8489667c94-t6zfc" WorkloadEndpoint="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:46.805425 containerd[1579]: 2025-08-13 00:17:46.653 [INFO][4134] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Namespace="calico-system" Pod="whisker-8489667c94-t6zfc" WorkloadEndpoint="localhost-k8s-whisker--8489667c94--t6zfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8489667c94--t6zfc-eth0", GenerateName:"whisker-8489667c94-", Namespace:"calico-system", SelfLink:"", UID:"6c80d32a-9cf7-4576-9c40-eec184d75b4d", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 20, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8489667c94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2", Pod:"whisker-8489667c94-t6zfc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8f492bdc3c0", MAC:"52:47:9f:6d:90:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:46.805425 containerd[1579]: 2025-08-13 00:17:46.798 [INFO][4134] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Namespace="calico-system" Pod="whisker-8489667c94-t6zfc" WorkloadEndpoint="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:46.846115 systemd-networkd[1496]: cali0c2e6832153: Link UP Aug 13 00:17:46.849924 systemd-networkd[1496]: cali0c2e6832153: Gained carrier Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:45.194 [INFO][4124] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:45.302 [INFO][4124] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--v8jth-eth0 coredns-674b8bbfcf- kube-system 08cdaf86-0825-4379-a4b9-e5cd5589ebda 827 0 2025-08-13 00:17:05 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-v8jth eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0c2e6832153 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8jth" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v8jth-" Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:45.303 [INFO][4124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8jth" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v8jth-eth0" Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:45.409 [INFO][4162] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" HandleID="k8s-pod-network.671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" Workload="localhost-k8s-coredns--674b8bbfcf--v8jth-eth0" Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:45.411 [INFO][4162] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" HandleID="k8s-pod-network.671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" Workload="localhost-k8s-coredns--674b8bbfcf--v8jth-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035e1a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-v8jth", "timestamp":"2025-08-13 00:17:45.409372908 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:45.411 [INFO][4162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:46.609 [INFO][4162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:46.609 [INFO][4162] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:46.622 [INFO][4162] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" host="localhost" Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:46.630 [INFO][4162] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:46.634 [INFO][4162] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:46.637 [INFO][4162] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:46.640 [INFO][4162] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:46.640 [INFO][4162] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" host="localhost" Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:46.642 [INFO][4162] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9 Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:46.796 [INFO][4162] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" host="localhost" Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:46.831 [INFO][4162] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" host="localhost" Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:46.831 [INFO][4162] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" host="localhost" Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:46.831 [INFO][4162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:46.889774 containerd[1579]: 2025-08-13 00:17:46.831 [INFO][4162] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" HandleID="k8s-pod-network.671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" Workload="localhost-k8s-coredns--674b8bbfcf--v8jth-eth0" Aug 13 00:17:46.891053 containerd[1579]: 2025-08-13 00:17:46.840 [INFO][4124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8jth" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v8jth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--v8jth-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"08cdaf86-0825-4379-a4b9-e5cd5589ebda", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 5, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-v8jth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c2e6832153", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:46.891053 containerd[1579]: 2025-08-13 00:17:46.840 [INFO][4124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8jth" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v8jth-eth0" Aug 13 00:17:46.891053 containerd[1579]: 2025-08-13 00:17:46.840 [INFO][4124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c2e6832153 ContainerID="671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8jth" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v8jth-eth0" Aug 13 00:17:46.891053 containerd[1579]: 2025-08-13 00:17:46.850 [INFO][4124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8jth" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v8jth-eth0" Aug 13 00:17:46.891053 containerd[1579]: 2025-08-13 00:17:46.852 [INFO][4124] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8jth" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v8jth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--v8jth-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"08cdaf86-0825-4379-a4b9-e5cd5589ebda", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9", Pod:"coredns-674b8bbfcf-v8jth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali0c2e6832153", MAC:"26:cd:3c:86:ed:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:46.891053 containerd[1579]: 2025-08-13 00:17:46.883 [INFO][4124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" Namespace="kube-system" Pod="coredns-674b8bbfcf-v8jth" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v8jth-eth0" Aug 13 00:17:47.201609 containerd[1579]: time="2025-08-13T00:17:47.201551423Z" level=info msg="connecting to shim 5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" address="unix:///run/containerd/s/9d16eea02e7c8ad7930cd8e2b998955d5393e787c64d9c488253b21a265275d4" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:17:47.230899 systemd[1]: Started cri-containerd-5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2.scope - libcontainer container 5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2. 
Aug 13 00:17:47.235250 containerd[1579]: time="2025-08-13T00:17:47.235187809Z" level=info msg="connecting to shim 671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9" address="unix:///run/containerd/s/cd910e5ff004b1287fc814bdeffae5bd1552beb5c6bd40654c049d02a2d5df39" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:17:47.250812 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:17:47.261240 systemd[1]: Started cri-containerd-671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9.scope - libcontainer container 671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9. Aug 13 00:17:47.275717 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:17:47.323222 containerd[1579]: time="2025-08-13T00:17:47.323151853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8489667c94-t6zfc,Uid:6c80d32a-9cf7-4576-9c40-eec184d75b4d,Namespace:calico-system,Attempt:0,} returns sandbox id \"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\"" Aug 13 00:17:47.330001 containerd[1579]: time="2025-08-13T00:17:47.329957241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:17:47.350592 containerd[1579]: time="2025-08-13T00:17:47.350549102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v8jth,Uid:08cdaf86-0825-4379-a4b9-e5cd5589ebda,Namespace:kube-system,Attempt:0,} returns sandbox id \"671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9\"" Aug 13 00:17:47.352475 kubelet[2757]: E0813 00:17:47.352439 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:47.456802 containerd[1579]: time="2025-08-13T00:17:47.456621838Z" level=info msg="CreateContainer within sandbox 
\"671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:17:47.619920 containerd[1579]: time="2025-08-13T00:17:47.619842502Z" level=info msg="Container 69ec5f2119e7b504d5d8fc7187b611a2fd4d9fc2f33b7f7abd6bb40b865b7fce: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:17:47.645886 containerd[1579]: time="2025-08-13T00:17:47.645809728Z" level=info msg="CreateContainer within sandbox \"671cae33788b2de255c64925e962626812be3e6ecd7491e6733392722ddc25c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69ec5f2119e7b504d5d8fc7187b611a2fd4d9fc2f33b7f7abd6bb40b865b7fce\"" Aug 13 00:17:47.646683 containerd[1579]: time="2025-08-13T00:17:47.646412648Z" level=info msg="StartContainer for \"69ec5f2119e7b504d5d8fc7187b611a2fd4d9fc2f33b7f7abd6bb40b865b7fce\"" Aug 13 00:17:47.647509 containerd[1579]: time="2025-08-13T00:17:47.647487346Z" level=info msg="connecting to shim 69ec5f2119e7b504d5d8fc7187b611a2fd4d9fc2f33b7f7abd6bb40b865b7fce" address="unix:///run/containerd/s/cd910e5ff004b1287fc814bdeffae5bd1552beb5c6bd40654c049d02a2d5df39" protocol=ttrpc version=3 Aug 13 00:17:47.671881 systemd[1]: Started cri-containerd-69ec5f2119e7b504d5d8fc7187b611a2fd4d9fc2f33b7f7abd6bb40b865b7fce.scope - libcontainer container 69ec5f2119e7b504d5d8fc7187b611a2fd4d9fc2f33b7f7abd6bb40b865b7fce. Aug 13 00:17:47.711380 systemd-networkd[1496]: cali8f492bdc3c0: Gained IPv6LL Aug 13 00:17:47.781250 containerd[1579]: time="2025-08-13T00:17:47.781206421Z" level=info msg="StartContainer for \"69ec5f2119e7b504d5d8fc7187b611a2fd4d9fc2f33b7f7abd6bb40b865b7fce\" returns successfully" Aug 13 00:17:47.876166 systemd-networkd[1496]: vxlan.calico: Link UP Aug 13 00:17:47.876175 systemd-networkd[1496]: vxlan.calico: Gained carrier Aug 13 00:17:48.160649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983033125.mount: Deactivated successfully. 
Aug 13 00:17:48.413859 systemd-networkd[1496]: cali0c2e6832153: Gained IPv6LL Aug 13 00:17:48.732364 kubelet[2757]: E0813 00:17:48.732139 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:48.825380 kubelet[2757]: I0813 00:17:48.825310 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-v8jth" podStartSLOduration=43.825288477 podStartE2EDuration="43.825288477s" podCreationTimestamp="2025-08-13 00:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:17:48.824692862 +0000 UTC m=+49.349242719" watchObservedRunningTime="2025-08-13 00:17:48.825288477 +0000 UTC m=+49.349838334" Aug 13 00:17:49.206727 kernel: hrtimer: interrupt took 4490269 ns Aug 13 00:17:49.734310 kubelet[2757]: E0813 00:17:49.734252 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:49.886070 systemd-networkd[1496]: vxlan.calico: Gained IPv6LL Aug 13 00:17:51.451048 containerd[1579]: time="2025-08-13T00:17:51.449146584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:51.451048 containerd[1579]: time="2025-08-13T00:17:51.451012385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 00:17:51.462816 containerd[1579]: time="2025-08-13T00:17:51.461772367Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:51.465032 containerd[1579]: 
time="2025-08-13T00:17:51.464990834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:51.469272 containerd[1579]: time="2025-08-13T00:17:51.466940988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 4.136944812s" Aug 13 00:17:51.469272 containerd[1579]: time="2025-08-13T00:17:51.468695976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 00:17:51.488532 containerd[1579]: time="2025-08-13T00:17:51.488478416Z" level=info msg="CreateContainer within sandbox \"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:17:51.524557 containerd[1579]: time="2025-08-13T00:17:51.524484264Z" level=info msg="Container 04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:17:51.554114 containerd[1579]: time="2025-08-13T00:17:51.554047265Z" level=info msg="CreateContainer within sandbox \"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\"" Aug 13 00:17:51.555757 containerd[1579]: time="2025-08-13T00:17:51.554846930Z" level=info msg="StartContainer for \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\"" Aug 13 00:17:51.556342 containerd[1579]: 
time="2025-08-13T00:17:51.556308575Z" level=info msg="connecting to shim 04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff" address="unix:///run/containerd/s/9d16eea02e7c8ad7930cd8e2b998955d5393e787c64d9c488253b21a265275d4" protocol=ttrpc version=3 Aug 13 00:17:51.620026 systemd[1]: Started cri-containerd-04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff.scope - libcontainer container 04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff. Aug 13 00:17:51.685073 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:34802.service - OpenSSH per-connection server daemon (10.0.0.1:34802). Aug 13 00:17:51.780793 containerd[1579]: time="2025-08-13T00:17:51.780647670Z" level=info msg="StartContainer for \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\" returns successfully" Aug 13 00:17:51.783613 containerd[1579]: time="2025-08-13T00:17:51.783570851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:17:51.920084 sshd[4610]: Accepted publickey for core from 10.0.0.1 port 34802 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA Aug 13 00:17:51.920278 sshd-session[4610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:17:51.929450 systemd-logind[1557]: New session 9 of user core. Aug 13 00:17:51.936990 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:17:52.091308 sshd[4620]: Connection closed by 10.0.0.1 port 34802 Aug 13 00:17:52.091730 sshd-session[4610]: pam_unix(sshd:session): session closed for user core Aug 13 00:17:52.096107 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:34802.service: Deactivated successfully. Aug 13 00:17:52.098247 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:17:52.099856 systemd-logind[1557]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:17:52.100826 systemd-logind[1557]: Removed session 9. 
Aug 13 00:17:54.146362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176555335.mount: Deactivated successfully. Aug 13 00:17:54.175066 containerd[1579]: time="2025-08-13T00:17:54.174995724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:54.176001 containerd[1579]: time="2025-08-13T00:17:54.175944150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 00:17:54.177715 containerd[1579]: time="2025-08-13T00:17:54.177618719Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:54.180446 containerd[1579]: time="2025-08-13T00:17:54.180412130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:54.181162 containerd[1579]: time="2025-08-13T00:17:54.181134675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.397527834s" Aug 13 00:17:54.181215 containerd[1579]: time="2025-08-13T00:17:54.181165404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 00:17:54.202482 containerd[1579]: time="2025-08-13T00:17:54.202409443Z" level=info msg="CreateContainer within sandbox 
\"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:17:54.213531 containerd[1579]: time="2025-08-13T00:17:54.213459063Z" level=info msg="Container 54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:17:54.293198 containerd[1579]: time="2025-08-13T00:17:54.293136632Z" level=info msg="CreateContainer within sandbox \"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\"" Aug 13 00:17:54.300429 containerd[1579]: time="2025-08-13T00:17:54.293967494Z" level=info msg="StartContainer for \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\"" Aug 13 00:17:54.300429 containerd[1579]: time="2025-08-13T00:17:54.295534486Z" level=info msg="connecting to shim 54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625" address="unix:///run/containerd/s/9d16eea02e7c8ad7930cd8e2b998955d5393e787c64d9c488253b21a265275d4" protocol=ttrpc version=3 Aug 13 00:17:54.329844 systemd[1]: Started cri-containerd-54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625.scope - libcontainer container 54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625. 
Aug 13 00:17:54.545326 containerd[1579]: time="2025-08-13T00:17:54.545145530Z" level=info msg="StartContainer for \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\" returns successfully" Aug 13 00:17:54.581006 containerd[1579]: time="2025-08-13T00:17:54.580961858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wkrpf,Uid:3471d1d5-f5a3-4dec-8445-1557d86e0087,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:54.734251 systemd-networkd[1496]: cali4b866684d01: Link UP Aug 13 00:17:54.735312 systemd-networkd[1496]: cali4b866684d01: Gained carrier Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.666 [INFO][4684] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wkrpf-eth0 csi-node-driver- calico-system 3471d1d5-f5a3-4dec-8445-1557d86e0087 712 0 2025-08-13 00:17:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wkrpf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4b866684d01 [] [] }} ContainerID="c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" Namespace="calico-system" Pod="csi-node-driver-wkrpf" WorkloadEndpoint="localhost-k8s-csi--node--driver--wkrpf-" Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.666 [INFO][4684] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" Namespace="calico-system" Pod="csi-node-driver-wkrpf" WorkloadEndpoint="localhost-k8s-csi--node--driver--wkrpf-eth0" Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.695 [INFO][4699] ipam/ipam_plugin.go 225: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" HandleID="k8s-pod-network.c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" Workload="localhost-k8s-csi--node--driver--wkrpf-eth0" Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.695 [INFO][4699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" HandleID="k8s-pod-network.c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" Workload="localhost-k8s-csi--node--driver--wkrpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b6b00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wkrpf", "timestamp":"2025-08-13 00:17:54.695736888 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.696 [INFO][4699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.696 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.696 [INFO][4699] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.703 [INFO][4699] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" host="localhost" Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.708 [INFO][4699] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.711 [INFO][4699] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.714 [INFO][4699] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.716 [INFO][4699] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.716 [INFO][4699] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" host="localhost" Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.718 [INFO][4699] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.722 [INFO][4699] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" host="localhost" Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.727 [INFO][4699] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" host="localhost" Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.727 [INFO][4699] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" host="localhost" Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.727 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:54.758827 containerd[1579]: 2025-08-13 00:17:54.727 [INFO][4699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" HandleID="k8s-pod-network.c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" Workload="localhost-k8s-csi--node--driver--wkrpf-eth0" Aug 13 00:17:54.759620 containerd[1579]: 2025-08-13 00:17:54.731 [INFO][4684] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" Namespace="calico-system" Pod="csi-node-driver-wkrpf" WorkloadEndpoint="localhost-k8s-csi--node--driver--wkrpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wkrpf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3471d1d5-f5a3-4dec-8445-1557d86e0087", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wkrpf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b866684d01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:54.759620 containerd[1579]: 2025-08-13 00:17:54.731 [INFO][4684] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" Namespace="calico-system" Pod="csi-node-driver-wkrpf" WorkloadEndpoint="localhost-k8s-csi--node--driver--wkrpf-eth0" Aug 13 00:17:54.759620 containerd[1579]: 2025-08-13 00:17:54.731 [INFO][4684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b866684d01 ContainerID="c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" Namespace="calico-system" Pod="csi-node-driver-wkrpf" WorkloadEndpoint="localhost-k8s-csi--node--driver--wkrpf-eth0" Aug 13 00:17:54.759620 containerd[1579]: 2025-08-13 00:17:54.734 [INFO][4684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" Namespace="calico-system" Pod="csi-node-driver-wkrpf" WorkloadEndpoint="localhost-k8s-csi--node--driver--wkrpf-eth0" Aug 13 00:17:54.759620 containerd[1579]: 2025-08-13 00:17:54.735 [INFO][4684] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" 
Namespace="calico-system" Pod="csi-node-driver-wkrpf" WorkloadEndpoint="localhost-k8s-csi--node--driver--wkrpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wkrpf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3471d1d5-f5a3-4dec-8445-1557d86e0087", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b", Pod:"csi-node-driver-wkrpf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b866684d01", MAC:"e2:13:41:d0:40:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:54.759620 containerd[1579]: 2025-08-13 00:17:54.754 [INFO][4684] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" Namespace="calico-system" Pod="csi-node-driver-wkrpf" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--wkrpf-eth0" Aug 13 00:17:54.794476 containerd[1579]: time="2025-08-13T00:17:54.794422444Z" level=info msg="connecting to shim c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b" address="unix:///run/containerd/s/7b3c4dd9e59c6c9c18ba403e94ee63bd10aab51ce250ad08a0d7e7156d882873" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:17:54.828915 systemd[1]: Started cri-containerd-c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b.scope - libcontainer container c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b. Aug 13 00:17:54.836818 containerd[1579]: time="2025-08-13T00:17:54.836764474Z" level=info msg="StopContainer for \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\" with timeout 30 (s)" Aug 13 00:17:54.836983 containerd[1579]: time="2025-08-13T00:17:54.836916385Z" level=info msg="StopContainer for \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\" with timeout 30 (s)" Aug 13 00:17:54.842910 containerd[1579]: time="2025-08-13T00:17:54.842827710Z" level=info msg="Stop container \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\" with signal terminated" Aug 13 00:17:54.847321 containerd[1579]: time="2025-08-13T00:17:54.847249813Z" level=info msg="Stop container \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\" with signal terminated" Aug 13 00:17:54.863053 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:17:54.863552 systemd[1]: cri-containerd-54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625.scope: Deactivated successfully. 
Aug 13 00:17:54.869256 containerd[1579]: time="2025-08-13T00:17:54.869207259Z" level=info msg="TaskExit event in podsandbox handler container_id:\"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\" id:\"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\" pid:4663 exit_status:2 exited_at:{seconds:1755044274 nanos:868777356}" Aug 13 00:17:54.869457 containerd[1579]: time="2025-08-13T00:17:54.869396552Z" level=info msg="received exit event container_id:\"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\" id:\"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\" pid:4663 exit_status:2 exited_at:{seconds:1755044274 nanos:868777356}" Aug 13 00:17:54.888193 systemd[1]: cri-containerd-04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff.scope: Deactivated successfully. Aug 13 00:17:54.890320 containerd[1579]: time="2025-08-13T00:17:54.890277997Z" level=info msg="received exit event container_id:\"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\" id:\"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\" pid:4596 exited_at:{seconds:1755044274 nanos:889979095}" Aug 13 00:17:54.891107 containerd[1579]: time="2025-08-13T00:17:54.891074113Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\" id:\"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\" pid:4596 exited_at:{seconds:1755044274 nanos:889979095}" Aug 13 00:17:54.896142 containerd[1579]: time="2025-08-13T00:17:54.895882174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wkrpf,Uid:3471d1d5-f5a3-4dec-8445-1557d86e0087,Namespace:calico-system,Attempt:0,} returns sandbox id \"c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b\"" Aug 13 00:17:54.899802 containerd[1579]: time="2025-08-13T00:17:54.899664912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 
00:17:54.908176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625-rootfs.mount: Deactivated successfully. Aug 13 00:17:54.922168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff-rootfs.mount: Deactivated successfully. Aug 13 00:17:55.113569 containerd[1579]: time="2025-08-13T00:17:55.113052022Z" level=info msg="StopContainer for \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\" returns successfully" Aug 13 00:17:55.114047 containerd[1579]: time="2025-08-13T00:17:55.114002292Z" level=info msg="StopContainer for \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\" returns successfully" Aug 13 00:17:55.114546 containerd[1579]: time="2025-08-13T00:17:55.114501778Z" level=info msg="StopPodSandbox for \"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\"" Aug 13 00:17:55.128778 containerd[1579]: time="2025-08-13T00:17:55.128721135Z" level=info msg="Container to stop \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:17:55.128778 containerd[1579]: time="2025-08-13T00:17:55.128748116Z" level=info msg="Container to stop \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:17:55.137462 systemd[1]: cri-containerd-5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2.scope: Deactivated successfully. 
Aug 13 00:17:55.138855 containerd[1579]: time="2025-08-13T00:17:55.138810563Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\" id:\"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\" pid:4295 exit_status:137 exited_at:{seconds:1755044275 nanos:137731426}" Aug 13 00:17:55.169862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2-rootfs.mount: Deactivated successfully. Aug 13 00:17:55.177795 containerd[1579]: time="2025-08-13T00:17:55.175494153Z" level=info msg="shim disconnected" id=5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2 namespace=k8s.io Aug 13 00:17:55.177795 containerd[1579]: time="2025-08-13T00:17:55.175524731Z" level=warning msg="cleaning up after shim disconnected" id=5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2 namespace=k8s.io Aug 13 00:17:55.179161 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2-shm.mount: Deactivated successfully. 
Aug 13 00:17:55.207441 containerd[1579]: time="2025-08-13T00:17:55.175532185Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:17:55.215531 containerd[1579]: time="2025-08-13T00:17:55.215465096Z" level=info msg="received exit event sandbox_id:\"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\" exit_status:137 exited_at:{seconds:1755044275 nanos:137731426}" Aug 13 00:17:55.232439 kubelet[2757]: I0813 00:17:55.231137 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8489667c94-t6zfc" podStartSLOduration=28.378055853 podStartE2EDuration="35.231111575s" podCreationTimestamp="2025-08-13 00:17:20 +0000 UTC" firstStartedPulling="2025-08-13 00:17:47.328842165 +0000 UTC m=+47.853392022" lastFinishedPulling="2025-08-13 00:17:54.181897887 +0000 UTC m=+54.706447744" observedRunningTime="2025-08-13 00:17:54.853064042 +0000 UTC m=+55.377613899" watchObservedRunningTime="2025-08-13 00:17:55.231111575 +0000 UTC m=+55.755661432" Aug 13 00:17:55.234400 systemd-networkd[1496]: cali8f492bdc3c0: Link DOWN Aug 13 00:17:55.234409 systemd-networkd[1496]: cali8f492bdc3c0: Lost carrier Aug 13 00:17:55.316324 containerd[1579]: 2025-08-13 00:17:55.230 [INFO][4841] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Aug 13 00:17:55.316324 containerd[1579]: 2025-08-13 00:17:55.231 [INFO][4841] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" iface="eth0" netns="/var/run/netns/cni-a05ae86b-905a-5a65-0603-e4680623e673" Aug 13 00:17:55.316324 containerd[1579]: 2025-08-13 00:17:55.231 [INFO][4841] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" iface="eth0" netns="/var/run/netns/cni-a05ae86b-905a-5a65-0603-e4680623e673" Aug 13 00:17:55.316324 containerd[1579]: 2025-08-13 00:17:55.240 [INFO][4841] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" after=8.954315ms iface="eth0" netns="/var/run/netns/cni-a05ae86b-905a-5a65-0603-e4680623e673" Aug 13 00:17:55.316324 containerd[1579]: 2025-08-13 00:17:55.240 [INFO][4841] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Aug 13 00:17:55.316324 containerd[1579]: 2025-08-13 00:17:55.240 [INFO][4841] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Aug 13 00:17:55.316324 containerd[1579]: 2025-08-13 00:17:55.270 [INFO][4866] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" HandleID="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Workload="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:55.316324 containerd[1579]: 2025-08-13 00:17:55.270 [INFO][4866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:55.316324 containerd[1579]: 2025-08-13 00:17:55.270 [INFO][4866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:17:55.316324 containerd[1579]: 2025-08-13 00:17:55.306 [INFO][4866] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" HandleID="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Workload="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:55.316324 containerd[1579]: 2025-08-13 00:17:55.307 [INFO][4866] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" HandleID="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Workload="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:55.316324 containerd[1579]: 2025-08-13 00:17:55.309 [INFO][4866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:55.316324 containerd[1579]: 2025-08-13 00:17:55.313 [INFO][4841] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Aug 13 00:17:55.326928 containerd[1579]: time="2025-08-13T00:17:55.326853957Z" level=info msg="TearDown network for sandbox \"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\" successfully" Aug 13 00:17:55.326928 containerd[1579]: time="2025-08-13T00:17:55.326911978Z" level=info msg="StopPodSandbox for \"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\" returns successfully" Aug 13 00:17:55.488337 kubelet[2757]: I0813 00:17:55.488206 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c80d32a-9cf7-4576-9c40-eec184d75b4d-whisker-backend-key-pair\") pod \"6c80d32a-9cf7-4576-9c40-eec184d75b4d\" (UID: \"6c80d32a-9cf7-4576-9c40-eec184d75b4d\") " Aug 13 00:17:55.488337 kubelet[2757]: I0813 00:17:55.488282 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c80d32a-9cf7-4576-9c40-eec184d75b4d-whisker-ca-bundle\") pod \"6c80d32a-9cf7-4576-9c40-eec184d75b4d\" (UID: \"6c80d32a-9cf7-4576-9c40-eec184d75b4d\") " Aug 13 00:17:55.488337 kubelet[2757]: I0813 00:17:55.488308 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xhzw\" (UniqueName: \"kubernetes.io/projected/6c80d32a-9cf7-4576-9c40-eec184d75b4d-kube-api-access-2xhzw\") pod \"6c80d32a-9cf7-4576-9c40-eec184d75b4d\" (UID: \"6c80d32a-9cf7-4576-9c40-eec184d75b4d\") " Aug 13 00:17:55.489041 kubelet[2757]: I0813 00:17:55.488993 2757 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c80d32a-9cf7-4576-9c40-eec184d75b4d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6c80d32a-9cf7-4576-9c40-eec184d75b4d" (UID: "6c80d32a-9cf7-4576-9c40-eec184d75b4d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:17:55.492130 kubelet[2757]: I0813 00:17:55.492096 2757 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c80d32a-9cf7-4576-9c40-eec184d75b4d-kube-api-access-2xhzw" (OuterVolumeSpecName: "kube-api-access-2xhzw") pod "6c80d32a-9cf7-4576-9c40-eec184d75b4d" (UID: "6c80d32a-9cf7-4576-9c40-eec184d75b4d"). InnerVolumeSpecName "kube-api-access-2xhzw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:17:55.492199 kubelet[2757]: I0813 00:17:55.492125 2757 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c80d32a-9cf7-4576-9c40-eec184d75b4d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6c80d32a-9cf7-4576-9c40-eec184d75b4d" (UID: "6c80d32a-9cf7-4576-9c40-eec184d75b4d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:17:55.580349 containerd[1579]: time="2025-08-13T00:17:55.580295130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-94mpd,Uid:5bcc2c49-47f9-4583-9324-281976f6433c,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:55.587805 systemd[1]: Removed slice kubepods-besteffort-pod6c80d32a_9cf7_4576_9c40_eec184d75b4d.slice - libcontainer container kubepods-besteffort-pod6c80d32a_9cf7_4576_9c40_eec184d75b4d.slice. Aug 13 00:17:55.588553 kubelet[2757]: I0813 00:17:55.588529 2757 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c80d32a-9cf7-4576-9c40-eec184d75b4d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 13 00:17:55.588553 kubelet[2757]: I0813 00:17:55.588553 2757 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c80d32a-9cf7-4576-9c40-eec184d75b4d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 13 00:17:55.588641 kubelet[2757]: I0813 00:17:55.588563 2757 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2xhzw\" (UniqueName: \"kubernetes.io/projected/6c80d32a-9cf7-4576-9c40-eec184d75b4d-kube-api-access-2xhzw\") on node \"localhost\" DevicePath \"\"" Aug 13 00:17:55.832563 systemd[1]: run-netns-cni\x2da05ae86b\x2d905a\x2d5a65\x2d0603\x2de4680623e673.mount: Deactivated successfully. Aug 13 00:17:55.832704 systemd[1]: var-lib-kubelet-pods-6c80d32a\x2d9cf7\x2d4576\x2d9c40\x2deec184d75b4d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2xhzw.mount: Deactivated successfully. Aug 13 00:17:55.832816 systemd[1]: var-lib-kubelet-pods-6c80d32a\x2d9cf7\x2d4576\x2d9c40\x2deec184d75b4d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 13 00:17:55.838231 kubelet[2757]: I0813 00:17:55.838173 2757 scope.go:117] "RemoveContainer" containerID="54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625" Aug 13 00:17:55.840319 containerd[1579]: time="2025-08-13T00:17:55.840272088Z" level=info msg="RemoveContainer for \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\"" Aug 13 00:17:55.876461 systemd-networkd[1496]: cali4005bff9389: Link UP Aug 13 00:17:55.879255 systemd-networkd[1496]: cali4005bff9389: Gained carrier Aug 13 00:17:55.949538 containerd[1579]: time="2025-08-13T00:17:55.949477177Z" level=info msg="RemoveContainer for \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\" returns successfully" Aug 13 00:17:55.949850 kubelet[2757]: I0813 00:17:55.949802 2757 scope.go:117] "RemoveContainer" containerID="04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff" Aug 13 00:17:55.952154 containerd[1579]: time="2025-08-13T00:17:55.952084790Z" level=info msg="RemoveContainer for \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\"" Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.727 [INFO][4883] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--94mpd-eth0 goldmane-768f4c5c69- calico-system 5bcc2c49-47f9-4583-9324-281976f6433c 832 0 2025-08-13 00:17:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-94mpd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4005bff9389 [] [] }} ContainerID="721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" Namespace="calico-system" Pod="goldmane-768f4c5c69-94mpd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--94mpd-" Aug 13 00:17:56.001292 
containerd[1579]: 2025-08-13 00:17:55.727 [INFO][4883] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" Namespace="calico-system" Pod="goldmane-768f4c5c69-94mpd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--94mpd-eth0" Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.749 [INFO][4897] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" HandleID="k8s-pod-network.721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" Workload="localhost-k8s-goldmane--768f4c5c69--94mpd-eth0" Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.749 [INFO][4897] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" HandleID="k8s-pod-network.721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" Workload="localhost-k8s-goldmane--768f4c5c69--94mpd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-94mpd", "timestamp":"2025-08-13 00:17:55.749245926 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.749 [INFO][4897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.749 [INFO][4897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.749 [INFO][4897] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.755 [INFO][4897] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" host="localhost" Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.760 [INFO][4897] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.763 [INFO][4897] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.765 [INFO][4897] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.766 [INFO][4897] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.766 [INFO][4897] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" host="localhost" Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.768 [INFO][4897] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950 Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.830 [INFO][4897] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" host="localhost" Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.870 [INFO][4897] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" host="localhost" Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.870 [INFO][4897] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" host="localhost" Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.870 [INFO][4897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:56.001292 containerd[1579]: 2025-08-13 00:17:55.870 [INFO][4897] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" HandleID="k8s-pod-network.721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" Workload="localhost-k8s-goldmane--768f4c5c69--94mpd-eth0" Aug 13 00:17:56.001910 containerd[1579]: 2025-08-13 00:17:55.873 [INFO][4883] cni-plugin/k8s.go 418: Populated endpoint ContainerID="721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" Namespace="calico-system" Pod="goldmane-768f4c5c69-94mpd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--94mpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--94mpd-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"5bcc2c49-47f9-4583-9324-281976f6433c", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-94mpd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4005bff9389", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:56.001910 containerd[1579]: 2025-08-13 00:17:55.874 [INFO][4883] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" Namespace="calico-system" Pod="goldmane-768f4c5c69-94mpd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--94mpd-eth0" Aug 13 00:17:56.001910 containerd[1579]: 2025-08-13 00:17:55.874 [INFO][4883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4005bff9389 ContainerID="721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" Namespace="calico-system" Pod="goldmane-768f4c5c69-94mpd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--94mpd-eth0" Aug 13 00:17:56.001910 containerd[1579]: 2025-08-13 00:17:55.876 [INFO][4883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" Namespace="calico-system" Pod="goldmane-768f4c5c69-94mpd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--94mpd-eth0" Aug 13 00:17:56.001910 containerd[1579]: 2025-08-13 00:17:55.881 [INFO][4883] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" Namespace="calico-system" Pod="goldmane-768f4c5c69-94mpd" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--94mpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--94mpd-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"5bcc2c49-47f9-4583-9324-281976f6433c", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950", Pod:"goldmane-768f4c5c69-94mpd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4005bff9389", MAC:"8e:8f:08:13:4f:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:56.001910 containerd[1579]: 2025-08-13 00:17:55.996 [INFO][4883] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" Namespace="calico-system" Pod="goldmane-768f4c5c69-94mpd" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--94mpd-eth0" Aug 13 00:17:56.077409 containerd[1579]: time="2025-08-13T00:17:56.077312377Z" level=info msg="RemoveContainer for 
\"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\" returns successfully" Aug 13 00:17:56.077621 kubelet[2757]: I0813 00:17:56.077595 2757 scope.go:117] "RemoveContainer" containerID="54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625" Aug 13 00:17:56.077947 containerd[1579]: time="2025-08-13T00:17:56.077901414Z" level=error msg="ContainerStatus for \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\": not found" Aug 13 00:17:56.078096 kubelet[2757]: E0813 00:17:56.078070 2757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\": not found" containerID="54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625" Aug 13 00:17:56.078167 kubelet[2757]: I0813 00:17:56.078105 2757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625"} err="failed to get container status \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\": rpc error: code = NotFound desc = an error occurred when try to find container \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\": not found" Aug 13 00:17:56.078203 kubelet[2757]: I0813 00:17:56.078172 2757 scope.go:117] "RemoveContainer" containerID="04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff" Aug 13 00:17:56.078708 containerd[1579]: time="2025-08-13T00:17:56.078394758Z" level=error msg="ContainerStatus for \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\": not found" Aug 13 00:17:56.078757 kubelet[2757]: E0813 00:17:56.078554 2757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\": not found" containerID="04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff" Aug 13 00:17:56.078757 kubelet[2757]: I0813 00:17:56.078587 2757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff"} err="failed to get container status \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\": not found" Aug 13 00:17:56.078757 kubelet[2757]: I0813 00:17:56.078601 2757 scope.go:117] "RemoveContainer" containerID="54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625" Aug 13 00:17:56.078929 containerd[1579]: time="2025-08-13T00:17:56.078880608Z" level=error msg="ContainerStatus for \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\": not found" Aug 13 00:17:56.079348 kubelet[2757]: I0813 00:17:56.079199 2757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625"} err="failed to get container status \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\": rpc error: code = NotFound desc = an error occurred when try to find container \"54a35f52e6c16f589eebdd7eab1a9e2c336651934e63d72868a8779ace444625\": not found" Aug 13 
00:17:56.079348 kubelet[2757]: I0813 00:17:56.079225 2757 scope.go:117] "RemoveContainer" containerID="04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff" Aug 13 00:17:56.080019 containerd[1579]: time="2025-08-13T00:17:56.079978729Z" level=error msg="ContainerStatus for \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\": not found" Aug 13 00:17:56.080377 kubelet[2757]: I0813 00:17:56.080329 2757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff"} err="failed to get container status \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"04189b19163eceda7c9519a6f31f0a6e7a27f8127d8f8fb1de34e5cf385d09ff\": not found" Aug 13 00:17:56.106750 containerd[1579]: time="2025-08-13T00:17:56.106536462Z" level=info msg="connecting to shim 721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950" address="unix:///run/containerd/s/d2229f4e07286c8a71a1f8d9775654e63a69091ce786ec7543282bb60ea6e486" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:17:56.127689 systemd[1]: Created slice kubepods-besteffort-podec2490c1_08d7_4da8_b7e5_83a2e1baa03f.slice - libcontainer container kubepods-besteffort-podec2490c1_08d7_4da8_b7e5_83a2e1baa03f.slice. Aug 13 00:17:56.171931 systemd[1]: Started cri-containerd-721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950.scope - libcontainer container 721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950. 
Aug 13 00:17:56.191910 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 00:17:56.194898 kubelet[2757]: I0813 00:17:56.194867 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ec2490c1-08d7-4da8-b7e5-83a2e1baa03f-whisker-backend-key-pair\") pod \"whisker-b58978f58-xgcj8\" (UID: \"ec2490c1-08d7-4da8-b7e5-83a2e1baa03f\") " pod="calico-system/whisker-b58978f58-xgcj8"
Aug 13 00:17:56.194984 kubelet[2757]: I0813 00:17:56.194911 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrtd4\" (UniqueName: \"kubernetes.io/projected/ec2490c1-08d7-4da8-b7e5-83a2e1baa03f-kube-api-access-wrtd4\") pod \"whisker-b58978f58-xgcj8\" (UID: \"ec2490c1-08d7-4da8-b7e5-83a2e1baa03f\") " pod="calico-system/whisker-b58978f58-xgcj8"
Aug 13 00:17:56.194984 kubelet[2757]: I0813 00:17:56.194968 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec2490c1-08d7-4da8-b7e5-83a2e1baa03f-whisker-ca-bundle\") pod \"whisker-b58978f58-xgcj8\" (UID: \"ec2490c1-08d7-4da8-b7e5-83a2e1baa03f\") " pod="calico-system/whisker-b58978f58-xgcj8"
Aug 13 00:17:56.222438 containerd[1579]: time="2025-08-13T00:17:56.222399636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-94mpd,Uid:5bcc2c49-47f9-4583-9324-281976f6433c,Namespace:calico-system,Attempt:0,} returns sandbox id \"721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950\""
Aug 13 00:17:56.222815 systemd-networkd[1496]: cali4b866684d01: Gained IPv6LL
Aug 13 00:17:56.443040 containerd[1579]: time="2025-08-13T00:17:56.442681754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b58978f58-xgcj8,Uid:ec2490c1-08d7-4da8-b7e5-83a2e1baa03f,Namespace:calico-system,Attempt:0,}"
Aug 13 00:17:56.534752 systemd-networkd[1496]: cali510a44d77b6: Link UP
Aug 13 00:17:56.535440 systemd-networkd[1496]: cali510a44d77b6: Gained carrier
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.478 [INFO][4966] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--b58978f58--xgcj8-eth0 whisker-b58978f58- calico-system ec2490c1-08d7-4da8-b7e5-83a2e1baa03f 1057 0 2025-08-13 00:17:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b58978f58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-b58978f58-xgcj8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali510a44d77b6 [] [] }} ContainerID="12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" Namespace="calico-system" Pod="whisker-b58978f58-xgcj8" WorkloadEndpoint="localhost-k8s-whisker--b58978f58--xgcj8-"
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.478 [INFO][4966] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" Namespace="calico-system" Pod="whisker-b58978f58-xgcj8" WorkloadEndpoint="localhost-k8s-whisker--b58978f58--xgcj8-eth0"
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.501 [INFO][4980] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" HandleID="k8s-pod-network.12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" Workload="localhost-k8s-whisker--b58978f58--xgcj8-eth0"
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.501 [INFO][4980] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" HandleID="k8s-pod-network.12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" Workload="localhost-k8s-whisker--b58978f58--xgcj8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c65d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-b58978f58-xgcj8", "timestamp":"2025-08-13 00:17:56.501434427 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.501 [INFO][4980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.502 [INFO][4980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.502 [INFO][4980] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.508 [INFO][4980] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" host="localhost"
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.513 [INFO][4980] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.517 [INFO][4980] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.518 [INFO][4980] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.520 [INFO][4980] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.520 [INFO][4980] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" host="localhost"
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.521 [INFO][4980] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.525 [INFO][4980] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" host="localhost"
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.529 [INFO][4980] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" host="localhost"
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.529 [INFO][4980] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" host="localhost"
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.529 [INFO][4980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:17:56.549824 containerd[1579]: 2025-08-13 00:17:56.529 [INFO][4980] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" HandleID="k8s-pod-network.12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" Workload="localhost-k8s-whisker--b58978f58--xgcj8-eth0"
Aug 13 00:17:56.550607 containerd[1579]: 2025-08-13 00:17:56.532 [INFO][4966] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" Namespace="calico-system" Pod="whisker-b58978f58-xgcj8" WorkloadEndpoint="localhost-k8s-whisker--b58978f58--xgcj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b58978f58--xgcj8-eth0", GenerateName:"whisker-b58978f58-", Namespace:"calico-system", SelfLink:"", UID:"ec2490c1-08d7-4da8-b7e5-83a2e1baa03f", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b58978f58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-b58978f58-xgcj8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali510a44d77b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:17:56.550607 containerd[1579]: 2025-08-13 00:17:56.532 [INFO][4966] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" Namespace="calico-system" Pod="whisker-b58978f58-xgcj8" WorkloadEndpoint="localhost-k8s-whisker--b58978f58--xgcj8-eth0"
Aug 13 00:17:56.550607 containerd[1579]: 2025-08-13 00:17:56.533 [INFO][4966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali510a44d77b6 ContainerID="12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" Namespace="calico-system" Pod="whisker-b58978f58-xgcj8" WorkloadEndpoint="localhost-k8s-whisker--b58978f58--xgcj8-eth0"
Aug 13 00:17:56.550607 containerd[1579]: 2025-08-13 00:17:56.535 [INFO][4966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" Namespace="calico-system" Pod="whisker-b58978f58-xgcj8" WorkloadEndpoint="localhost-k8s-whisker--b58978f58--xgcj8-eth0"
Aug 13 00:17:56.550607 containerd[1579]: 2025-08-13 00:17:56.535 [INFO][4966] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" Namespace="calico-system" Pod="whisker-b58978f58-xgcj8" WorkloadEndpoint="localhost-k8s-whisker--b58978f58--xgcj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b58978f58--xgcj8-eth0", GenerateName:"whisker-b58978f58-", Namespace:"calico-system", SelfLink:"", UID:"ec2490c1-08d7-4da8-b7e5-83a2e1baa03f", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b58978f58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a", Pod:"whisker-b58978f58-xgcj8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali510a44d77b6", MAC:"32:b4:d0:c8:59:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:17:56.550607 containerd[1579]: 2025-08-13 00:17:56.546 [INFO][4966] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" Namespace="calico-system" Pod="whisker-b58978f58-xgcj8" WorkloadEndpoint="localhost-k8s-whisker--b58978f58--xgcj8-eth0"
Aug 13 00:17:56.578693 containerd[1579]: time="2025-08-13T00:17:56.578617852Z" level=info msg="connecting to shim 12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a" address="unix:///run/containerd/s/ee7fb1ddc5f712cca7f4fd0fd67285034f667bac4e889d3458cff0c44f283157" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:17:56.581109 containerd[1579]: time="2025-08-13T00:17:56.580990542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84464dd98-fq74k,Uid:2820e684-a527-4c3d-a8a0-b491fc1ad579,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 00:17:56.610948 systemd[1]: Started cri-containerd-12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a.scope - libcontainer container 12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a.
Aug 13 00:17:56.630169 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 00:17:56.837139 systemd-networkd[1496]: cali93cdb39c998: Link UP
Aug 13 00:17:56.838107 systemd-networkd[1496]: cali93cdb39c998: Gained carrier
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.620 [INFO][5021] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84464dd98--fq74k-eth0 calico-apiserver-84464dd98- calico-apiserver 2820e684-a527-4c3d-a8a0-b491fc1ad579 834 0 2025-08-13 00:17:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84464dd98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84464dd98-fq74k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali93cdb39c998 [] [] }} ContainerID="7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-fq74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--fq74k-"
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.620 [INFO][5021] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-fq74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--fq74k-eth0"
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.683 [INFO][5055] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" HandleID="k8s-pod-network.7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" Workload="localhost-k8s-calico--apiserver--84464dd98--fq74k-eth0"
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.683 [INFO][5055] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" HandleID="k8s-pod-network.7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" Workload="localhost-k8s-calico--apiserver--84464dd98--fq74k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c72d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84464dd98-fq74k", "timestamp":"2025-08-13 00:17:56.683003211 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.683 [INFO][5055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.683 [INFO][5055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.683 [INFO][5055] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.691 [INFO][5055] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" host="localhost"
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.695 [INFO][5055] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.702 [INFO][5055] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.705 [INFO][5055] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.707 [INFO][5055] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.707 [INFO][5055] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" host="localhost"
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.708 [INFO][5055] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.779 [INFO][5055] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" host="localhost"
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.797 [INFO][5055] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" host="localhost"
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.829 [INFO][5055] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" host="localhost"
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.829 [INFO][5055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:17:56.913123 containerd[1579]: 2025-08-13 00:17:56.829 [INFO][5055] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" HandleID="k8s-pod-network.7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" Workload="localhost-k8s-calico--apiserver--84464dd98--fq74k-eth0"
Aug 13 00:17:56.913946 containerd[1579]: 2025-08-13 00:17:56.832 [INFO][5021] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-fq74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--fq74k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84464dd98--fq74k-eth0", GenerateName:"calico-apiserver-84464dd98-", Namespace:"calico-apiserver", SelfLink:"", UID:"2820e684-a527-4c3d-a8a0-b491fc1ad579", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84464dd98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84464dd98-fq74k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali93cdb39c998", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:17:56.913946 containerd[1579]: 2025-08-13 00:17:56.832 [INFO][5021] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-fq74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--fq74k-eth0"
Aug 13 00:17:56.913946 containerd[1579]: 2025-08-13 00:17:56.833 [INFO][5021] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93cdb39c998 ContainerID="7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-fq74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--fq74k-eth0"
Aug 13 00:17:56.913946 containerd[1579]: 2025-08-13 00:17:56.837 [INFO][5021] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-fq74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--fq74k-eth0"
Aug 13 00:17:56.913946 containerd[1579]: 2025-08-13 00:17:56.837 [INFO][5021] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-fq74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--fq74k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84464dd98--fq74k-eth0", GenerateName:"calico-apiserver-84464dd98-", Namespace:"calico-apiserver", SelfLink:"", UID:"2820e684-a527-4c3d-a8a0-b491fc1ad579", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84464dd98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684", Pod:"calico-apiserver-84464dd98-fq74k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali93cdb39c998", MAC:"d6:43:5c:3d:12:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:17:56.913946 containerd[1579]: 2025-08-13 00:17:56.909 [INFO][5021] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-fq74k" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--fq74k-eth0"
Aug 13 00:17:57.003473 containerd[1579]: time="2025-08-13T00:17:57.003407517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b58978f58-xgcj8,Uid:ec2490c1-08d7-4da8-b7e5-83a2e1baa03f,Namespace:calico-system,Attempt:0,} returns sandbox id \"12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a\""
Aug 13 00:17:57.042615 containerd[1579]: time="2025-08-13T00:17:57.042566143Z" level=info msg="CreateContainer within sandbox \"12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Aug 13 00:17:57.068412 containerd[1579]: time="2025-08-13T00:17:57.068021861Z" level=info msg="Container 0593d6ceadc196dc5bf5d4b526b2f4cf1e52f35f5fcd8176cec7b85df195396e: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:17:57.079350 containerd[1579]: time="2025-08-13T00:17:57.079303078Z" level=info msg="CreateContainer within sandbox \"12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0593d6ceadc196dc5bf5d4b526b2f4cf1e52f35f5fcd8176cec7b85df195396e\""
Aug 13 00:17:57.082355 containerd[1579]: time="2025-08-13T00:17:57.082257448Z" level=info msg="StartContainer for \"0593d6ceadc196dc5bf5d4b526b2f4cf1e52f35f5fcd8176cec7b85df195396e\""
Aug 13 00:17:57.083096 containerd[1579]: time="2025-08-13T00:17:57.083059714Z" level=info msg="connecting to shim 7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684" address="unix:///run/containerd/s/b44c5d437ebbeb8c085ff87fd8f508db5d8c0bbdec91984c47e096b3631f12e2" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:17:57.084872 containerd[1579]: time="2025-08-13T00:17:57.084796084Z" level=info msg="connecting to shim 0593d6ceadc196dc5bf5d4b526b2f4cf1e52f35f5fcd8176cec7b85df195396e" address="unix:///run/containerd/s/ee7fb1ddc5f712cca7f4fd0fd67285034f667bac4e889d3458cff0c44f283157" protocol=ttrpc version=3
Aug 13 00:17:57.120829 systemd[1]: Started cri-containerd-0593d6ceadc196dc5bf5d4b526b2f4cf1e52f35f5fcd8176cec7b85df195396e.scope - libcontainer container 0593d6ceadc196dc5bf5d4b526b2f4cf1e52f35f5fcd8176cec7b85df195396e.
Aug 13 00:17:57.123398 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:34804.service - OpenSSH per-connection server daemon (10.0.0.1:34804).
Aug 13 00:17:57.135967 systemd[1]: Started cri-containerd-7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684.scope - libcontainer container 7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684.
Aug 13 00:17:57.154163 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 00:17:57.182470 sshd[5132]: Accepted publickey for core from 10.0.0.1 port 34804 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA
Aug 13 00:17:57.183702 sshd-session[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:17:57.192500 systemd-logind[1557]: New session 10 of user core.
Aug 13 00:17:57.198016 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 00:17:57.219551 containerd[1579]: time="2025-08-13T00:17:57.219502720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84464dd98-fq74k,Uid:2820e684-a527-4c3d-a8a0-b491fc1ad579,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684\""
Aug 13 00:17:57.224836 containerd[1579]: time="2025-08-13T00:17:57.224792608Z" level=info msg="StartContainer for \"0593d6ceadc196dc5bf5d4b526b2f4cf1e52f35f5fcd8176cec7b85df195396e\" returns successfully"
Aug 13 00:17:57.232075 containerd[1579]: time="2025-08-13T00:17:57.232030321Z" level=info msg="CreateContainer within sandbox \"12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Aug 13 00:17:57.245543 containerd[1579]: time="2025-08-13T00:17:57.245497748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:17:57.249029 containerd[1579]: time="2025-08-13T00:17:57.248995048Z" level=info msg="Container 9a9727b192a33782fc264b526c192289ec3352d1768dedcb949b14bdabcfd89d: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:17:57.251123 containerd[1579]: time="2025-08-13T00:17:57.251093732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190"
Aug 13 00:17:57.255756 containerd[1579]: time="2025-08-13T00:17:57.255726893Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:17:57.259077 containerd[1579]: time="2025-08-13T00:17:57.259042174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:17:57.260709 containerd[1579]: time="2025-08-13T00:17:57.260672954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.360946785s"
Aug 13 00:17:57.260709 containerd[1579]: time="2025-08-13T00:17:57.260705857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\""
Aug 13 00:17:57.263766 containerd[1579]: time="2025-08-13T00:17:57.263727736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Aug 13 00:17:57.264544 containerd[1579]: time="2025-08-13T00:17:57.264447493Z" level=info msg="CreateContainer within sandbox \"12a868d248930891435b3c61f83641b1b9be40edee97ce83122344a9b062f83a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9a9727b192a33782fc264b526c192289ec3352d1768dedcb949b14bdabcfd89d\""
Aug 13 00:17:57.265469 containerd[1579]: time="2025-08-13T00:17:57.265429852Z" level=info msg="StartContainer for \"9a9727b192a33782fc264b526c192289ec3352d1768dedcb949b14bdabcfd89d\""
Aug 13 00:17:57.267455 containerd[1579]: time="2025-08-13T00:17:57.267388869Z" level=info msg="CreateContainer within sandbox \"c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Aug 13 00:17:57.268144 containerd[1579]: time="2025-08-13T00:17:57.268098146Z" level=info msg="connecting to shim 9a9727b192a33782fc264b526c192289ec3352d1768dedcb949b14bdabcfd89d" address="unix:///run/containerd/s/ee7fb1ddc5f712cca7f4fd0fd67285034f667bac4e889d3458cff0c44f283157" protocol=ttrpc version=3
Aug 13 00:17:57.293815 systemd[1]: Started cri-containerd-9a9727b192a33782fc264b526c192289ec3352d1768dedcb949b14bdabcfd89d.scope - libcontainer container 9a9727b192a33782fc264b526c192289ec3352d1768dedcb949b14bdabcfd89d.
Aug 13 00:17:57.311183 containerd[1579]: time="2025-08-13T00:17:57.311094632Z" level=info msg="Container 8a71bd29291b48a1b5556f435b290354f9da8ca9cfbf500e3797eda157868a49: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:17:57.329291 containerd[1579]: time="2025-08-13T00:17:57.329084190Z" level=info msg="CreateContainer within sandbox \"c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8a71bd29291b48a1b5556f435b290354f9da8ca9cfbf500e3797eda157868a49\""
Aug 13 00:17:57.331311 containerd[1579]: time="2025-08-13T00:17:57.330385970Z" level=info msg="StartContainer for \"8a71bd29291b48a1b5556f435b290354f9da8ca9cfbf500e3797eda157868a49\""
Aug 13 00:17:57.332030 containerd[1579]: time="2025-08-13T00:17:57.331994166Z" level=info msg="connecting to shim 8a71bd29291b48a1b5556f435b290354f9da8ca9cfbf500e3797eda157868a49" address="unix:///run/containerd/s/7b3c4dd9e59c6c9c18ba403e94ee63bd10aab51ce250ad08a0d7e7156d882873" protocol=ttrpc version=3
Aug 13 00:17:57.347698 sshd[5163]: Connection closed by 10.0.0.1 port 34804
Aug 13 00:17:57.348864 sshd-session[5132]: pam_unix(sshd:session): session closed for user core
Aug 13 00:17:57.353997 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:34804.service: Deactivated successfully.
Aug 13 00:17:57.356986 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 00:17:57.357792 systemd-logind[1557]: Session 10 logged out. Waiting for processes to exit.
Aug 13 00:17:57.365120 containerd[1579]: time="2025-08-13T00:17:57.365081870Z" level=info msg="StartContainer for \"9a9727b192a33782fc264b526c192289ec3352d1768dedcb949b14bdabcfd89d\" returns successfully"
Aug 13 00:17:57.365904 systemd[1]: Started cri-containerd-8a71bd29291b48a1b5556f435b290354f9da8ca9cfbf500e3797eda157868a49.scope - libcontainer container 8a71bd29291b48a1b5556f435b290354f9da8ca9cfbf500e3797eda157868a49.
Aug 13 00:17:57.367441 systemd-logind[1557]: Removed session 10.
Aug 13 00:17:57.424426 containerd[1579]: time="2025-08-13T00:17:57.424302679Z" level=info msg="StartContainer for \"8a71bd29291b48a1b5556f435b290354f9da8ca9cfbf500e3797eda157868a49\" returns successfully"
Aug 13 00:17:57.581034 kubelet[2757]: E0813 00:17:57.580814 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:57.581488 containerd[1579]: time="2025-08-13T00:17:57.581051523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84464dd98-24855,Uid:6c2d4e97-fe00-43c2-a642-bfd9b285ffd6,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 00:17:57.582315 containerd[1579]: time="2025-08-13T00:17:57.582269533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dfrcc,Uid:80719b13-001f-4dd0-926a-1edc3ca41591,Namespace:kube-system,Attempt:0,}"
Aug 13 00:17:57.583561 kubelet[2757]: I0813 00:17:57.583497 2757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c80d32a-9cf7-4576-9c40-eec184d75b4d" path="/var/lib/kubelet/pods/6c80d32a-9cf7-4576-9c40-eec184d75b4d/volumes"
Aug 13 00:17:57.693904 systemd-networkd[1496]: cali4005bff9389: Gained IPv6LL
Aug 13 00:17:57.699242 systemd-networkd[1496]: cali9efa0752acc: Link UP
Aug 13 00:17:57.699862 systemd-networkd[1496]: cali9efa0752acc: Gained carrier
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.628 [INFO][5253] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84464dd98--24855-eth0 calico-apiserver-84464dd98- calico-apiserver 6c2d4e97-fe00-43c2-a642-bfd9b285ffd6 829 0 2025-08-13 00:17:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84464dd98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84464dd98-24855 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9efa0752acc [] [] }} ContainerID="093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-24855" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--24855-"
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.628 [INFO][5253] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-24855" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--24855-eth0"
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.657 [INFO][5277] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" HandleID="k8s-pod-network.093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" Workload="localhost-k8s-calico--apiserver--84464dd98--24855-eth0"
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.657 [INFO][5277] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" HandleID="k8s-pod-network.093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" Workload="localhost-k8s-calico--apiserver--84464dd98--24855-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84464dd98-24855", "timestamp":"2025-08-13 00:17:57.657567771 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.657 [INFO][5277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.657 [INFO][5277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.658 [INFO][5277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.664 [INFO][5277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" host="localhost"
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.670 [INFO][5277] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.675 [INFO][5277] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.677 [INFO][5277] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.679 [INFO][5277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.679 [INFO][5277] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" host="localhost"
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.680 [INFO][5277] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.684 [INFO][5277] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" host="localhost"
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.691 [INFO][5277] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" host="localhost"
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.691 [INFO][5277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" host="localhost"
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.691 [INFO][5277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:17:57.718172 containerd[1579]: 2025-08-13 00:17:57.691 [INFO][5277] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" HandleID="k8s-pod-network.093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" Workload="localhost-k8s-calico--apiserver--84464dd98--24855-eth0"
Aug 13 00:17:57.718895 containerd[1579]: 2025-08-13 00:17:57.696 [INFO][5253] cni-plugin/k8s.go 418: Populated endpoint ContainerID="093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-24855" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--24855-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84464dd98--24855-eth0", GenerateName:"calico-apiserver-84464dd98-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c2d4e97-fe00-43c2-a642-bfd9b285ffd6", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84464dd98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84464dd98-24855", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9efa0752acc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:17:57.718895 containerd[1579]: 2025-08-13 00:17:57.696 [INFO][5253] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-24855" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--24855-eth0"
Aug 13 00:17:57.718895 containerd[1579]: 2025-08-13 00:17:57.696 [INFO][5253] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9efa0752acc ContainerID="093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-24855" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--24855-eth0"
Aug 13 00:17:57.718895 containerd[1579]: 2025-08-13 00:17:57.700 [INFO][5253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-24855" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--24855-eth0"
Aug 13 00:17:57.718895 containerd[1579]: 2025-08-13 00:17:57.700 [INFO][5253] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-24855" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--24855-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84464dd98--24855-eth0", GenerateName:"calico-apiserver-84464dd98-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c2d4e97-fe00-43c2-a642-bfd9b285ffd6", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84464dd98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e", Pod:"calico-apiserver-84464dd98-24855", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9efa0752acc", MAC:"62:5c:7a:fc:55:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:17:57.718895 containerd[1579]: 2025-08-13 00:17:57.713 [INFO][5253] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" Namespace="calico-apiserver" Pod="calico-apiserver-84464dd98-24855" WorkloadEndpoint="localhost-k8s-calico--apiserver--84464dd98--24855-eth0"
Aug 13 00:17:57.743476 containerd[1579]: time="2025-08-13T00:17:57.743413633Z" level=info msg="connecting to shim 093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e" address="unix:///run/containerd/s/cac176ab6a541920b5d25e180b2c790cad3a889a06b02047d2af5b374aeb3779" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:17:57.769907 systemd[1]: Started cri-containerd-093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e.scope - libcontainer container 093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e.
Aug 13 00:17:57.789933 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 00:17:57.805088 systemd-networkd[1496]: cali432bcbd8995: Link UP
Aug 13 00:17:57.805289 systemd-networkd[1496]: cali432bcbd8995: Gained carrier
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.632 [INFO][5254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--dfrcc-eth0 coredns-674b8bbfcf- kube-system 80719b13-001f-4dd0-926a-1edc3ca41591 830 0 2025-08-13 00:17:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-dfrcc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali432bcbd8995 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-dfrcc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dfrcc-"
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.632 [INFO][5254] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-dfrcc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dfrcc-eth0"
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.671 [INFO][5283] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" HandleID="k8s-pod-network.9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" Workload="localhost-k8s-coredns--674b8bbfcf--dfrcc-eth0"
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.671 [INFO][5283] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" HandleID="k8s-pod-network.9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" Workload="localhost-k8s-coredns--674b8bbfcf--dfrcc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7450), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-dfrcc", "timestamp":"2025-08-13 00:17:57.671581204 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.671 [INFO][5283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.691 [INFO][5283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.691 [INFO][5283] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.765 [INFO][5283] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" host="localhost"
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.771 [INFO][5283] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.778 [INFO][5283] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.780 [INFO][5283] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.783 [INFO][5283] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.783 [INFO][5283] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" host="localhost"
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.784 [INFO][5283] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.787 [INFO][5283] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" host="localhost"
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.795 [INFO][5283] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" host="localhost"
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.795 [INFO][5283] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" host="localhost"
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.795 [INFO][5283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:17:57.824234 containerd[1579]: 2025-08-13 00:17:57.795 [INFO][5283] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" HandleID="k8s-pod-network.9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" Workload="localhost-k8s-coredns--674b8bbfcf--dfrcc-eth0"
Aug 13 00:17:57.827074 containerd[1579]: 2025-08-13 00:17:57.799 [INFO][5254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-dfrcc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dfrcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dfrcc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"80719b13-001f-4dd0-926a-1edc3ca41591", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-dfrcc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali432bcbd8995", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:17:57.827074 containerd[1579]: 2025-08-13 00:17:57.800 [INFO][5254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-dfrcc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dfrcc-eth0"
Aug 13 00:17:57.827074 containerd[1579]: 2025-08-13 00:17:57.800 [INFO][5254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali432bcbd8995 ContainerID="9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-dfrcc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dfrcc-eth0"
Aug 13 00:17:57.827074 containerd[1579]: 2025-08-13 00:17:57.803 [INFO][5254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-dfrcc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dfrcc-eth0"
Aug 13 00:17:57.827074 containerd[1579]: 2025-08-13 00:17:57.803 [INFO][5254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-dfrcc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dfrcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dfrcc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"80719b13-001f-4dd0-926a-1edc3ca41591", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1", Pod:"coredns-674b8bbfcf-dfrcc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali432bcbd8995", MAC:"be:cd:88:37:87:12", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:17:57.827074 containerd[1579]: 2025-08-13 00:17:57.817 [INFO][5254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-dfrcc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dfrcc-eth0"
Aug 13 00:17:57.837064 containerd[1579]: time="2025-08-13T00:17:57.837020120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84464dd98-24855,Uid:6c2d4e97-fe00-43c2-a642-bfd9b285ffd6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e\""
Aug 13 00:17:57.863825 containerd[1579]: time="2025-08-13T00:17:57.863740165Z" level=info msg="connecting to shim 9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1" address="unix:///run/containerd/s/5fc065f1554cfc638d643509522f1f92af399874a83d312840135c69e321ee30" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:17:57.895852 systemd[1]: Started cri-containerd-9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1.scope - libcontainer container 9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1.
Aug 13 00:17:57.912366 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 00:17:57.950850 systemd-networkd[1496]: cali93cdb39c998: Gained IPv6LL
Aug 13 00:17:58.065598 containerd[1579]: time="2025-08-13T00:17:58.065536995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dfrcc,Uid:80719b13-001f-4dd0-926a-1edc3ca41591,Namespace:kube-system,Attempt:0,} returns sandbox id \"9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1\""
Aug 13 00:17:58.066371 kubelet[2757]: E0813 00:17:58.066345 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:58.070122 kubelet[2757]: I0813 00:17:58.070071 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-b58978f58-xgcj8" podStartSLOduration=2.070052487 podStartE2EDuration="2.070052487s" podCreationTimestamp="2025-08-13 00:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:17:58.069646952 +0000 UTC m=+58.594196809" watchObservedRunningTime="2025-08-13 00:17:58.070052487 +0000 UTC m=+58.594602344"
Aug 13 00:17:58.072978 containerd[1579]: time="2025-08-13T00:17:58.072927363Z" level=info msg="CreateContainer within sandbox \"9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 00:17:58.078206 systemd-networkd[1496]: cali510a44d77b6: Gained IPv6LL
Aug 13 00:17:58.096346 containerd[1579]: time="2025-08-13T00:17:58.096285706Z" level=info msg="Container f3054ab7dc8a98ebd620765d876d37b366da09d3e7351fe4d8bd910cd62e0263: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:17:58.105157 containerd[1579]: time="2025-08-13T00:17:58.105093614Z" level=info msg="CreateContainer within sandbox \"9700943c2a901262629fa88787418268ff7c2d73d2b4f26be2edf40768251bc1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f3054ab7dc8a98ebd620765d876d37b366da09d3e7351fe4d8bd910cd62e0263\""
Aug 13 00:17:58.105762 containerd[1579]: time="2025-08-13T00:17:58.105738646Z" level=info msg="StartContainer for \"f3054ab7dc8a98ebd620765d876d37b366da09d3e7351fe4d8bd910cd62e0263\""
Aug 13 00:17:58.106772 containerd[1579]: time="2025-08-13T00:17:58.106735071Z" level=info msg="connecting to shim f3054ab7dc8a98ebd620765d876d37b366da09d3e7351fe4d8bd910cd62e0263" address="unix:///run/containerd/s/5fc065f1554cfc638d643509522f1f92af399874a83d312840135c69e321ee30" protocol=ttrpc version=3
Aug 13 00:17:58.131808 systemd[1]: Started cri-containerd-f3054ab7dc8a98ebd620765d876d37b366da09d3e7351fe4d8bd910cd62e0263.scope - libcontainer container f3054ab7dc8a98ebd620765d876d37b366da09d3e7351fe4d8bd910cd62e0263.
Aug 13 00:17:58.164777 containerd[1579]: time="2025-08-13T00:17:58.164719978Z" level=info msg="StartContainer for \"f3054ab7dc8a98ebd620765d876d37b366da09d3e7351fe4d8bd910cd62e0263\" returns successfully"
Aug 13 00:17:58.580806 containerd[1579]: time="2025-08-13T00:17:58.580720472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b795fc7dc-4z8w4,Uid:decc63a8-ef98-4b0e-8f42-4be53f0c43dd,Namespace:calico-system,Attempt:0,}"
Aug 13 00:17:58.872546 kubelet[2757]: E0813 00:17:58.872112 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:58.885877 kubelet[2757]: I0813 00:17:58.884293 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dfrcc" podStartSLOduration=53.884271724 podStartE2EDuration="53.884271724s" podCreationTimestamp="2025-08-13 00:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:17:58.884146634 +0000 UTC m=+59.408696491" watchObservedRunningTime="2025-08-13 00:17:58.884271724 +0000 UTC m=+59.408821581"
Aug 13 00:17:58.946424 systemd-networkd[1496]: calia0109ced605: Link UP
Aug 13 00:17:58.948934 systemd-networkd[1496]: calia0109ced605: Gained carrier
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.852 [INFO][5447] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--b795fc7dc--4z8w4-eth0 calico-kube-controllers-b795fc7dc- calico-system decc63a8-ef98-4b0e-8f42-4be53f0c43dd 831 0 2025-08-13 00:17:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b795fc7dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-b795fc7dc-4z8w4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia0109ced605 [] [] }} ContainerID="716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" Namespace="calico-system" Pod="calico-kube-controllers-b795fc7dc-4z8w4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b795fc7dc--4z8w4-"
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.853 [INFO][5447] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" Namespace="calico-system" Pod="calico-kube-controllers-b795fc7dc-4z8w4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b795fc7dc--4z8w4-eth0"
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.892 [INFO][5461] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" HandleID="k8s-pod-network.716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" Workload="localhost-k8s-calico--kube--controllers--b795fc7dc--4z8w4-eth0"
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.893 [INFO][5461] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" HandleID="k8s-pod-network.716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" Workload="localhost-k8s-calico--kube--controllers--b795fc7dc--4z8w4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-b795fc7dc-4z8w4", "timestamp":"2025-08-13 00:17:58.892882344 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.893 [INFO][5461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.893 [INFO][5461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.893 [INFO][5461] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.902 [INFO][5461] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" host="localhost"
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.914 [INFO][5461] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.920 [INFO][5461] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.922 [INFO][5461] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.925 [INFO][5461] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.925 [INFO][5461] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" host="localhost"
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.927 [INFO][5461] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.933 [INFO][5461] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" host="localhost"
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.940 [INFO][5461] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" host="localhost"
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.940 [INFO][5461] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" host="localhost"
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.940 [INFO][5461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:17:58.970732 containerd[1579]: 2025-08-13 00:17:58.940 [INFO][5461] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" HandleID="k8s-pod-network.716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" Workload="localhost-k8s-calico--kube--controllers--b795fc7dc--4z8w4-eth0"
Aug 13 00:17:58.971451 containerd[1579]: 2025-08-13 00:17:58.944 [INFO][5447] cni-plugin/k8s.go 418: Populated endpoint ContainerID="716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" Namespace="calico-system" Pod="calico-kube-controllers-b795fc7dc-4z8w4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b795fc7dc--4z8w4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b795fc7dc--4z8w4-eth0", GenerateName:"calico-kube-controllers-b795fc7dc-", Namespace:"calico-system", SelfLink:"", UID:"decc63a8-ef98-4b0e-8f42-4be53f0c43dd", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b795fc7dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-b795fc7dc-4z8w4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia0109ced605", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:17:58.971451 containerd[1579]: 2025-08-13 00:17:58.944 [INFO][5447] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" Namespace="calico-system" Pod="calico-kube-controllers-b795fc7dc-4z8w4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b795fc7dc--4z8w4-eth0"
Aug 13 00:17:58.971451 containerd[1579]: 2025-08-13 00:17:58.944 [INFO][5447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0109ced605 ContainerID="716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" Namespace="calico-system" Pod="calico-kube-controllers-b795fc7dc-4z8w4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b795fc7dc--4z8w4-eth0"
Aug 13 00:17:58.971451 containerd[1579]: 2025-08-13 00:17:58.946 [INFO][5447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" Namespace="calico-system" Pod="calico-kube-controllers-b795fc7dc-4z8w4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b795fc7dc--4z8w4-eth0"
Aug 13 00:17:58.971451 containerd[1579]: 2025-08-13
00:17:58.948 [INFO][5447] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" Namespace="calico-system" Pod="calico-kube-controllers-b795fc7dc-4z8w4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b795fc7dc--4z8w4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b795fc7dc--4z8w4-eth0", GenerateName:"calico-kube-controllers-b795fc7dc-", Namespace:"calico-system", SelfLink:"", UID:"decc63a8-ef98-4b0e-8f42-4be53f0c43dd", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b795fc7dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613", Pod:"calico-kube-controllers-b795fc7dc-4z8w4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia0109ced605", MAC:"1e:c9:49:7b:cb:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:58.971451 containerd[1579]: 2025-08-13 
00:17:58.963 [INFO][5447] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" Namespace="calico-system" Pod="calico-kube-controllers-b795fc7dc-4z8w4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b795fc7dc--4z8w4-eth0" Aug 13 00:17:59.038817 systemd-networkd[1496]: cali9efa0752acc: Gained IPv6LL Aug 13 00:17:59.103601 systemd-networkd[1496]: cali432bcbd8995: Gained IPv6LL Aug 13 00:17:59.566608 containerd[1579]: time="2025-08-13T00:17:59.566562582Z" level=info msg="StopPodSandbox for \"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\"" Aug 13 00:17:59.699001 containerd[1579]: 2025-08-13 00:17:59.603 [WARNING][5494] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" WorkloadEndpoint="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:59.699001 containerd[1579]: 2025-08-13 00:17:59.603 [INFO][5494] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Aug 13 00:17:59.699001 containerd[1579]: 2025-08-13 00:17:59.603 [INFO][5494] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" iface="eth0" netns="" Aug 13 00:17:59.699001 containerd[1579]: 2025-08-13 00:17:59.603 [INFO][5494] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Aug 13 00:17:59.699001 containerd[1579]: 2025-08-13 00:17:59.603 [INFO][5494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Aug 13 00:17:59.699001 containerd[1579]: 2025-08-13 00:17:59.671 [INFO][5504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" HandleID="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Workload="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:59.699001 containerd[1579]: 2025-08-13 00:17:59.671 [INFO][5504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:59.699001 containerd[1579]: 2025-08-13 00:17:59.671 [INFO][5504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:59.699001 containerd[1579]: 2025-08-13 00:17:59.687 [WARNING][5504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" HandleID="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Workload="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:59.699001 containerd[1579]: 2025-08-13 00:17:59.687 [INFO][5504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" HandleID="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Workload="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:59.699001 containerd[1579]: 2025-08-13 00:17:59.688 [INFO][5504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:59.699001 containerd[1579]: 2025-08-13 00:17:59.691 [INFO][5494] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Aug 13 00:17:59.700075 containerd[1579]: time="2025-08-13T00:17:59.699027830Z" level=info msg="TearDown network for sandbox \"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\" successfully" Aug 13 00:17:59.700075 containerd[1579]: time="2025-08-13T00:17:59.699050603Z" level=info msg="StopPodSandbox for \"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\" returns successfully" Aug 13 00:17:59.700075 containerd[1579]: time="2025-08-13T00:17:59.699505664Z" level=info msg="RemovePodSandbox for \"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\"" Aug 13 00:17:59.706066 containerd[1579]: time="2025-08-13T00:17:59.706027725Z" level=info msg="Forcibly stopping sandbox \"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\"" Aug 13 00:17:59.736174 kubelet[2757]: E0813 00:17:59.736112 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:17:59.754106 
containerd[1579]: time="2025-08-13T00:17:59.753864915Z" level=info msg="connecting to shim 716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613" address="unix:///run/containerd/s/a9c261a363bcd44e01bb7a1183ba8db555430e1195148676c2df199313d96a5c" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:17:59.789880 systemd[1]: Started cri-containerd-716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613.scope - libcontainer container 716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613. Aug 13 00:17:59.818989 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:17:59.833053 containerd[1579]: 2025-08-13 00:17:59.767 [WARNING][5522] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" WorkloadEndpoint="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:59.833053 containerd[1579]: 2025-08-13 00:17:59.768 [INFO][5522] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Aug 13 00:17:59.833053 containerd[1579]: 2025-08-13 00:17:59.768 [INFO][5522] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" iface="eth0" netns="" Aug 13 00:17:59.833053 containerd[1579]: 2025-08-13 00:17:59.768 [INFO][5522] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Aug 13 00:17:59.833053 containerd[1579]: 2025-08-13 00:17:59.768 [INFO][5522] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Aug 13 00:17:59.833053 containerd[1579]: 2025-08-13 00:17:59.809 [INFO][5560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" HandleID="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Workload="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:59.833053 containerd[1579]: 2025-08-13 00:17:59.810 [INFO][5560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:59.833053 containerd[1579]: 2025-08-13 00:17:59.810 [INFO][5560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:59.833053 containerd[1579]: 2025-08-13 00:17:59.824 [WARNING][5560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" HandleID="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Workload="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:59.833053 containerd[1579]: 2025-08-13 00:17:59.825 [INFO][5560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" HandleID="k8s-pod-network.5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Workload="localhost-k8s-whisker--8489667c94--t6zfc-eth0" Aug 13 00:17:59.833053 containerd[1579]: 2025-08-13 00:17:59.826 [INFO][5560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:59.833053 containerd[1579]: 2025-08-13 00:17:59.829 [INFO][5522] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2" Aug 13 00:17:59.833988 containerd[1579]: time="2025-08-13T00:17:59.833645691Z" level=info msg="TearDown network for sandbox \"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\" successfully" Aug 13 00:17:59.840371 containerd[1579]: time="2025-08-13T00:17:59.840338769Z" level=info msg="Ensure that sandbox 5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2 in task-service has been cleanup successfully" Aug 13 00:17:59.874695 kubelet[2757]: E0813 00:17:59.874551 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:18:00.031263 containerd[1579]: time="2025-08-13T00:18:00.031216216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b795fc7dc-4z8w4,Uid:decc63a8-ef98-4b0e-8f42-4be53f0c43dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613\"" Aug 13 00:18:00.150019 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2173495887.mount: Deactivated successfully. Aug 13 00:18:00.166721 containerd[1579]: time="2025-08-13T00:18:00.166479122Z" level=info msg="RemovePodSandbox \"5392ac2d007a4ff3100f66f8d2cbc74ae58a0174f21ef4feb338ced37dfd50d2\" returns successfully" Aug 13 00:18:00.573964 systemd-networkd[1496]: calia0109ced605: Gained IPv6LL Aug 13 00:18:00.878966 kubelet[2757]: E0813 00:18:00.878503 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:18:01.880593 kubelet[2757]: E0813 00:18:01.880536 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:18:02.378555 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:60776.service - OpenSSH per-connection server daemon (10.0.0.1:60776). Aug 13 00:18:02.543100 sshd[5600]: Accepted publickey for core from 10.0.0.1 port 60776 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA Aug 13 00:18:02.548734 sshd-session[5600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:02.554695 systemd-logind[1557]: New session 11 of user core. Aug 13 00:18:02.565883 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:18:02.713995 sshd[5606]: Connection closed by 10.0.0.1 port 60776 Aug 13 00:18:02.714267 sshd-session[5600]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:02.719469 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:60776.service: Deactivated successfully. Aug 13 00:18:02.721995 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:18:02.722919 systemd-logind[1557]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:18:02.724258 systemd-logind[1557]: Removed session 11. 
Aug 13 00:18:03.636940 containerd[1579]: time="2025-08-13T00:18:03.636863044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:03.652514 containerd[1579]: time="2025-08-13T00:18:03.652444029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 00:18:03.662064 containerd[1579]: time="2025-08-13T00:18:03.661977590Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:03.677276 containerd[1579]: time="2025-08-13T00:18:03.677216662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:03.677945 containerd[1579]: time="2025-08-13T00:18:03.677906387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 6.41414135s" Aug 13 00:18:03.677945 containerd[1579]: time="2025-08-13T00:18:03.677943007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 00:18:03.679303 containerd[1579]: time="2025-08-13T00:18:03.679221035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:18:03.688509 containerd[1579]: time="2025-08-13T00:18:03.688444805Z" level=info msg="CreateContainer within sandbox 
\"721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:18:03.711993 containerd[1579]: time="2025-08-13T00:18:03.711930094Z" level=info msg="Container b4423c5b78f0b27f54bd147377af1f4e0a42d331deb6361e1a67d9e253188575: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:18:03.725106 containerd[1579]: time="2025-08-13T00:18:03.725039074Z" level=info msg="CreateContainer within sandbox \"721ecea38e7528efe1188a98c7f1f6c8d0a40c8123f30bcefefbb1c45d3c4950\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b4423c5b78f0b27f54bd147377af1f4e0a42d331deb6361e1a67d9e253188575\"" Aug 13 00:18:03.725923 containerd[1579]: time="2025-08-13T00:18:03.725881791Z" level=info msg="StartContainer for \"b4423c5b78f0b27f54bd147377af1f4e0a42d331deb6361e1a67d9e253188575\"" Aug 13 00:18:03.727344 containerd[1579]: time="2025-08-13T00:18:03.727313792Z" level=info msg="connecting to shim b4423c5b78f0b27f54bd147377af1f4e0a42d331deb6361e1a67d9e253188575" address="unix:///run/containerd/s/d2229f4e07286c8a71a1f8d9775654e63a69091ce786ec7543282bb60ea6e486" protocol=ttrpc version=3 Aug 13 00:18:03.783837 systemd[1]: Started cri-containerd-b4423c5b78f0b27f54bd147377af1f4e0a42d331deb6361e1a67d9e253188575.scope - libcontainer container b4423c5b78f0b27f54bd147377af1f4e0a42d331deb6361e1a67d9e253188575. 
Aug 13 00:18:03.941179 containerd[1579]: time="2025-08-13T00:18:03.941059666Z" level=info msg="StartContainer for \"b4423c5b78f0b27f54bd147377af1f4e0a42d331deb6361e1a67d9e253188575\" returns successfully" Aug 13 00:18:05.039281 containerd[1579]: time="2025-08-13T00:18:05.039213424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4423c5b78f0b27f54bd147377af1f4e0a42d331deb6361e1a67d9e253188575\" id:\"a1f0d340b2433aa68551de168dcbcb0370ea6cca60c209c2a9a9aa16f9d1f90f\" pid:5668 exit_status:1 exited_at:{seconds:1755044285 nanos:38710386}" Aug 13 00:18:06.050393 containerd[1579]: time="2025-08-13T00:18:06.050340578Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4423c5b78f0b27f54bd147377af1f4e0a42d331deb6361e1a67d9e253188575\" id:\"5b8216615e7bdf73dd0312db655aa0c8f79dcb3058f72bb65231fda234050d60\" pid:5697 exit_status:1 exited_at:{seconds:1755044286 nanos:49989801}" Aug 13 00:18:07.577444 containerd[1579]: time="2025-08-13T00:18:07.577377968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:07.578201 containerd[1579]: time="2025-08-13T00:18:07.578157261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 00:18:07.579323 containerd[1579]: time="2025-08-13T00:18:07.579292242Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:07.581948 containerd[1579]: time="2025-08-13T00:18:07.581917441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:07.582648 containerd[1579]: time="2025-08-13T00:18:07.582595141Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.903336395s" Aug 13 00:18:07.582648 containerd[1579]: time="2025-08-13T00:18:07.582630257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:18:07.583693 containerd[1579]: time="2025-08-13T00:18:07.583641103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:18:07.587198 containerd[1579]: time="2025-08-13T00:18:07.587155663Z" level=info msg="CreateContainer within sandbox \"7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:18:07.596144 containerd[1579]: time="2025-08-13T00:18:07.596092780Z" level=info msg="Container ad229aa428104d471dfb8327c7d99c99309c22d365a92b0b73f037522efdd8a4: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:18:07.605645 containerd[1579]: time="2025-08-13T00:18:07.605596634Z" level=info msg="CreateContainer within sandbox \"7a39d5bb63653683a7abfcac0b0b2e6b7cecd7bc12339a0929e1dddec3c5d684\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ad229aa428104d471dfb8327c7d99c99309c22d365a92b0b73f037522efdd8a4\"" Aug 13 00:18:07.606211 containerd[1579]: time="2025-08-13T00:18:07.606150168Z" level=info msg="StartContainer for \"ad229aa428104d471dfb8327c7d99c99309c22d365a92b0b73f037522efdd8a4\"" Aug 13 00:18:07.607523 containerd[1579]: time="2025-08-13T00:18:07.607491352Z" level=info msg="connecting to shim ad229aa428104d471dfb8327c7d99c99309c22d365a92b0b73f037522efdd8a4" 
address="unix:///run/containerd/s/b44c5d437ebbeb8c085ff87fd8f508db5d8c0bbdec91984c47e096b3631f12e2" protocol=ttrpc version=3 Aug 13 00:18:07.642828 systemd[1]: Started cri-containerd-ad229aa428104d471dfb8327c7d99c99309c22d365a92b0b73f037522efdd8a4.scope - libcontainer container ad229aa428104d471dfb8327c7d99c99309c22d365a92b0b73f037522efdd8a4. Aug 13 00:18:07.697421 containerd[1579]: time="2025-08-13T00:18:07.697349056Z" level=info msg="StartContainer for \"ad229aa428104d471dfb8327c7d99c99309c22d365a92b0b73f037522efdd8a4\" returns successfully" Aug 13 00:18:07.733808 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:60780.service - OpenSSH per-connection server daemon (10.0.0.1:60780). Aug 13 00:18:07.793423 sshd[5753]: Accepted publickey for core from 10.0.0.1 port 60780 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA Aug 13 00:18:07.795202 sshd-session[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:07.800902 systemd-logind[1557]: New session 12 of user core. Aug 13 00:18:07.808911 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:18:07.950496 sshd[5758]: Connection closed by 10.0.0.1 port 60780 Aug 13 00:18:07.952142 sshd-session[5753]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:07.962528 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:60780.service: Deactivated successfully. Aug 13 00:18:07.964618 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:18:07.965541 systemd-logind[1557]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:18:07.968767 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:60792.service - OpenSSH per-connection server daemon (10.0.0.1:60792). Aug 13 00:18:07.970447 systemd-logind[1557]: Removed session 12. 
Aug 13 00:18:08.024268 sshd[5774]: Accepted publickey for core from 10.0.0.1 port 60792 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA Aug 13 00:18:08.025755 sshd-session[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:08.030258 systemd-logind[1557]: New session 13 of user core. Aug 13 00:18:08.036800 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:18:08.041635 kubelet[2757]: I0813 00:18:08.041553 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-94mpd" podStartSLOduration=43.587057454 podStartE2EDuration="51.04153063s" podCreationTimestamp="2025-08-13 00:17:17 +0000 UTC" firstStartedPulling="2025-08-13 00:17:56.22456995 +0000 UTC m=+56.749119807" lastFinishedPulling="2025-08-13 00:18:03.679043125 +0000 UTC m=+64.203592983" observedRunningTime="2025-08-13 00:18:04.978729826 +0000 UTC m=+65.503279683" watchObservedRunningTime="2025-08-13 00:18:08.04153063 +0000 UTC m=+68.566080487" Aug 13 00:18:08.542714 sshd[5776]: Connection closed by 10.0.0.1 port 60792 Aug 13 00:18:08.544264 sshd-session[5774]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:08.557981 systemd[1]: sshd@12-10.0.0.16:22-10.0.0.1:60792.service: Deactivated successfully. Aug 13 00:18:08.562443 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:18:08.563699 systemd-logind[1557]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:18:08.569427 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:60802.service - OpenSSH per-connection server daemon (10.0.0.1:60802). Aug 13 00:18:08.570855 systemd-logind[1557]: Removed session 13. 
Aug 13 00:18:08.624122 sshd[5790]: Accepted publickey for core from 10.0.0.1 port 60802 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA Aug 13 00:18:08.625984 sshd-session[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:08.633505 systemd-logind[1557]: New session 14 of user core. Aug 13 00:18:08.645874 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:18:08.831954 sshd[5792]: Connection closed by 10.0.0.1 port 60802 Aug 13 00:18:08.832201 sshd-session[5790]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:08.836453 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:60802.service: Deactivated successfully. Aug 13 00:18:08.838718 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:18:08.839736 systemd-logind[1557]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:18:08.841145 systemd-logind[1557]: Removed session 14. Aug 13 00:18:08.980855 kubelet[2757]: I0813 00:18:08.980804 2757 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:18:10.888604 containerd[1579]: time="2025-08-13T00:18:10.888534902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:10.889337 containerd[1579]: time="2025-08-13T00:18:10.889302922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 00:18:10.890580 containerd[1579]: time="2025-08-13T00:18:10.890532290Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:10.892452 containerd[1579]: time="2025-08-13T00:18:10.892411454Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:18:10.892971 containerd[1579]: time="2025-08-13T00:18:10.892936873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.309240967s"
Aug 13 00:18:10.893008 containerd[1579]: time="2025-08-13T00:18:10.892969445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Aug 13 00:18:10.893981 containerd[1579]: time="2025-08-13T00:18:10.893936484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Aug 13 00:18:10.898495 containerd[1579]: time="2025-08-13T00:18:10.898462042Z" level=info msg="CreateContainer within sandbox \"c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 13 00:18:10.908798 containerd[1579]: time="2025-08-13T00:18:10.908764285Z" level=info msg="Container 70a03cbe3e1a493976d0194cb5e731490016b11a5134ed354ccf168bff013ee1: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:18:10.917274 containerd[1579]: time="2025-08-13T00:18:10.917235525Z" level=info msg="CreateContainer within sandbox \"c690cf544f0972cc7db79ec4eb1008bdf3b304c22d81947ab0dc0b56d73f5d4b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"70a03cbe3e1a493976d0194cb5e731490016b11a5134ed354ccf168bff013ee1\""
Aug 13 00:18:10.917900 containerd[1579]: time="2025-08-13T00:18:10.917869651Z" level=info msg="StartContainer for \"70a03cbe3e1a493976d0194cb5e731490016b11a5134ed354ccf168bff013ee1\""
Aug 13 00:18:10.919266 containerd[1579]: time="2025-08-13T00:18:10.919233695Z" level=info msg="connecting to shim 70a03cbe3e1a493976d0194cb5e731490016b11a5134ed354ccf168bff013ee1" address="unix:///run/containerd/s/7b3c4dd9e59c6c9c18ba403e94ee63bd10aab51ce250ad08a0d7e7156d882873" protocol=ttrpc version=3
Aug 13 00:18:10.956950 systemd[1]: Started cri-containerd-70a03cbe3e1a493976d0194cb5e731490016b11a5134ed354ccf168bff013ee1.scope - libcontainer container 70a03cbe3e1a493976d0194cb5e731490016b11a5134ed354ccf168bff013ee1.
Aug 13 00:18:11.179606 containerd[1579]: time="2025-08-13T00:18:11.179485643Z" level=info msg="StartContainer for \"70a03cbe3e1a493976d0194cb5e731490016b11a5134ed354ccf168bff013ee1\" returns successfully"
Aug 13 00:18:11.660294 kubelet[2757]: I0813 00:18:11.660260 2757 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 13 00:18:11.674094 kubelet[2757]: I0813 00:18:11.674041 2757 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 13 00:18:12.250999 kubelet[2757]: I0813 00:18:12.250893 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84464dd98-fq74k" podStartSLOduration=46.888805068 podStartE2EDuration="57.250824589s" podCreationTimestamp="2025-08-13 00:17:15 +0000 UTC" firstStartedPulling="2025-08-13 00:17:57.22151585 +0000 UTC m=+57.746065707" lastFinishedPulling="2025-08-13 00:18:07.583535371 +0000 UTC m=+68.108085228" observedRunningTime="2025-08-13 00:18:08.043720237 +0000 UTC m=+68.568270094" watchObservedRunningTime="2025-08-13 00:18:12.250824589 +0000 UTC m=+72.775374436"
Aug 13 00:18:12.590448 containerd[1579]: time="2025-08-13T00:18:12.590380193Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:18:12.624352 containerd[1579]: time="2025-08-13T00:18:12.624291522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Aug 13 00:18:12.626993 containerd[1579]: time="2025-08-13T00:18:12.626925288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.732946302s"
Aug 13 00:18:12.627133 containerd[1579]: time="2025-08-13T00:18:12.626998117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Aug 13 00:18:12.628359 containerd[1579]: time="2025-08-13T00:18:12.628095062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Aug 13 00:18:12.733993 containerd[1579]: time="2025-08-13T00:18:12.733931938Z" level=info msg="CreateContainer within sandbox \"093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 13 00:18:13.360038 containerd[1579]: time="2025-08-13T00:18:13.359965661Z" level=info msg="Container 319463ca69923f397d9f79bf73393b982b22dd82002aee690e2b1f596c1f6beb: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:18:13.586121 containerd[1579]: time="2025-08-13T00:18:13.586064788Z" level=info msg="CreateContainer within sandbox \"093879a77c9a91319d83c293ac56d6b53f0fb6ef6f5bcfc80667f3f2be8c3d8e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"319463ca69923f397d9f79bf73393b982b22dd82002aee690e2b1f596c1f6beb\""
Aug 13 00:18:13.586552 containerd[1579]: time="2025-08-13T00:18:13.586513020Z" level=info msg="StartContainer for \"319463ca69923f397d9f79bf73393b982b22dd82002aee690e2b1f596c1f6beb\""
Aug 13 00:18:13.587928 containerd[1579]: time="2025-08-13T00:18:13.587894674Z" level=info msg="connecting to shim 319463ca69923f397d9f79bf73393b982b22dd82002aee690e2b1f596c1f6beb" address="unix:///run/containerd/s/cac176ab6a541920b5d25e180b2c790cad3a889a06b02047d2af5b374aeb3779" protocol=ttrpc version=3
Aug 13 00:18:13.609854 systemd[1]: Started cri-containerd-319463ca69923f397d9f79bf73393b982b22dd82002aee690e2b1f596c1f6beb.scope - libcontainer container 319463ca69923f397d9f79bf73393b982b22dd82002aee690e2b1f596c1f6beb.
Aug 13 00:18:13.851404 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:41526.service - OpenSSH per-connection server daemon (10.0.0.1:41526).
Aug 13 00:18:13.951328 sshd[5889]: Accepted publickey for core from 10.0.0.1 port 41526 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA
Aug 13 00:18:13.952906 sshd-session[5889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:13.957644 systemd-logind[1557]: New session 15 of user core.
Aug 13 00:18:13.964789 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 00:18:14.092227 sshd[5891]: Connection closed by 10.0.0.1 port 41526
Aug 13 00:18:14.092546 sshd-session[5889]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:14.096750 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:41526.service: Deactivated successfully.
Aug 13 00:18:14.100064 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:18:14.101097 systemd-logind[1557]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:18:14.102992 systemd-logind[1557]: Removed session 15.
Aug 13 00:18:14.257772 containerd[1579]: time="2025-08-13T00:18:14.257646276Z" level=info msg="StartContainer for \"319463ca69923f397d9f79bf73393b982b22dd82002aee690e2b1f596c1f6beb\" returns successfully"
Aug 13 00:18:15.404522 kubelet[2757]: I0813 00:18:15.404373 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wkrpf" podStartSLOduration=42.407859556 podStartE2EDuration="58.404354762s" podCreationTimestamp="2025-08-13 00:17:17 +0000 UTC" firstStartedPulling="2025-08-13 00:17:54.897260254 +0000 UTC m=+55.421810111" lastFinishedPulling="2025-08-13 00:18:10.89375546 +0000 UTC m=+71.418305317" observedRunningTime="2025-08-13 00:18:12.250597206 +0000 UTC m=+72.775147073" watchObservedRunningTime="2025-08-13 00:18:15.404354762 +0000 UTC m=+75.928904620"
Aug 13 00:18:15.405715 kubelet[2757]: I0813 00:18:15.405248 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84464dd98-24855" podStartSLOduration=45.619600204 podStartE2EDuration="1m0.405239132s" podCreationTimestamp="2025-08-13 00:17:15 +0000 UTC" firstStartedPulling="2025-08-13 00:17:57.842224644 +0000 UTC m=+58.366774501" lastFinishedPulling="2025-08-13 00:18:12.627863552 +0000 UTC m=+73.152413429" observedRunningTime="2025-08-13 00:18:15.404686282 +0000 UTC m=+75.929236159" watchObservedRunningTime="2025-08-13 00:18:15.405239132 +0000 UTC m=+75.929788999"
Aug 13 00:18:16.070140 containerd[1579]: time="2025-08-13T00:18:16.070094218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4d655c830e59076316cb1c5b2d02b49da0df003276f9ccc29657d0226ae8b27\" id:\"98368d04d7274708af4892ec7e16a070e71d2e5f586c4b5e4448019c8a5ec3f3\" pid:5922 exit_status:1 exited_at:{seconds:1755044296 nanos:69729166}"
Aug 13 00:18:16.580416 kubelet[2757]: E0813 00:18:16.580363 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:18:17.580974 kubelet[2757]: E0813 00:18:17.580910 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:18:19.106409 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:41536.service - OpenSSH per-connection server daemon (10.0.0.1:41536).
Aug 13 00:18:19.982062 kubelet[2757]: I0813 00:18:19.982012 2757 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:18:20.133199 sshd[5946]: Accepted publickey for core from 10.0.0.1 port 41536 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA
Aug 13 00:18:20.155145 sshd-session[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:20.159905 systemd-logind[1557]: New session 16 of user core.
Aug 13 00:18:20.169823 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 00:18:20.356591 containerd[1579]: time="2025-08-13T00:18:20.356527726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:18:20.406032 containerd[1579]: time="2025-08-13T00:18:20.405956176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"
Aug 13 00:18:20.444032 sshd[5950]: Connection closed by 10.0.0.1 port 41536
Aug 13 00:18:20.444331 sshd-session[5946]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:20.448020 containerd[1579]: time="2025-08-13T00:18:20.447866010Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:18:20.453296 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:41536.service: Deactivated successfully.
Aug 13 00:18:20.455708 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:18:20.456866 systemd-logind[1557]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:18:20.458600 systemd-logind[1557]: Removed session 16.
Aug 13 00:18:20.472371 containerd[1579]: time="2025-08-13T00:18:20.472307443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:18:20.472982 containerd[1579]: time="2025-08-13T00:18:20.472946364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 7.84479696s"
Aug 13 00:18:20.473056 containerd[1579]: time="2025-08-13T00:18:20.472983695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Aug 13 00:18:20.571688 containerd[1579]: time="2025-08-13T00:18:20.570751901Z" level=info msg="CreateContainer within sandbox \"716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Aug 13 00:18:20.877831 containerd[1579]: time="2025-08-13T00:18:20.877093266Z" level=info msg="Container 4a4ca9769682c75db83b82189b28f7066ade2c57f1a421bddd5768411e60ea43: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:18:21.234892 containerd[1579]: time="2025-08-13T00:18:21.234735279Z" level=info msg="CreateContainer within sandbox \"716d4385039582a305fca3b4097067563e557061df691eefb2adbf375da39613\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4a4ca9769682c75db83b82189b28f7066ade2c57f1a421bddd5768411e60ea43\""
Aug 13 00:18:21.235701 containerd[1579]: time="2025-08-13T00:18:21.235645505Z" level=info msg="StartContainer for \"4a4ca9769682c75db83b82189b28f7066ade2c57f1a421bddd5768411e60ea43\""
Aug 13 00:18:21.237252 containerd[1579]: time="2025-08-13T00:18:21.237217715Z" level=info msg="connecting to shim 4a4ca9769682c75db83b82189b28f7066ade2c57f1a421bddd5768411e60ea43" address="unix:///run/containerd/s/a9c261a363bcd44e01bb7a1183ba8db555430e1195148676c2df199313d96a5c" protocol=ttrpc version=3
Aug 13 00:18:21.261831 systemd[1]: Started cri-containerd-4a4ca9769682c75db83b82189b28f7066ade2c57f1a421bddd5768411e60ea43.scope - libcontainer container 4a4ca9769682c75db83b82189b28f7066ade2c57f1a421bddd5768411e60ea43.
Aug 13 00:18:21.451744 containerd[1579]: time="2025-08-13T00:18:21.451695116Z" level=info msg="StartContainer for \"4a4ca9769682c75db83b82189b28f7066ade2c57f1a421bddd5768411e60ea43\" returns successfully"
Aug 13 00:18:22.330905 containerd[1579]: time="2025-08-13T00:18:22.330855271Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a4ca9769682c75db83b82189b28f7066ade2c57f1a421bddd5768411e60ea43\" id:\"66e2cb1d716da256c5faccbc2e25d20f2100a310d5e8c6fada1e13315e725cc7\" pid:6023 exited_at:{seconds:1755044302 nanos:330449452}"
Aug 13 00:18:22.347647 kubelet[2757]: I0813 00:18:22.347556 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b795fc7dc-4z8w4" podStartSLOduration=44.905644232 podStartE2EDuration="1m5.347539931s" podCreationTimestamp="2025-08-13 00:17:17 +0000 UTC" firstStartedPulling="2025-08-13 00:18:00.032971548 +0000 UTC m=+60.557521405" lastFinishedPulling="2025-08-13 00:18:20.474867247 +0000 UTC m=+80.999417104" observedRunningTime="2025-08-13 00:18:22.34745503 +0000 UTC m=+82.872004887" watchObservedRunningTime="2025-08-13 00:18:22.347539931 +0000 UTC m=+82.872089788"
Aug 13 00:18:25.462808 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:44770.service - OpenSSH per-connection server daemon (10.0.0.1:44770).
Aug 13 00:18:25.621272 sshd[6034]: Accepted publickey for core from 10.0.0.1 port 44770 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA
Aug 13 00:18:25.623609 sshd-session[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:25.628535 systemd-logind[1557]: New session 17 of user core.
Aug 13 00:18:25.639836 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 00:18:25.775968 sshd[6036]: Connection closed by 10.0.0.1 port 44770
Aug 13 00:18:25.776274 sshd-session[6034]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:25.781831 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:44770.service: Deactivated successfully.
Aug 13 00:18:25.784177 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:18:25.785080 systemd-logind[1557]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:18:25.786797 systemd-logind[1557]: Removed session 17.
Aug 13 00:18:27.580888 kubelet[2757]: E0813 00:18:27.580823 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:18:30.794309 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:53850.service - OpenSSH per-connection server daemon (10.0.0.1:53850).
Aug 13 00:18:30.861369 sshd[6056]: Accepted publickey for core from 10.0.0.1 port 53850 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA
Aug 13 00:18:30.863003 sshd-session[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:30.868025 systemd-logind[1557]: New session 18 of user core.
Aug 13 00:18:30.877814 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 00:18:30.998697 sshd[6058]: Connection closed by 10.0.0.1 port 53850
Aug 13 00:18:30.999060 sshd-session[6056]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:31.003988 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:53850.service: Deactivated successfully.
Aug 13 00:18:31.006440 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:18:31.007692 systemd-logind[1557]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:18:31.009217 systemd-logind[1557]: Removed session 18.
Aug 13 00:18:36.016287 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:53852.service - OpenSSH per-connection server daemon (10.0.0.1:53852).
Aug 13 00:18:36.124454 containerd[1579]: time="2025-08-13T00:18:36.124403518Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4423c5b78f0b27f54bd147377af1f4e0a42d331deb6361e1a67d9e253188575\" id:\"8475acbb3e46bd5dd9ba9517dd743750209602cba4a316782e574132a47ad33c\" pid:6083 exited_at:{seconds:1755044316 nanos:124091148}"
Aug 13 00:18:36.132495 sshd[6095]: Accepted publickey for core from 10.0.0.1 port 53852 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA
Aug 13 00:18:36.134352 sshd-session[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:36.138871 systemd-logind[1557]: New session 19 of user core.
Aug 13 00:18:36.148800 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 00:18:36.383528 sshd[6099]: Connection closed by 10.0.0.1 port 53852
Aug 13 00:18:36.383848 sshd-session[6095]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:36.393677 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:53852.service: Deactivated successfully.
Aug 13 00:18:36.395504 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:18:36.396377 systemd-logind[1557]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:18:36.399095 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:53856.service - OpenSSH per-connection server daemon (10.0.0.1:53856).
Aug 13 00:18:36.400098 systemd-logind[1557]: Removed session 19.
Aug 13 00:18:36.455933 sshd[6114]: Accepted publickey for core from 10.0.0.1 port 53856 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA
Aug 13 00:18:36.457312 sshd-session[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:36.461886 systemd-logind[1557]: New session 20 of user core.
Aug 13 00:18:36.474787 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 00:18:36.580272 kubelet[2757]: E0813 00:18:36.580237 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:18:38.087383 sshd[6116]: Connection closed by 10.0.0.1 port 53856
Aug 13 00:18:38.087803 sshd-session[6114]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:38.100523 systemd[1]: sshd@19-10.0.0.16:22-10.0.0.1:53856.service: Deactivated successfully.
Aug 13 00:18:38.102792 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:18:38.103649 systemd-logind[1557]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:18:38.107182 systemd[1]: Started sshd@20-10.0.0.16:22-10.0.0.1:53862.service - OpenSSH per-connection server daemon (10.0.0.1:53862).
Aug 13 00:18:38.108001 systemd-logind[1557]: Removed session 20.
Aug 13 00:18:38.166869 sshd[6128]: Accepted publickey for core from 10.0.0.1 port 53862 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA
Aug 13 00:18:38.168635 sshd-session[6128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:38.173543 systemd-logind[1557]: New session 21 of user core.
Aug 13 00:18:38.190814 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 00:18:38.730419 sshd[6130]: Connection closed by 10.0.0.1 port 53862
Aug 13 00:18:38.730946 sshd-session[6128]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:38.741746 systemd[1]: sshd@20-10.0.0.16:22-10.0.0.1:53862.service: Deactivated successfully.
Aug 13 00:18:38.746355 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:18:38.748366 systemd-logind[1557]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:18:38.755950 systemd[1]: Started sshd@21-10.0.0.16:22-10.0.0.1:53866.service - OpenSSH per-connection server daemon (10.0.0.1:53866).
Aug 13 00:18:38.758797 systemd-logind[1557]: Removed session 21.
Aug 13 00:18:38.821848 sshd[6148]: Accepted publickey for core from 10.0.0.1 port 53866 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA
Aug 13 00:18:38.823332 sshd-session[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:38.827876 systemd-logind[1557]: New session 22 of user core.
Aug 13 00:18:38.841806 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 00:18:39.166868 sshd[6151]: Connection closed by 10.0.0.1 port 53866
Aug 13 00:18:39.167897 sshd-session[6148]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:39.182215 systemd[1]: sshd@21-10.0.0.16:22-10.0.0.1:53866.service: Deactivated successfully.
Aug 13 00:18:39.185048 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:18:39.186334 systemd-logind[1557]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:18:39.191052 systemd[1]: Started sshd@22-10.0.0.16:22-10.0.0.1:53874.service - OpenSSH per-connection server daemon (10.0.0.1:53874).
Aug 13 00:18:39.192583 systemd-logind[1557]: Removed session 22.
Aug 13 00:18:39.244630 sshd[6164]: Accepted publickey for core from 10.0.0.1 port 53874 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA
Aug 13 00:18:39.246211 sshd-session[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:39.251194 systemd-logind[1557]: New session 23 of user core.
Aug 13 00:18:39.260793 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 00:18:39.389804 sshd[6166]: Connection closed by 10.0.0.1 port 53874
Aug 13 00:18:39.390181 sshd-session[6164]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:39.395235 systemd[1]: sshd@22-10.0.0.16:22-10.0.0.1:53874.service: Deactivated successfully.
Aug 13 00:18:39.397725 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:18:39.398572 systemd-logind[1557]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:18:39.400381 systemd-logind[1557]: Removed session 23.
Aug 13 00:18:44.405849 systemd[1]: Started sshd@23-10.0.0.16:22-10.0.0.1:56846.service - OpenSSH per-connection server daemon (10.0.0.1:56846).
Aug 13 00:18:44.458676 sshd[6180]: Accepted publickey for core from 10.0.0.1 port 56846 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA
Aug 13 00:18:44.460040 sshd-session[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:44.464227 systemd-logind[1557]: New session 24 of user core.
Aug 13 00:18:44.470780 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 00:18:44.575165 sshd[6182]: Connection closed by 10.0.0.1 port 56846
Aug 13 00:18:44.575451 sshd-session[6180]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:44.579552 systemd[1]: sshd@23-10.0.0.16:22-10.0.0.1:56846.service: Deactivated successfully.
Aug 13 00:18:44.581473 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:18:44.582310 systemd-logind[1557]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:18:44.583541 systemd-logind[1557]: Removed session 24.
Aug 13 00:18:45.825031 containerd[1579]: time="2025-08-13T00:18:45.824972240Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4d655c830e59076316cb1c5b2d02b49da0df003276f9ccc29657d0226ae8b27\" id:\"f4ba5b49ea8c3612a694d9f1c5e6cac42a5a24dd9a35f015ee7e498282012dd9\" pid:6206 exited_at:{seconds:1755044325 nanos:824492043}"
Aug 13 00:18:49.587839 systemd[1]: Started sshd@24-10.0.0.16:22-10.0.0.1:56854.service - OpenSSH per-connection server daemon (10.0.0.1:56854).
Aug 13 00:18:49.645987 sshd[6222]: Accepted publickey for core from 10.0.0.1 port 56854 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA
Aug 13 00:18:49.647608 sshd-session[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:49.652591 systemd-logind[1557]: New session 25 of user core.
Aug 13 00:18:49.657808 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 00:18:49.779811 sshd[6224]: Connection closed by 10.0.0.1 port 56854
Aug 13 00:18:49.780111 sshd-session[6222]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:49.784127 systemd[1]: sshd@24-10.0.0.16:22-10.0.0.1:56854.service: Deactivated successfully.
Aug 13 00:18:49.786408 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 00:18:49.787310 systemd-logind[1557]: Session 25 logged out. Waiting for processes to exit.
Aug 13 00:18:49.788607 systemd-logind[1557]: Removed session 25.
Aug 13 00:18:52.355132 containerd[1579]: time="2025-08-13T00:18:52.354897341Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a4ca9769682c75db83b82189b28f7066ade2c57f1a421bddd5768411e60ea43\" id:\"087d42dd110afeedfc0c5fc090d82c8a022da34ebe0a2ea792768219bf74386c\" pid:6249 exited_at:{seconds:1755044332 nanos:354390865}"
Aug 13 00:18:54.793527 systemd[1]: Started sshd@25-10.0.0.16:22-10.0.0.1:38214.service - OpenSSH per-connection server daemon (10.0.0.1:38214).
Aug 13 00:18:54.853053 sshd[6262]: Accepted publickey for core from 10.0.0.1 port 38214 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA
Aug 13 00:18:54.854987 sshd-session[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:18:54.861439 systemd-logind[1557]: New session 26 of user core.
Aug 13 00:18:54.868818 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 00:18:55.199282 sshd[6264]: Connection closed by 10.0.0.1 port 38214
Aug 13 00:18:55.200086 sshd-session[6262]: pam_unix(sshd:session): session closed for user core
Aug 13 00:18:55.204991 systemd[1]: sshd@25-10.0.0.16:22-10.0.0.1:38214.service: Deactivated successfully.
Aug 13 00:18:55.208223 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 00:18:55.210797 systemd-logind[1557]: Session 26 logged out. Waiting for processes to exit.
Aug 13 00:18:55.211982 systemd-logind[1557]: Removed session 26.
Aug 13 00:19:00.218307 systemd[1]: Started sshd@26-10.0.0.16:22-10.0.0.1:60018.service - OpenSSH per-connection server daemon (10.0.0.1:60018).
Aug 13 00:19:00.283750 sshd[6280]: Accepted publickey for core from 10.0.0.1 port 60018 ssh2: RSA SHA256:i8oP4rUeUqa+iJxjGAz+rh6edUZA7KblOXRO11ln3GA
Aug 13 00:19:00.287273 sshd-session[6280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:19:00.304748 systemd-logind[1557]: New session 27 of user core.
Aug 13 00:19:00.308936 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 00:19:00.479893 sshd[6282]: Connection closed by 10.0.0.1 port 60018
Aug 13 00:19:00.480164 sshd-session[6280]: pam_unix(sshd:session): session closed for user core
Aug 13 00:19:00.485319 systemd[1]: sshd@26-10.0.0.16:22-10.0.0.1:60018.service: Deactivated successfully.
Aug 13 00:19:00.488108 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 00:19:00.490239 systemd-logind[1557]: Session 27 logged out. Waiting for processes to exit.
Aug 13 00:19:00.491541 systemd-logind[1557]: Removed session 27.
Aug 13 00:19:00.580082 kubelet[2757]: E0813 00:19:00.580036 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:19:00.705272 containerd[1579]: time="2025-08-13T00:19:00.705213355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4423c5b78f0b27f54bd147377af1f4e0a42d331deb6361e1a67d9e253188575\" id:\"3c4fcfb2cf2d3662b19d36510ebab3095b367966fedf6cae8a792a3fb9df26d0\" pid:6308 exited_at:{seconds:1755044340 nanos:704829591}"