Sep 5 00:25:10.873804 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:12:48 -00 2025
Sep 5 00:25:10.873838 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5ddbf8d117777441d6c5be3659126fb3de7a68afc9e620e02a4b6c5a60c1c503
Sep 5 00:25:10.873855 kernel: BIOS-provided physical RAM map:
Sep 5 00:25:10.873863 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 5 00:25:10.873872 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 5 00:25:10.873880 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 5 00:25:10.873932 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 5 00:25:10.873940 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 5 00:25:10.873950 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 5 00:25:10.873962 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 5 00:25:10.873969 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 5 00:25:10.873978 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 5 00:25:10.873986 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 5 00:25:10.873997 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 5 00:25:10.874009 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 5 00:25:10.874024 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 5 00:25:10.874037 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 5 00:25:10.874046 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 5 00:25:10.874055 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 5 00:25:10.874064 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 5 00:25:10.874073 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 5 00:25:10.874082 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 5 00:25:10.874092 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 5 00:25:10.874101 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 5 00:25:10.874109 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 5 00:25:10.874123 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 5 00:25:10.874133 kernel: NX (Execute Disable) protection: active
Sep 5 00:25:10.874142 kernel: APIC: Static calls initialized
Sep 5 00:25:10.874151 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 5 00:25:10.874161 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 5 00:25:10.874171 kernel: extended physical RAM map:
Sep 5 00:25:10.874180 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 5 00:25:10.874189 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 5 00:25:10.874198 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 5 00:25:10.874207 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 5 00:25:10.874217 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 5 00:25:10.874231 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 5 00:25:10.874241 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 5 00:25:10.874250 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 5 00:25:10.874259 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 5 00:25:10.874275 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 5 00:25:10.874285 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 5 00:25:10.874297 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 5 00:25:10.874307 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 5 00:25:10.874317 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 5 00:25:10.874327 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 5 00:25:10.874337 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 5 00:25:10.874347 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 5 00:25:10.874357 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 5 00:25:10.874367 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 5 00:25:10.874376 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 5 00:25:10.874386 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 5 00:25:10.874399 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 5 00:25:10.874409 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 5 00:25:10.874419 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 5 00:25:10.874429 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 5 00:25:10.874439 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 5 00:25:10.874449 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 5 00:25:10.874463 kernel: efi: EFI v2.7 by EDK II
Sep 5 00:25:10.874474 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 5 00:25:10.874484 kernel: random: crng init done
Sep 5 00:25:10.874497 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 5 00:25:10.874507 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 5 00:25:10.874523 kernel: secureboot: Secure boot disabled
Sep 5 00:25:10.874533 kernel: SMBIOS 2.8 present.
Sep 5 00:25:10.874543 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 5 00:25:10.874553 kernel: DMI: Memory slots populated: 1/1
Sep 5 00:25:10.874563 kernel: Hypervisor detected: KVM
Sep 5 00:25:10.874573 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 5 00:25:10.874583 kernel: kvm-clock: using sched offset of 4892072999 cycles
Sep 5 00:25:10.874593 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 5 00:25:10.874626 kernel: tsc: Detected 2794.748 MHz processor
Sep 5 00:25:10.874639 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 5 00:25:10.874650 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 5 00:25:10.874664 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 5 00:25:10.874675 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 5 00:25:10.874691 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 5 00:25:10.874701 kernel: Using GB pages for direct mapping
Sep 5 00:25:10.874711 kernel: ACPI: Early table checksum verification disabled
Sep 5 00:25:10.874722 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 5 00:25:10.874732 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 5 00:25:10.874743 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:25:10.874753 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:25:10.874768 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 5 00:25:10.874779 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:25:10.874789 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:25:10.874799 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:25:10.874810 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:25:10.874820 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 5 00:25:10.874830 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 5 00:25:10.874841 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 5 00:25:10.874851 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 5 00:25:10.874865 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 5 00:25:10.874875 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 5 00:25:10.874920 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 5 00:25:10.874931 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 5 00:25:10.874942 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 5 00:25:10.874952 kernel: No NUMA configuration found
Sep 5 00:25:10.874962 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 5 00:25:10.874973 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 5 00:25:10.874983 kernel: Zone ranges:
Sep 5 00:25:10.875000 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 5 00:25:10.875012 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 5 00:25:10.875024 kernel: Normal empty
Sep 5 00:25:10.875034 kernel: Device empty
Sep 5 00:25:10.875044 kernel: Movable zone start for each node
Sep 5 00:25:10.875054 kernel: Early memory node ranges
Sep 5 00:25:10.875065 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 5 00:25:10.875075 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 5 00:25:10.875089 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 5 00:25:10.875103 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 5 00:25:10.875114 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 5 00:25:10.875124 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 5 00:25:10.875134 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 5 00:25:10.875144 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 5 00:25:10.875154 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 5 00:25:10.875164 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 00:25:10.875178 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 5 00:25:10.875201 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 5 00:25:10.875211 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 00:25:10.875222 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 5 00:25:10.875232 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 5 00:25:10.875243 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 5 00:25:10.875257 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 5 00:25:10.875268 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 5 00:25:10.875279 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 5 00:25:10.875290 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 5 00:25:10.875304 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 5 00:25:10.875315 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 5 00:25:10.875325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 5 00:25:10.875336 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 5 00:25:10.875347 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 5 00:25:10.875357 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 5 00:25:10.875368 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 5 00:25:10.875379 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 5 00:25:10.875389 kernel: TSC deadline timer available
Sep 5 00:25:10.875403 kernel: CPU topo: Max. logical packages: 1
Sep 5 00:25:10.875414 kernel: CPU topo: Max. logical dies: 1
Sep 5 00:25:10.875424 kernel: CPU topo: Max. dies per package: 1
Sep 5 00:25:10.875434 kernel: CPU topo: Max. threads per core: 1
Sep 5 00:25:10.875444 kernel: CPU topo: Num. cores per package: 4
Sep 5 00:25:10.875454 kernel: CPU topo: Num. threads per package: 4
Sep 5 00:25:10.875464 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 5 00:25:10.875475 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 5 00:25:10.875485 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 5 00:25:10.875495 kernel: kvm-guest: setup PV sched yield
Sep 5 00:25:10.875509 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 5 00:25:10.875520 kernel: Booting paravirtualized kernel on KVM
Sep 5 00:25:10.875531 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 5 00:25:10.875542 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 5 00:25:10.875552 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 5 00:25:10.875563 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 5 00:25:10.875573 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 5 00:25:10.875584 kernel: kvm-guest: PV spinlocks enabled
Sep 5 00:25:10.875594 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 5 00:25:10.875610 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5ddbf8d117777441d6c5be3659126fb3de7a68afc9e620e02a4b6c5a60c1c503
Sep 5 00:25:10.875626 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 00:25:10.875636 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 00:25:10.875647 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 00:25:10.875658 kernel: Fallback order for Node 0: 0
Sep 5 00:25:10.875669 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 5 00:25:10.875679 kernel: Policy zone: DMA32
Sep 5 00:25:10.875690 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 00:25:10.875705 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 00:25:10.875715 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 5 00:25:10.875726 kernel: ftrace: allocated 157 pages with 5 groups
Sep 5 00:25:10.875736 kernel: Dynamic Preempt: voluntary
Sep 5 00:25:10.875747 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 00:25:10.875758 kernel: rcu: RCU event tracing is enabled.
Sep 5 00:25:10.875769 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 00:25:10.875780 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 00:25:10.875791 kernel: Rude variant of Tasks RCU enabled.
Sep 5 00:25:10.875805 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 00:25:10.875815 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 00:25:10.875829 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 00:25:10.875840 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:25:10.875851 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:25:10.875862 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:25:10.875872 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 5 00:25:10.875910 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 00:25:10.875922 kernel: Console: colour dummy device 80x25
Sep 5 00:25:10.875937 kernel: printk: legacy console [ttyS0] enabled
Sep 5 00:25:10.875948 kernel: ACPI: Core revision 20240827
Sep 5 00:25:10.875958 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 5 00:25:10.875969 kernel: APIC: Switch to symmetric I/O mode setup
Sep 5 00:25:10.875980 kernel: x2apic enabled
Sep 5 00:25:10.875990 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 5 00:25:10.876001 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 5 00:25:10.876012 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 5 00:25:10.876022 kernel: kvm-guest: setup PV IPIs
Sep 5 00:25:10.876037 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 5 00:25:10.876047 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 5 00:25:10.876058 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 5 00:25:10.876069 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 5 00:25:10.876080 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 5 00:25:10.876091 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 5 00:25:10.876101 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 5 00:25:10.876112 kernel: Spectre V2 : Mitigation: Retpolines
Sep 5 00:25:10.876122 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 5 00:25:10.876136 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 5 00:25:10.876147 kernel: active return thunk: retbleed_return_thunk
Sep 5 00:25:10.876157 kernel: RETBleed: Mitigation: untrained return thunk
Sep 5 00:25:10.876172 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 5 00:25:10.876183 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 5 00:25:10.876193 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 5 00:25:10.876205 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 5 00:25:10.876216 kernel: active return thunk: srso_return_thunk
Sep 5 00:25:10.876231 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 5 00:25:10.876241 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 5 00:25:10.876252 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 5 00:25:10.876263 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 5 00:25:10.876273 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 5 00:25:10.876284 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 5 00:25:10.876295 kernel: Freeing SMP alternatives memory: 32K
Sep 5 00:25:10.876305 kernel: pid_max: default: 32768 minimum: 301
Sep 5 00:25:10.876316 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 5 00:25:10.876330 kernel: landlock: Up and running.
Sep 5 00:25:10.876340 kernel: SELinux: Initializing.
Sep 5 00:25:10.876351 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:25:10.876361 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:25:10.876372 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 5 00:25:10.876383 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 5 00:25:10.876394 kernel: ... version: 0
Sep 5 00:25:10.876404 kernel: ... bit width: 48
Sep 5 00:25:10.876414 kernel: ... generic registers: 6
Sep 5 00:25:10.876428 kernel: ... value mask: 0000ffffffffffff
Sep 5 00:25:10.876439 kernel: ... max period: 00007fffffffffff
Sep 5 00:25:10.876449 kernel: ... fixed-purpose events: 0
Sep 5 00:25:10.876460 kernel: ... event mask: 000000000000003f
Sep 5 00:25:10.876470 kernel: signal: max sigframe size: 1776
Sep 5 00:25:10.876481 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 00:25:10.876492 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 00:25:10.876506 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 5 00:25:10.876517 kernel: smp: Bringing up secondary CPUs ...
Sep 5 00:25:10.876531 kernel: smpboot: x86: Booting SMP configuration:
Sep 5 00:25:10.876543 kernel: .... node #0, CPUs: #1 #2 #3
Sep 5 00:25:10.876553 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 00:25:10.876563 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 5 00:25:10.876574 kernel: Memory: 2422676K/2565800K available (14336K kernel code, 2428K rwdata, 9956K rodata, 54044K init, 2924K bss, 137196K reserved, 0K cma-reserved)
Sep 5 00:25:10.876584 kernel: devtmpfs: initialized
Sep 5 00:25:10.876594 kernel: x86/mm: Memory block size: 128MB
Sep 5 00:25:10.876604 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 5 00:25:10.876615 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 5 00:25:10.876629 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 5 00:25:10.876640 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 5 00:25:10.876650 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 5 00:25:10.876660 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 5 00:25:10.876671 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 00:25:10.876682 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 00:25:10.876696 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 00:25:10.876707 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 00:25:10.876718 kernel: audit: initializing netlink subsys (disabled)
Sep 5 00:25:10.876732 kernel: audit: type=2000 audit(1757031908.687:1): state=initialized audit_enabled=0 res=1
Sep 5 00:25:10.876743 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 00:25:10.876753 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 5 00:25:10.876763 kernel: cpuidle: using governor menu
Sep 5 00:25:10.876774 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 00:25:10.876784 kernel: dca service started, version 1.12.1
Sep 5 00:25:10.876795 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 5 00:25:10.876805 kernel: PCI: Using configuration type 1 for base access
Sep 5 00:25:10.876816 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 5 00:25:10.876830 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 00:25:10.876841 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 00:25:10.876851 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 00:25:10.876862 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 00:25:10.876872 kernel: ACPI: Added _OSI(Module Device)
Sep 5 00:25:10.876921 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 00:25:10.876934 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 00:25:10.876944 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 00:25:10.876955 kernel: ACPI: Interpreter enabled
Sep 5 00:25:10.876970 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 5 00:25:10.876980 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 5 00:25:10.876991 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 5 00:25:10.877002 kernel: PCI: Using E820 reservations for host bridge windows
Sep 5 00:25:10.877012 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 5 00:25:10.877022 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 00:25:10.877331 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 00:25:10.877500 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 5 00:25:10.877670 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 5 00:25:10.877687 kernel: PCI host bridge to bus 0000:00
Sep 5 00:25:10.877850 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 5 00:25:10.878077 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 5 00:25:10.878234 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 5 00:25:10.878393 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 5 00:25:10.878543 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 5 00:25:10.878695 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 5 00:25:10.878829 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 00:25:10.879037 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 5 00:25:10.879202 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 5 00:25:10.879327 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 5 00:25:10.879447 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 5 00:25:10.879572 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 5 00:25:10.879704 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 5 00:25:10.879860 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 5 00:25:10.880012 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 5 00:25:10.880156 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 5 00:25:10.880319 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 5 00:25:10.880464 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 5 00:25:10.880594 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 5 00:25:10.880732 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 5 00:25:10.880885 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 5 00:25:10.881115 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 5 00:25:10.881248 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 5 00:25:10.881368 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 5 00:25:10.881489 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 5 00:25:10.881615 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 5 00:25:10.881771 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 5 00:25:10.881934 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 5 00:25:10.882092 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 5 00:25:10.882225 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 5 00:25:10.882355 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 5 00:25:10.882582 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 5 00:25:10.882759 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 5 00:25:10.882782 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 5 00:25:10.882800 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 5 00:25:10.882809 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 5 00:25:10.882817 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 5 00:25:10.882825 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 5 00:25:10.882833 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 5 00:25:10.882846 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 5 00:25:10.882855 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 5 00:25:10.882863 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 5 00:25:10.882872 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 5 00:25:10.882884 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 5 00:25:10.882922 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 5 00:25:10.882945 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 5 00:25:10.882973 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 5 00:25:10.882984 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 5 00:25:10.883000 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 5 00:25:10.883011 kernel: iommu: Default domain type: Translated
Sep 5 00:25:10.883021 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 5 00:25:10.883032 kernel: efivars: Registered efivars operations
Sep 5 00:25:10.883043 kernel: PCI: Using ACPI for IRQ routing
Sep 5 00:25:10.883053 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 5 00:25:10.883064 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 5 00:25:10.883074 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 5 00:25:10.883110 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 5 00:25:10.883127 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 5 00:25:10.883138 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 5 00:25:10.883148 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 5 00:25:10.883159 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 5 00:25:10.883170 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 5 00:25:10.883336 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 5 00:25:10.883499 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 5 00:25:10.883663 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 5 00:25:10.883687 kernel: vgaarb: loaded
Sep 5 00:25:10.883698 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 5 00:25:10.883709 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 5 00:25:10.883720 kernel: clocksource: Switched to clocksource kvm-clock
Sep 5 00:25:10.883730 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 00:25:10.883741 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 00:25:10.883753 kernel: pnp: PnP ACPI init
Sep 5 00:25:10.884014 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 5 00:25:10.884039 kernel: pnp: PnP ACPI: found 6 devices
Sep 5 00:25:10.884048 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 5 00:25:10.884056 kernel: NET: Registered PF_INET protocol family
Sep 5 00:25:10.884065 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 00:25:10.884073 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 00:25:10.884081 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 00:25:10.884089 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 00:25:10.884098 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 00:25:10.884106 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 00:25:10.884117 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:25:10.884126 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:25:10.884135 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 00:25:10.884143 kernel: NET: Registered PF_XDP protocol family
Sep 5 00:25:10.884274 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 5 00:25:10.884410 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 5 00:25:10.884549 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 5 00:25:10.884707 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 5 00:25:10.884920 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 5 00:25:10.885071 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 5 00:25:10.885213 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 5 00:25:10.885332 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 5 00:25:10.885343 kernel: PCI: CLS 0 bytes, default 64
Sep 5 00:25:10.885352 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 5 00:25:10.885361 kernel: Initialise system trusted keyrings
Sep 5 00:25:10.885375 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 00:25:10.885383 kernel: Key type asymmetric registered
Sep 5 00:25:10.885391 kernel: Asymmetric key parser 'x509' registered
Sep 5 00:25:10.885402 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 00:25:10.885411 kernel: io scheduler mq-deadline registered
Sep 5 00:25:10.885419 kernel: io scheduler kyber registered
Sep 5 00:25:10.885428 kernel: io scheduler bfq registered
Sep 5 00:25:10.885439 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 5 00:25:10.885448 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 5 00:25:10.885457 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 5 00:25:10.885465 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 5 00:25:10.885474 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 00:25:10.885482 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 5 00:25:10.885491 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 5 00:25:10.885499 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 5 00:25:10.885507 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 5 00:25:10.885754 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 5 00:25:10.885770 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 5 00:25:10.885986 kernel: rtc_cmos 00:04: registered as rtc0
Sep 5 00:25:10.886214 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T00:25:10 UTC
(1757031910) Sep 5 00:25:10.886391 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 5 00:25:10.886409 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 5 00:25:10.886422 kernel: efifb: probing for efifb Sep 5 00:25:10.886436 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 5 00:25:10.886450 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 5 00:25:10.886459 kernel: efifb: scrolling: redraw Sep 5 00:25:10.886469 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 5 00:25:10.886478 kernel: Console: switching to colour frame buffer device 160x50 Sep 5 00:25:10.886486 kernel: fb0: EFI VGA frame buffer device Sep 5 00:25:10.886495 kernel: pstore: Using crash dump compression: deflate Sep 5 00:25:10.886504 kernel: pstore: Registered efi_pstore as persistent store backend Sep 5 00:25:10.886512 kernel: NET: Registered PF_INET6 protocol family Sep 5 00:25:10.886520 kernel: Segment Routing with IPv6 Sep 5 00:25:10.886531 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 00:25:10.886540 kernel: NET: Registered PF_PACKET protocol family Sep 5 00:25:10.886548 kernel: Key type dns_resolver registered Sep 5 00:25:10.886556 kernel: IPI shorthand broadcast: enabled Sep 5 00:25:10.886564 kernel: sched_clock: Marking stable (3720002229, 160000119)->(3895337558, -15335210) Sep 5 00:25:10.886575 kernel: registered taskstats version 1 Sep 5 00:25:10.886587 kernel: Loading compiled-in X.509 certificates Sep 5 00:25:10.886598 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 55c9ce6358d6eed45ca94030a2308729ee6a249f' Sep 5 00:25:10.886609 kernel: Demotion targets for Node 0: null Sep 5 00:25:10.886624 kernel: Key type .fscrypt registered Sep 5 00:25:10.886635 kernel: Key type fscrypt-provisioning registered Sep 5 00:25:10.886646 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 5 00:25:10.886657 kernel: ima: Allocated hash algorithm: sha1 Sep 5 00:25:10.886668 kernel: ima: No architecture policies found Sep 5 00:25:10.886679 kernel: clk: Disabling unused clocks Sep 5 00:25:10.886689 kernel: Warning: unable to open an initial console. Sep 5 00:25:10.886700 kernel: Freeing unused kernel image (initmem) memory: 54044K Sep 5 00:25:10.886712 kernel: Write protecting the kernel read-only data: 24576k Sep 5 00:25:10.886727 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Sep 5 00:25:10.886738 kernel: Run /init as init process Sep 5 00:25:10.886749 kernel: with arguments: Sep 5 00:25:10.886759 kernel: /init Sep 5 00:25:10.886770 kernel: with environment: Sep 5 00:25:10.886780 kernel: HOME=/ Sep 5 00:25:10.886790 kernel: TERM=linux Sep 5 00:25:10.886800 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 00:25:10.886812 systemd[1]: Successfully made /usr/ read-only. Sep 5 00:25:10.886831 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 5 00:25:10.886843 systemd[1]: Detected virtualization kvm. Sep 5 00:25:10.886853 systemd[1]: Detected architecture x86-64. Sep 5 00:25:10.886864 systemd[1]: Running in initrd. Sep 5 00:25:10.886874 systemd[1]: No hostname configured, using default hostname. Sep 5 00:25:10.886885 systemd[1]: Hostname set to . Sep 5 00:25:10.886925 systemd[1]: Initializing machine ID from VM UUID. Sep 5 00:25:10.886941 systemd[1]: Queued start job for default target initrd.target. Sep 5 00:25:10.886952 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 5 00:25:10.886963 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:25:10.886976 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 5 00:25:10.886987 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:25:10.886999 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 00:25:10.887011 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 00:25:10.887027 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 00:25:10.887039 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 00:25:10.887050 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:25:10.887062 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:25:10.887073 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:25:10.887084 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:25:10.887095 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:25:10.887107 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:25:10.887121 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:25:10.887132 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:25:10.887144 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 00:25:10.887165 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 5 00:25:10.887180 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:25:10.887191 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 5 00:25:10.887203 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:25:10.887214 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:25:10.887225 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 00:25:10.887252 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:25:10.887275 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 00:25:10.887287 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 5 00:25:10.887299 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 00:25:10.887311 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:25:10.887322 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:25:10.887334 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:25:10.887345 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 00:25:10.887365 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:25:10.887377 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 00:25:10.887389 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 00:25:10.887466 systemd-journald[220]: Collecting audit messages is disabled. Sep 5 00:25:10.887506 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:25:10.887519 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:25:10.887542 systemd-journald[220]: Journal started Sep 5 00:25:10.887577 systemd-journald[220]: Runtime Journal (/run/log/journal/993098421d394c63af41f81a1837e2e3) is 6M, max 48.4M, 42.4M free. 
Sep 5 00:25:10.871679 systemd-modules-load[221]: Inserted module 'overlay' Sep 5 00:25:10.889947 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:25:10.896049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:25:10.902295 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 00:25:10.903989 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:25:10.906067 kernel: Bridge firewalling registered Sep 5 00:25:10.904509 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 5 00:25:10.908596 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:25:10.911679 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:25:10.912128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:25:10.923756 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:25:10.931057 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 5 00:25:10.936433 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:25:10.937352 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:25:10.939612 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:25:10.958175 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:25:10.960818 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 00:25:10.995676 systemd-resolved[256]: Positive Trust Anchors: Sep 5 00:25:10.995700 systemd-resolved[256]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:25:10.995728 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:25:10.998488 systemd-resolved[256]: Defaulting to hostname 'linux'. Sep 5 00:25:10.999772 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:25:11.007240 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:25:11.014443 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5ddbf8d117777441d6c5be3659126fb3de7a68afc9e620e02a4b6c5a60c1c503 Sep 5 00:25:11.138939 kernel: SCSI subsystem initialized Sep 5 00:25:11.148931 kernel: Loading iSCSI transport class v2.0-870. Sep 5 00:25:11.159930 kernel: iscsi: registered transport (tcp) Sep 5 00:25:11.188938 kernel: iscsi: registered transport (qla4xxx) Sep 5 00:25:11.189015 kernel: QLogic iSCSI HBA Driver Sep 5 00:25:11.213210 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 00:25:11.240636 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 5 00:25:11.242220 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:25:11.312319 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 00:25:11.315195 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 00:25:11.384957 kernel: raid6: avx2x4 gen() 28971 MB/s Sep 5 00:25:11.401920 kernel: raid6: avx2x2 gen() 30833 MB/s Sep 5 00:25:11.419015 kernel: raid6: avx2x1 gen() 25694 MB/s Sep 5 00:25:11.419098 kernel: raid6: using algorithm avx2x2 gen() 30833 MB/s Sep 5 00:25:11.437003 kernel: raid6: .... xor() 19811 MB/s, rmw enabled Sep 5 00:25:11.437112 kernel: raid6: using avx2x2 recovery algorithm Sep 5 00:25:11.457942 kernel: xor: automatically using best checksumming function avx Sep 5 00:25:11.631958 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 00:25:11.641484 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:25:11.644767 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:25:11.679258 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 5 00:25:11.685304 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:25:11.688681 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 00:25:11.716486 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation Sep 5 00:25:11.752141 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:25:11.753672 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:25:11.861453 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:25:11.865790 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 5 00:25:11.899920 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 5 00:25:11.902445 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 5 00:25:11.910033 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 5 00:25:11.910088 kernel: GPT:9289727 != 19775487 Sep 5 00:25:11.910100 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 5 00:25:11.910110 kernel: GPT:9289727 != 19775487 Sep 5 00:25:11.910120 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 5 00:25:11.910130 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:25:11.918918 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 5 00:25:11.920928 kernel: cryptd: max_cpu_qlen set to 1000 Sep 5 00:25:11.929907 kernel: AES CTR mode by8 optimization enabled Sep 5 00:25:11.949993 kernel: libata version 3.00 loaded. Sep 5 00:25:11.951027 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:25:11.951152 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:25:11.953737 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:25:11.958220 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:25:11.958640 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 5 00:25:11.972315 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:25:11.972448 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:25:11.977132 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 5 00:25:11.979923 kernel: ahci 0000:00:1f.2: version 3.0 Sep 5 00:25:11.982560 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 5 00:25:11.982596 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 5 00:25:11.982768 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 5 00:25:11.982947 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 5 00:25:11.986908 kernel: scsi host0: ahci Sep 5 00:25:11.990717 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 5 00:25:11.991262 kernel: scsi host1: ahci Sep 5 00:25:11.991497 kernel: scsi host2: ahci Sep 5 00:25:11.993329 kernel: scsi host3: ahci Sep 5 00:25:11.993778 kernel: scsi host4: ahci Sep 5 00:25:11.993995 kernel: scsi host5: ahci Sep 5 00:25:11.995190 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 5 00:25:11.996230 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 5 00:25:11.996253 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 5 00:25:11.997965 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 5 00:25:11.997990 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 5 00:25:11.999851 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 5 00:25:12.008419 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 5 00:25:12.021434 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:25:12.037574 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:25:12.044484 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Sep 5 00:25:12.045714 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 5 00:25:12.047040 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 00:25:12.080748 disk-uuid[636]: Primary Header is updated. Sep 5 00:25:12.080748 disk-uuid[636]: Secondary Entries is updated. Sep 5 00:25:12.080748 disk-uuid[636]: Secondary Header is updated. Sep 5 00:25:12.084926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:25:12.089916 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:25:12.308000 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 5 00:25:12.308071 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 5 00:25:12.308082 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 5 00:25:12.308941 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 5 00:25:12.309926 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 5 00:25:12.310916 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 5 00:25:12.312168 kernel: ata3.00: LPM support broken, forcing max_power Sep 5 00:25:12.312193 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 5 00:25:12.312204 kernel: ata3.00: applying bridge limits Sep 5 00:25:12.313314 kernel: ata3.00: LPM support broken, forcing max_power Sep 5 00:25:12.313327 kernel: ata3.00: configured for UDMA/100 Sep 5 00:25:12.315924 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 5 00:25:12.374457 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 5 00:25:12.374677 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 5 00:25:12.388937 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 5 00:25:12.666455 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 00:25:12.669090 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 5 00:25:12.671347 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:25:12.673500 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:25:12.676501 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 00:25:12.713108 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:25:13.090931 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:25:13.091392 disk-uuid[637]: The operation has completed successfully. Sep 5 00:25:13.129435 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 00:25:13.129565 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 00:25:13.154344 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 00:25:13.179400 sh[666]: Success Sep 5 00:25:13.197244 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 5 00:25:13.197280 kernel: device-mapper: uevent: version 1.0.3 Sep 5 00:25:13.198276 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 5 00:25:13.206910 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 5 00:25:13.238111 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 00:25:13.240398 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 00:25:13.258654 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 5 00:25:13.262910 kernel: BTRFS: device fsid bbfaff22-5589-4cab-94aa-ce3e6be0b7e8 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (678) Sep 5 00:25:13.262934 kernel: BTRFS info (device dm-0): first mount of filesystem bbfaff22-5589-4cab-94aa-ce3e6be0b7e8 Sep 5 00:25:13.264318 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:25:13.269293 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 00:25:13.269319 kernel: BTRFS info (device dm-0): enabling free space tree Sep 5 00:25:13.270577 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 00:25:13.271242 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 5 00:25:13.271500 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 5 00:25:13.272362 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 5 00:25:13.273110 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 5 00:25:13.303934 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711) Sep 5 00:25:13.306157 kernel: BTRFS info (device vda6): first mount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb Sep 5 00:25:13.306188 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:25:13.309925 kernel: BTRFS info (device vda6): turning on async discard Sep 5 00:25:13.309963 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 00:25:13.315931 kernel: BTRFS info (device vda6): last unmount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb Sep 5 00:25:13.317842 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 00:25:13.320734 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 5 00:25:13.615232 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:25:14.043165 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:25:14.051376 ignition[754]: Ignition 2.21.0 Sep 5 00:25:14.051390 ignition[754]: Stage: fetch-offline Sep 5 00:25:14.051471 ignition[754]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:25:14.051484 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:25:14.051638 ignition[754]: parsed url from cmdline: "" Sep 5 00:25:14.051642 ignition[754]: no config URL provided Sep 5 00:25:14.051647 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 00:25:14.051659 ignition[754]: no config at "/usr/lib/ignition/user.ign" Sep 5 00:25:14.051692 ignition[754]: op(1): [started] loading QEMU firmware config module Sep 5 00:25:14.051698 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 5 00:25:14.062302 ignition[754]: op(1): [finished] loading QEMU firmware config module Sep 5 00:25:14.100429 systemd-networkd[853]: lo: Link UP Sep 5 00:25:14.100440 systemd-networkd[853]: lo: Gained carrier Sep 5 00:25:14.102177 systemd-networkd[853]: Enumeration completed Sep 5 00:25:14.102286 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:25:14.102459 systemd[1]: Reached target network.target - Network. Sep 5 00:25:14.102588 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:25:14.102593 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:25:14.104925 systemd-networkd[853]: eth0: Link UP Sep 5 00:25:14.105520 systemd-networkd[853]: eth0: Gained carrier Sep 5 00:25:14.105530 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 5 00:25:14.117344 ignition[754]: parsing config with SHA512: eadc3bf3f88f106b18ad46a24c1c77d1b609d617956946cfa91cf06d286e141792573dc6b9defed5e87de97e05381ff228941bf88f6a8e05839e719e348b8b86 Sep 5 00:25:14.128310 unknown[754]: fetched base config from "system" Sep 5 00:25:14.128326 unknown[754]: fetched user config from "qemu" Sep 5 00:25:14.129100 ignition[754]: fetch-offline: fetch-offline passed Sep 5 00:25:14.129180 ignition[754]: Ignition finished successfully Sep 5 00:25:14.132025 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:25:14.132970 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:25:14.135181 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 5 00:25:14.136292 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 5 00:25:14.185220 ignition[860]: Ignition 2.21.0 Sep 5 00:25:14.185235 ignition[860]: Stage: kargs Sep 5 00:25:14.185424 ignition[860]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:25:14.185436 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:25:14.190496 ignition[860]: kargs: kargs passed Sep 5 00:25:14.192529 ignition[860]: Ignition finished successfully Sep 5 00:25:14.196971 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 5 00:25:14.199133 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 5 00:25:14.238383 ignition[869]: Ignition 2.21.0 Sep 5 00:25:14.238400 ignition[869]: Stage: disks Sep 5 00:25:14.238670 ignition[869]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:25:14.238687 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:25:14.243050 ignition[869]: disks: disks passed Sep 5 00:25:14.243143 ignition[869]: Ignition finished successfully Sep 5 00:25:14.247644 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 5 00:25:14.249018 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 00:25:14.250910 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 00:25:14.251126 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:25:14.251436 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:25:14.251750 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:25:14.259209 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 00:25:14.282504 systemd-fsck[879]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 5 00:25:14.580080 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 00:25:14.581311 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 00:25:14.734919 kernel: EXT4-fs (vda9): mounted filesystem a99dab41-6cdd-4037-a941-eeee48403b9e r/w with ordered data mode. Quota mode: none. Sep 5 00:25:14.735385 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 00:25:14.736093 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 00:25:14.739655 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:25:14.740769 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 5 00:25:14.742428 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 5 00:25:14.742487 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 5 00:25:14.742523 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:25:14.774636 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 00:25:14.778297 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 00:25:14.781933 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887) Sep 5 00:25:14.784653 kernel: BTRFS info (device vda6): first mount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb Sep 5 00:25:14.784685 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:25:14.787942 kernel: BTRFS info (device vda6): turning on async discard Sep 5 00:25:14.788542 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 00:25:14.789430 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 00:25:14.819533 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 00:25:14.824645 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory Sep 5 00:25:14.830022 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 00:25:14.834045 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 00:25:14.938659 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 00:25:14.941469 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 5 00:25:14.943211 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 5 00:25:14.985438 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 5 00:25:14.986752 kernel: BTRFS info (device vda6): last unmount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb Sep 5 00:25:15.000105 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 5 00:25:15.024691 ignition[1001]: INFO : Ignition 2.21.0 Sep 5 00:25:15.024691 ignition[1001]: INFO : Stage: mount Sep 5 00:25:15.026730 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:25:15.026730 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:25:15.028857 ignition[1001]: INFO : mount: mount passed Sep 5 00:25:15.028857 ignition[1001]: INFO : Ignition finished successfully Sep 5 00:25:15.033038 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 5 00:25:15.035104 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 5 00:25:15.589227 systemd-networkd[853]: eth0: Gained IPv6LL Sep 5 00:25:15.738161 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:25:15.764609 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1013) Sep 5 00:25:15.764667 kernel: BTRFS info (device vda6): first mount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb Sep 5 00:25:15.764683 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:25:15.768920 kernel: BTRFS info (device vda6): turning on async discard Sep 5 00:25:15.768980 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 00:25:15.770949 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 5 00:25:15.807150 ignition[1030]: INFO : Ignition 2.21.0
Sep 5 00:25:15.807150 ignition[1030]: INFO : Stage: files
Sep 5 00:25:15.809290 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:25:15.809290 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:25:15.809290 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 00:25:15.813540 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 00:25:15.813540 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 00:25:15.819211 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 00:25:15.821008 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 00:25:15.823114 unknown[1030]: wrote ssh authorized keys file for user: core
Sep 5 00:25:15.824619 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 00:25:15.826557 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 5 00:25:15.828986 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 5 00:25:16.032395 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 00:25:16.267683 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 5 00:25:16.267683 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 00:25:16.271764 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 00:25:16.271764 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:25:16.275581 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:25:16.275581 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:25:16.275581 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:25:16.275581 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:25:16.275581 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:25:16.315539 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:25:16.317652 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:25:16.317652 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 5 00:25:16.347162 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 5 00:25:16.347162 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 5 00:25:16.351768 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 5 00:25:16.769094 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 5 00:25:17.380006 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 5 00:25:17.380006 ignition[1030]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 5 00:25:17.501309 ignition[1030]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:25:17.914141 ignition[1030]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:25:17.914141 ignition[1030]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 5 00:25:17.914141 ignition[1030]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 5 00:25:17.918735 ignition[1030]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:25:17.918735 ignition[1030]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:25:17.918735 ignition[1030]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 5 00:25:17.918735 ignition[1030]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:25:17.947612 ignition[1030]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:25:17.956082 ignition[1030]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:25:17.958070 ignition[1030]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:25:17.958070 ignition[1030]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 00:25:17.958070 ignition[1030]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 00:25:17.958070 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:25:17.958070 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:25:17.958070 ignition[1030]: INFO : files: files passed
Sep 5 00:25:17.958070 ignition[1030]: INFO : Ignition finished successfully
Sep 5 00:25:17.959677 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 00:25:17.961587 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 00:25:17.966927 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 00:25:17.989205 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 00:25:17.989379 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 00:25:17.993583 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 5 00:25:17.997479 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:25:18.008753 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:25:18.010362 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:25:18.013777 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:25:18.015364 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
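The Ignition "files" stage ops above (user creation, file downloads, the extension symlink, unit writes, presets) correspond one-to-one to entries in a user-provided config. A minimal Butane sketch of the kind of config that would produce similar ops is shown below; this is a hypothetical reconstruction, not the config actually used in this boot — the ssh key, unit contents, and omitted files are placeholders.

```yaml
# Hypothetical Butane config (illustrative sketch, not from the log).
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core                          # ensureUsers: op(1), op(2)
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... placeholder # placeholder key
storage:
  files:
    - path: /opt/helm-v3.17.0-linux-amd64.tar.gz          # createFiles: op(3)
      contents:
        source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw  # op(a)
      contents:
        source: https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw                # op(9)
      target: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service     # op(b): unit written, op(11): preset enabled
      enabled: true
      contents: |
        [Unit]
        Description=Unpack helm (placeholder contents)
    - name: coreos-metadata.service  # op(d): unit written, op(f): preset disabled
      enabled: false
```

Butane transpiles such a config into the Ignition JSON that the initrd consumes; the per-op log lines (op(1), op(2), ...) are Ignition reporting each config entry as it is applied to /sysroot.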
Sep 5 00:25:18.018919 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 00:25:18.089586 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 00:25:18.089738 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 00:25:18.091112 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 00:25:18.093217 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 00:25:18.096191 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 00:25:18.097270 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 00:25:18.134997 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:25:18.138115 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 00:25:18.163215 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:25:18.163402 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:25:18.167511 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 00:25:18.169852 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 00:25:18.170018 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:25:18.173784 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 00:25:18.173979 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 00:25:18.176241 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 00:25:18.176601 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:25:18.177012 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 00:25:18.183774 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 5 00:25:18.185255 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 00:25:18.185607 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:25:18.186020 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 00:25:18.186515 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 00:25:18.186840 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 00:25:18.187223 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 00:25:18.187371 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:25:18.197304 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:25:18.197701 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:25:18.198181 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 00:25:18.203453 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:25:18.204471 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 00:25:18.204643 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:25:18.209584 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 00:25:18.209732 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:25:18.210800 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 00:25:18.214393 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 00:25:18.219002 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:25:18.219206 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 00:25:18.222619 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 00:25:18.223486 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 00:25:18.223585 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:25:18.225187 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 00:25:18.225274 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:25:18.226874 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 00:25:18.227015 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:25:18.228569 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 00:25:18.228686 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 00:25:18.233398 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 00:25:18.236369 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 00:25:18.237997 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 00:25:18.238137 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:25:18.239135 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 00:25:18.239260 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:25:18.251649 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 00:25:18.253072 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 00:25:18.274859 ignition[1085]: INFO : Ignition 2.21.0
Sep 5 00:25:18.274859 ignition[1085]: INFO : Stage: umount
Sep 5 00:25:18.276782 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:25:18.276782 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:25:18.277120 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 00:25:18.281837 ignition[1085]: INFO : umount: umount passed
Sep 5 00:25:18.283108 ignition[1085]: INFO : Ignition finished successfully
Sep 5 00:25:18.283767 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 00:25:18.283912 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 5 00:25:18.286765 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 00:25:18.286906 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 00:25:18.287983 systemd[1]: Stopped target network.target - Network.
Sep 5 00:25:18.288241 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 00:25:18.288291 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 00:25:18.288585 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 00:25:18.288632 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 00:25:18.288914 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 00:25:18.288964 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 00:25:18.294225 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 00:25:18.294270 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 00:25:18.294531 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 00:25:18.294574 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 5 00:25:18.295018 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 00:25:18.299443 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 00:25:18.310559 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 00:25:18.310742 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 00:25:18.317484 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 5 00:25:18.317830 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 00:25:18.317989 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 00:25:18.322488 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 5 00:25:18.323394 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 5 00:25:18.323858 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 00:25:18.323989 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:25:18.328828 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 00:25:18.329090 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 00:25:18.329160 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:25:18.329473 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 00:25:18.329535 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:25:18.335541 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 00:25:18.335607 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:25:18.336567 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 00:25:18.336622 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:25:18.340465 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:25:18.346198 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 5 00:25:18.346274 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 5 00:25:18.363469 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 00:25:18.363730 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:25:18.366314 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 00:25:18.366400 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:25:18.368088 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 00:25:18.368140 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:25:18.375728 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 00:25:18.375785 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:25:18.377986 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 00:25:18.378036 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:25:18.381023 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:25:18.381075 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:25:18.386266 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 00:25:18.389555 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 5 00:25:18.389623 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 00:25:18.393088 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 00:25:18.393156 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:25:18.396471 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 5 00:25:18.396522 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:25:18.399992 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 00:25:18.400040 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:25:18.402479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:25:18.402529 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:25:18.406799 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 5 00:25:18.406858 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 5 00:25:18.406927 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 5 00:25:18.406986 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 5 00:25:18.407326 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 00:25:18.408138 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 00:25:18.417430 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 00:25:18.417606 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 00:25:18.435554 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 00:25:18.439626 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 00:25:18.468831 systemd[1]: Switching root.
Sep 5 00:25:18.513318 systemd-journald[220]: Journal stopped
Sep 5 00:25:19.815767 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 5 00:25:19.815840 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 00:25:19.815867 kernel: SELinux: policy capability open_perms=1
Sep 5 00:25:19.815995 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 00:25:19.816030 kernel: SELinux: policy capability always_check_network=0
Sep 5 00:25:19.816044 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 00:25:19.816066 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 00:25:19.816081 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 00:25:19.816235 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 00:25:19.816261 kernel: SELinux: policy capability userspace_initial_context=0
Sep 5 00:25:19.816277 kernel: audit: type=1403 audit(1757031918.876:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 00:25:19.816294 systemd[1]: Successfully loaded SELinux policy in 68.407ms.
Sep 5 00:25:19.816331 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.666ms.
Sep 5 00:25:19.816350 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 00:25:19.816370 systemd[1]: Detected virtualization kvm.
Sep 5 00:25:19.816386 systemd[1]: Detected architecture x86-64.
Sep 5 00:25:19.816401 systemd[1]: Detected first boot.
Sep 5 00:25:19.816417 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:25:19.816433 zram_generator::config[1130]: No configuration found.
Sep 5 00:25:19.816450 kernel: Guest personality initialized and is inactive
Sep 5 00:25:19.816466 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 5 00:25:19.816488 kernel: Initialized host personality
Sep 5 00:25:19.816503 kernel: NET: Registered PF_VSOCK protocol family
Sep 5 00:25:19.816519 systemd[1]: Populated /etc with preset unit settings.
Sep 5 00:25:19.816538 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 5 00:25:19.816556 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 5 00:25:19.816573 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 5 00:25:19.816597 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 5 00:25:19.816614 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 00:25:19.816641 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 00:25:19.816666 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 00:25:19.816683 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 00:25:19.816700 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 00:25:19.816717 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 00:25:19.816733 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 00:25:19.816750 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 00:25:19.816766 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:25:19.816787 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:25:19.816810 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 00:25:19.816826 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 00:25:19.816842 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 00:25:19.816861 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:25:19.816880 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 5 00:25:19.816931 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:25:19.816949 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:25:19.816966 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 5 00:25:19.816992 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 5 00:25:19.817010 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:25:19.817029 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 00:25:19.817046 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:25:19.817062 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:25:19.817078 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:25:19.817094 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:25:19.817111 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 00:25:19.817128 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 00:25:19.817152 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 5 00:25:19.817169 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:25:19.817185 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:25:19.817202 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:25:19.817217 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 00:25:19.817233 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 00:25:19.817250 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 00:25:19.817266 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 00:25:19.817283 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:25:19.817308 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 00:25:19.817325 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 00:25:19.817342 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 5 00:25:19.817359 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 00:25:19.817376 systemd[1]: Reached target machines.target - Containers.
Sep 5 00:25:19.817393 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 5 00:25:19.817409 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:25:19.817425 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:25:19.817450 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 5 00:25:19.817467 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:25:19.817483 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 00:25:19.817499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:25:19.817516 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 5 00:25:19.817532 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:25:19.817548 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 00:25:19.817565 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 5 00:25:19.817582 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 5 00:25:19.817607 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 5 00:25:19.817624 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 5 00:25:19.817651 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 00:25:19.817668 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:25:19.817684 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:25:19.817701 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 00:25:19.817719 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 5 00:25:19.817736 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 5 00:25:19.817762 kernel: loop: module loaded
Sep 5 00:25:19.817785 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:25:19.817808 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 5 00:25:19.817826 systemd[1]: Stopped verity-setup.service.
Sep 5 00:25:19.817844 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:25:19.817860 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 5 00:25:19.817877 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 5 00:25:19.817921 kernel: fuse: init (API version 7.41)
Sep 5 00:25:19.817939 systemd[1]: Mounted media.mount - External Media Directory.
Sep 5 00:25:19.817956 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 5 00:25:19.817972 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 5 00:25:19.817998 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 5 00:25:19.818015 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:25:19.818031 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 00:25:19.818047 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 5 00:25:19.818064 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:25:19.818081 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:25:19.818097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:25:19.818114 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:25:19.818140 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:25:19.818157 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:25:19.818174 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 00:25:19.818191 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 5 00:25:19.818207 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 00:25:19.818223 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 5 00:25:19.818240 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 00:25:19.818257 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:25:19.818273 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 5 00:25:19.818326 systemd-journald[1194]: Collecting audit messages is disabled. Sep 5 00:25:19.818356 kernel: ACPI: bus type drm_connector registered Sep 5 00:25:19.818372 systemd-journald[1194]: Journal started Sep 5 00:25:19.818409 systemd-journald[1194]: Runtime Journal (/run/log/journal/993098421d394c63af41f81a1837e2e3) is 6M, max 48.4M, 42.4M free. Sep 5 00:25:19.494668 systemd[1]: Queued start job for default target multi-user.target. Sep 5 00:25:19.515080 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 5 00:25:19.515588 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 00:25:19.824912 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 00:25:19.824960 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:25:19.834918 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 00:25:19.837939 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:25:19.856919 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 00:25:19.860067 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:25:19.872520 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 00:25:19.872705 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 00:25:19.878926 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 5 00:25:19.880512 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:25:19.880767 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:25:19.903392 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 00:25:19.903618 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 00:25:19.908326 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:25:19.909100 kernel: loop0: detected capacity change from 0 to 224512 Sep 5 00:25:19.911460 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 00:25:19.913425 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 5 00:25:19.916176 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 00:25:19.919254 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 00:25:19.935361 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 00:25:19.935769 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Sep 5 00:25:19.935800 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Sep 5 00:25:19.943038 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 00:25:19.947120 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 00:25:19.953173 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 00:25:19.954378 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 5 00:25:19.960340 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:25:19.963183 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:25:19.965445 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Sep 5 00:25:19.968670 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 00:25:19.971706 systemd-journald[1194]: Time spent on flushing to /var/log/journal/993098421d394c63af41f81a1837e2e3 is 30.530ms for 1085 entries. Sep 5 00:25:19.971706 systemd-journald[1194]: System Journal (/var/log/journal/993098421d394c63af41f81a1837e2e3) is 8M, max 195.6M, 187.6M free. Sep 5 00:25:20.015480 systemd-journald[1194]: Received client request to flush runtime journal. Sep 5 00:25:20.015562 kernel: loop1: detected capacity change from 0 to 111000 Sep 5 00:25:20.015596 kernel: loop2: detected capacity change from 0 to 128016 Sep 5 00:25:19.976372 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 00:25:19.992150 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 5 00:25:19.994207 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:25:20.020096 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 00:25:20.039675 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 00:25:20.044336 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:25:20.049929 kernel: loop3: detected capacity change from 0 to 224512 Sep 5 00:25:20.060973 kernel: loop4: detected capacity change from 0 to 111000 Sep 5 00:25:20.075833 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Sep 5 00:25:20.075859 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Sep 5 00:25:20.076928 kernel: loop5: detected capacity change from 0 to 128016 Sep 5 00:25:20.082226 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:25:20.090332 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 5 00:25:20.091957 (sd-merge)[1274]: Merged extensions into '/usr'. 
Sep 5 00:25:20.099931 systemd[1]: Reload requested from client PID 1229 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 00:25:20.100403 systemd[1]: Reloading... Sep 5 00:25:20.183599 zram_generator::config[1304]: No configuration found. Sep 5 00:25:20.283964 ldconfig[1213]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 00:25:20.408679 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 00:25:20.408864 systemd[1]: Reloading finished in 307 ms. Sep 5 00:25:20.432479 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 00:25:20.434194 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 00:25:20.448681 systemd[1]: Starting ensure-sysext.service... Sep 5 00:25:20.451028 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:25:20.463570 systemd[1]: Reload requested from client PID 1339 ('systemctl') (unit ensure-sysext.service)... Sep 5 00:25:20.463593 systemd[1]: Reloading... Sep 5 00:25:20.472581 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 5 00:25:20.472800 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 5 00:25:20.473183 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 00:25:20.473460 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 00:25:20.474526 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 00:25:20.474810 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Sep 5 00:25:20.474882 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. 
Sep 5 00:25:20.479579 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:25:20.479587 systemd-tmpfiles[1340]: Skipping /boot Sep 5 00:25:20.490603 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:25:20.490626 systemd-tmpfiles[1340]: Skipping /boot Sep 5 00:25:20.524939 zram_generator::config[1367]: No configuration found. Sep 5 00:25:20.725402 systemd[1]: Reloading finished in 261 ms. Sep 5 00:25:20.746625 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 00:25:20.763658 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:25:20.772472 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 00:25:20.774879 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 00:25:20.777503 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 00:25:20.791099 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:25:20.794970 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:25:20.798486 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 00:25:20.804065 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:25:20.804306 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:25:20.808138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:25:20.811798 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:25:20.818640 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 5 00:25:20.820281 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:25:20.820642 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 00:25:20.825254 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 00:25:20.825371 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:25:20.833847 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 00:25:20.836481 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:25:20.837344 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:25:20.845620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:25:20.846042 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:25:20.848650 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:25:20.849058 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:25:20.859731 systemd-udevd[1410]: Using default interface naming scheme 'v255'. Sep 5 00:25:20.862006 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 00:25:20.868474 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:25:20.868941 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:25:20.871167 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 5 00:25:20.876189 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:25:20.880344 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:25:20.885334 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:25:20.886832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:25:20.887021 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 00:25:20.891356 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 00:25:20.892641 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:25:20.894418 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 00:25:20.896544 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:25:20.898839 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:25:20.899107 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:25:20.901528 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:25:20.901760 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:25:20.903559 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:25:20.904414 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:25:20.908539 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:25:20.908774 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:25:20.921937 systemd[1]: Finished ensure-sysext.service. 
Sep 5 00:25:20.935186 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:25:20.936687 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:25:20.936802 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:25:20.943166 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 00:25:20.947802 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:25:20.948476 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 00:25:21.043868 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 5 00:25:21.469919 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 5 00:25:21.482970 kernel: ACPI: button: Power Button [PWRF] Sep 5 00:25:21.513777 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:25:21.516995 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 00:25:21.518993 augenrules[1491]: No rules Sep 5 00:25:21.554925 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 00:25:21.555523 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:25:21.555961 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 00:25:22.066924 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 00:25:22.068049 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 5 00:25:22.099580 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 5 00:25:22.100028 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 5 00:25:22.100395 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 5 00:25:22.140852 kernel: kvm_amd: TSC scaling supported Sep 5 00:25:22.140998 kernel: kvm_amd: Nested Virtualization enabled Sep 5 00:25:22.141015 kernel: kvm_amd: Nested Paging enabled Sep 5 00:25:22.141030 kernel: kvm_amd: LBR virtualization supported Sep 5 00:25:22.141108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:25:22.142072 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 5 00:25:22.142096 kernel: kvm_amd: Virtual GIF supported Sep 5 00:25:22.170922 kernel: EDAC MC: Ver: 3.0.0 Sep 5 00:25:22.224842 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:25:22.268750 systemd-networkd[1473]: lo: Link UP Sep 5 00:25:22.268760 systemd-networkd[1473]: lo: Gained carrier Sep 5 00:25:22.270587 systemd-networkd[1473]: Enumeration completed Sep 5 00:25:22.270732 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:25:22.271014 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:25:22.271019 systemd-networkd[1473]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:25:22.271675 systemd-networkd[1473]: eth0: Link UP Sep 5 00:25:22.272100 systemd-networkd[1473]: eth0: Gained carrier Sep 5 00:25:22.272114 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:25:22.274499 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Sep 5 00:25:22.280036 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 00:25:22.282531 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 00:25:22.283973 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 00:25:22.286024 systemd-networkd[1473]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:25:22.288276 systemd-resolved[1409]: Positive Trust Anchors: Sep 5 00:25:22.288290 systemd-resolved[1409]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:25:22.288324 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:25:22.290123 systemd-timesyncd[1474]: Network configuration changed, trying to establish connection. Sep 5 00:25:24.078322 systemd-timesyncd[1474]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 00:25:24.078385 systemd-timesyncd[1474]: Initial clock synchronization to Fri 2025-09-05 00:25:24.078217 UTC. Sep 5 00:25:24.079437 systemd-resolved[1409]: Defaulting to hostname 'linux'. Sep 5 00:25:24.081364 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:25:24.082761 systemd[1]: Reached target network.target - Network. Sep 5 00:25:24.083904 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:25:24.085273 systemd[1]: Reached target sysinit.target - System Initialization. 
Sep 5 00:25:24.086602 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 00:25:24.088068 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 00:25:24.089477 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 5 00:25:24.090795 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 00:25:24.092124 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 00:25:24.093355 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 00:25:24.094568 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 00:25:24.094600 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:25:24.095565 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:25:24.097639 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 00:25:24.100467 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 00:25:24.103741 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 5 00:25:24.105166 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 5 00:25:24.106538 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 5 00:25:24.119075 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 00:25:24.120584 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 5 00:25:24.122888 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 5 00:25:24.124325 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Sep 5 00:25:24.126693 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:25:24.127708 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:25:24.128694 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:25:24.128733 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:25:24.130069 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 00:25:24.132442 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 00:25:24.134661 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:25:24.146468 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 00:25:24.149714 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 00:25:24.150767 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 00:25:24.152067 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 5 00:25:24.156476 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 00:25:24.159358 jq[1541]: false Sep 5 00:25:24.159841 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 00:25:24.162618 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 00:25:24.166057 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 00:25:24.171625 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 00:25:24.174595 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 5 00:25:24.175897 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 00:25:24.179680 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 00:25:24.182534 oslogin_cache_refresh[1543]: Refreshing passwd entry cache Sep 5 00:25:24.182381 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 00:25:24.186235 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Refreshing passwd entry cache Sep 5 00:25:24.188578 extend-filesystems[1542]: Found /dev/vda6 Sep 5 00:25:24.191911 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:25:24.193910 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 00:25:24.195322 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 00:25:24.197761 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 00:25:24.203375 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 00:25:24.205891 extend-filesystems[1542]: Found /dev/vda9 Sep 5 00:25:24.210618 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Failure getting users, quitting Sep 5 00:25:24.210618 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 5 00:25:24.210605 oslogin_cache_refresh[1543]: Failure getting users, quitting Sep 5 00:25:24.213874 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Refreshing group entry cache Sep 5 00:25:24.212941 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 00:25:24.210638 oslogin_cache_refresh[1543]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 5 00:25:24.215298 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 5 00:25:24.219416 extend-filesystems[1542]: Checking size of /dev/vda9 Sep 5 00:25:24.210742 oslogin_cache_refresh[1543]: Refreshing group entry cache Sep 5 00:25:24.224176 jq[1556]: true Sep 5 00:25:24.229366 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Failure getting groups, quitting Sep 5 00:25:24.229366 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 5 00:25:24.223651 oslogin_cache_refresh[1543]: Failure getting groups, quitting Sep 5 00:25:24.223678 oslogin_cache_refresh[1543]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 5 00:25:24.234168 update_engine[1553]: I20250905 00:25:24.233798 1553 main.cc:92] Flatcar Update Engine starting Sep 5 00:25:24.238631 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 5 00:25:24.242423 (ntainerd)[1576]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 00:25:24.243272 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 5 00:25:24.250238 extend-filesystems[1542]: Resized partition /dev/vda9 Sep 5 00:25:24.251756 tar[1563]: linux-amd64/LICENSE Sep 5 00:25:24.251756 tar[1563]: linux-amd64/helm Sep 5 00:25:24.256435 extend-filesystems[1582]: resize2fs 1.47.2 (1-Jan-2025) Sep 5 00:25:24.257924 jq[1574]: true Sep 5 00:25:24.267463 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 00:25:24.288701 dbus-daemon[1539]: [system] SELinux support is enabled Sep 5 00:25:24.291702 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 00:25:24.296805 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Sep 5 00:25:24.296939 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 00:25:24.302260 systemd-logind[1551]: Watching system buttons on /dev/input/event2 (Power Button) Sep 5 00:25:24.302303 systemd-logind[1551]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 5 00:25:24.303187 systemd-logind[1551]: New seat seat0. Sep 5 00:25:24.422346 update_engine[1553]: I20250905 00:25:24.421265 1553 update_check_scheduler.cc:74] Next update check in 11m26s Sep 5 00:25:24.432504 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 00:25:24.432550 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 00:25:24.434413 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 00:25:24.445755 systemd[1]: Started update-engine.service - Update Engine. Sep 5 00:25:24.447718 dbus-daemon[1539]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 5 00:25:24.454767 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 00:25:24.499135 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 00:25:24.527232 extend-filesystems[1582]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 00:25:24.527232 extend-filesystems[1582]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 00:25:24.527232 extend-filesystems[1582]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 00:25:24.526832 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 00:25:24.544588 sshd_keygen[1564]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 00:25:24.544744 extend-filesystems[1542]: Resized filesystem in /dev/vda9 Sep 5 00:25:24.528124 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 5 00:25:24.549030 bash[1600]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:25:24.551405 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 00:25:24.554194 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 5 00:25:24.622746 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 00:25:24.626500 locksmithd[1601]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:25:24.627768 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 00:25:24.648910 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 00:25:24.649240 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 00:25:24.654049 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 00:25:24.718806 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:25:24.722410 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:25:24.726194 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 5 00:25:24.727659 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 5 00:25:24.775173 containerd[1576]: time="2025-09-05T00:25:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 5 00:25:24.776568 containerd[1576]: time="2025-09-05T00:25:24.776520512Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 5 00:25:24.799991 containerd[1576]: time="2025-09-05T00:25:24.799904385Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="35.016µs"
Sep 5 00:25:24.799991 containerd[1576]: time="2025-09-05T00:25:24.799977943Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 5 00:25:24.800119 containerd[1576]: time="2025-09-05T00:25:24.800034148Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 5 00:25:24.800520 containerd[1576]: time="2025-09-05T00:25:24.800477199Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 5 00:25:24.800520 containerd[1576]: time="2025-09-05T00:25:24.800509009Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 5 00:25:24.800615 containerd[1576]: time="2025-09-05T00:25:24.800590181Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 5 00:25:24.800751 containerd[1576]: time="2025-09-05T00:25:24.800715526Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 5 00:25:24.800751 containerd[1576]: time="2025-09-05T00:25:24.800739551Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 5 00:25:24.801422 containerd[1576]: time="2025-09-05T00:25:24.801374742Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 5 00:25:24.801422 containerd[1576]: time="2025-09-05T00:25:24.801409137Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 5 00:25:24.801502 containerd[1576]: time="2025-09-05T00:25:24.801442940Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 5 00:25:24.801502 containerd[1576]: time="2025-09-05T00:25:24.801465863Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 5 00:25:24.801806 containerd[1576]: time="2025-09-05T00:25:24.801756017Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 5 00:25:24.802325 containerd[1576]: time="2025-09-05T00:25:24.802288426Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 5 00:25:24.802367 containerd[1576]: time="2025-09-05T00:25:24.802344992Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 5 00:25:24.802367 containerd[1576]: time="2025-09-05T00:25:24.802358788Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 5 00:25:24.802426 containerd[1576]: time="2025-09-05T00:25:24.802403682Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 5 00:25:24.802763 containerd[1576]: time="2025-09-05T00:25:24.802720586Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 5 00:25:24.802886 containerd[1576]: time="2025-09-05T00:25:24.802863494Z" level=info msg="metadata content store policy set" policy=shared
Sep 5 00:25:24.809088 containerd[1576]: time="2025-09-05T00:25:24.809048954Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 5 00:25:24.809196 containerd[1576]: time="2025-09-05T00:25:24.809129626Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 5 00:25:24.809196 containerd[1576]: time="2025-09-05T00:25:24.809153460Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 5 00:25:24.809241 containerd[1576]: time="2025-09-05T00:25:24.809192033Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 5 00:25:24.809241 containerd[1576]: time="2025-09-05T00:25:24.809209305Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 5 00:25:24.809241 containerd[1576]: time="2025-09-05T00:25:24.809221929Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 5 00:25:24.809241 containerd[1576]: time="2025-09-05T00:25:24.809237608Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 5 00:25:24.809597 containerd[1576]: time="2025-09-05T00:25:24.809250963Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 5 00:25:24.809597 containerd[1576]: time="2025-09-05T00:25:24.809263136Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 5 00:25:24.809597 containerd[1576]: time="2025-09-05T00:25:24.809274347Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 5 00:25:24.809597 containerd[1576]: time="2025-09-05T00:25:24.809285628Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 5 00:25:24.809597 containerd[1576]: time="2025-09-05T00:25:24.809300486Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 5 00:25:24.809597 containerd[1576]: time="2025-09-05T00:25:24.809472759Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 5 00:25:24.809597 containerd[1576]: time="2025-09-05T00:25:24.809511562Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 5 00:25:24.809597 containerd[1576]: time="2025-09-05T00:25:24.809536058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 5 00:25:24.809597 containerd[1576]: time="2025-09-05T00:25:24.809550585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 5 00:25:24.809597 containerd[1576]: time="2025-09-05T00:25:24.809563830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 5 00:25:24.809597 containerd[1576]: time="2025-09-05T00:25:24.809592363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 5 00:25:24.809799 containerd[1576]: time="2025-09-05T00:25:24.809608434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 5 00:25:24.809799 containerd[1576]: time="2025-09-05T00:25:24.809621869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 5 00:25:24.809799 containerd[1576]: time="2025-09-05T00:25:24.809640073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 5 00:25:24.809799 containerd[1576]: time="2025-09-05T00:25:24.809662575Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 5 00:25:24.810365 containerd[1576]: time="2025-09-05T00:25:24.809675399Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 5 00:25:24.810365 containerd[1576]: time="2025-09-05T00:25:24.809787640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 5 00:25:24.810427 containerd[1576]: time="2025-09-05T00:25:24.810274493Z" level=info msg="Start snapshots syncer"
Sep 5 00:25:24.811956 containerd[1576]: time="2025-09-05T00:25:24.810372637Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 5 00:25:24.812234 containerd[1576]: time="2025-09-05T00:25:24.811767362Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 5 00:25:24.812234 containerd[1576]: time="2025-09-05T00:25:24.811985101Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 5 00:25:24.812310 containerd[1576]: time="2025-09-05T00:25:24.812128329Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 5 00:25:24.812336 containerd[1576]: time="2025-09-05T00:25:24.812280785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 5 00:25:24.812336 containerd[1576]: time="2025-09-05T00:25:24.812313537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 5 00:25:24.812422 containerd[1576]: time="2025-09-05T00:25:24.812331470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 5 00:25:24.812422 containerd[1576]: time="2025-09-05T00:25:24.812362438Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 5 00:25:24.812422 containerd[1576]: time="2025-09-05T00:25:24.812387836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 5 00:25:24.812422 containerd[1576]: time="2025-09-05T00:25:24.812402143Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 5 00:25:24.812500 containerd[1576]: time="2025-09-05T00:25:24.812416830Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 5 00:25:24.812500 containerd[1576]: time="2025-09-05T00:25:24.812443741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 5 00:25:24.812500 containerd[1576]: time="2025-09-05T00:25:24.812459651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 5 00:25:24.812663 containerd[1576]: time="2025-09-05T00:25:24.812473497Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 5 00:25:24.812663 containerd[1576]: time="2025-09-05T00:25:24.812524823Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 5 00:25:24.812663 containerd[1576]: time="2025-09-05T00:25:24.812543408Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 5 00:25:24.812663 containerd[1576]: time="2025-09-05T00:25:24.812554298Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 5 00:25:24.812663 containerd[1576]: time="2025-09-05T00:25:24.812565579Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 5 00:25:24.812663 containerd[1576]: time="2025-09-05T00:25:24.812577011Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 5 00:25:24.812663 containerd[1576]: time="2025-09-05T00:25:24.812588933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 5 00:25:24.812663 containerd[1576]: time="2025-09-05T00:25:24.812600635Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 5 00:25:24.812663 containerd[1576]: time="2025-09-05T00:25:24.812633627Z" level=info msg="runtime interface created"
Sep 5 00:25:24.812663 containerd[1576]: time="2025-09-05T00:25:24.812640850Z" level=info msg="created NRI interface"
Sep 5 00:25:24.812663 containerd[1576]: time="2025-09-05T00:25:24.812660718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 5 00:25:24.812854 containerd[1576]: time="2025-09-05T00:25:24.812677249Z" level=info msg="Connect containerd service"
Sep 5 00:25:24.812854 containerd[1576]: time="2025-09-05T00:25:24.812706073Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 5 00:25:24.813787 containerd[1576]: time="2025-09-05T00:25:24.813750241Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 5 00:25:24.904965 tar[1563]: linux-amd64/README.md
Sep 5 00:25:24.951455 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 5 00:25:24.959520 containerd[1576]: time="2025-09-05T00:25:24.959438288Z" level=info msg="Start subscribing containerd event"
Sep 5 00:25:24.959624 containerd[1576]: time="2025-09-05T00:25:24.959517647Z" level=info msg="Start recovering state"
Sep 5 00:25:24.959831 containerd[1576]: time="2025-09-05T00:25:24.959793644Z" level=info msg="Start event monitor"
Sep 5 00:25:24.959831 containerd[1576]: time="2025-09-05T00:25:24.959828199Z" level=info msg="Start cni network conf syncer for default"
Sep 5 00:25:24.959935 containerd[1576]: time="2025-09-05T00:25:24.959839931Z" level=info msg="Start streaming server"
Sep 5 00:25:24.959935 containerd[1576]: time="2025-09-05T00:25:24.959879986Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 5 00:25:24.959935 containerd[1576]: time="2025-09-05T00:25:24.959898551Z" level=info msg="runtime interface starting up..."
Sep 5 00:25:24.959935 containerd[1576]: time="2025-09-05T00:25:24.959908059Z" level=info msg="starting plugins..."
Sep 5 00:25:24.959935 containerd[1576]: time="2025-09-05T00:25:24.959929269Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 5 00:25:24.960234 containerd[1576]: time="2025-09-05T00:25:24.959801900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 5 00:25:24.960360 containerd[1576]: time="2025-09-05T00:25:24.960326915Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 5 00:25:24.960437 containerd[1576]: time="2025-09-05T00:25:24.960401114Z" level=info msg="containerd successfully booted in 0.186158s"
Sep 5 00:25:24.960621 systemd[1]: Started containerd.service - containerd container runtime.
Sep 5 00:25:25.120569 systemd-networkd[1473]: eth0: Gained IPv6LL
Sep 5 00:25:25.127554 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 5 00:25:25.130166 systemd[1]: Reached target network-online.target - Network is Online.
Sep 5 00:25:25.134271 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 5 00:25:25.137651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:25:25.141050 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 5 00:25:25.200425 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 5 00:25:25.202742 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 5 00:25:25.203121 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 5 00:25:25.207285 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 5 00:25:26.805942 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 5 00:25:26.808872 systemd[1]: Started sshd@0-10.0.0.14:22-10.0.0.1:59038.service - OpenSSH per-connection server daemon (10.0.0.1:59038).
Sep 5 00:25:26.928760 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 59038 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:25:26.930620 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:25:26.947253 systemd-logind[1551]: New session 1 of user core.
Sep 5 00:25:26.949227 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 5 00:25:26.951857 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 5 00:25:26.954105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:25:26.956697 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 5 00:25:26.979477 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 00:25:26.996105 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 5 00:25:27.001334 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 5 00:25:27.021935 (systemd)[1680]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 5 00:25:27.024945 systemd-logind[1551]: New session c1 of user core.
Sep 5 00:25:27.235439 systemd[1680]: Queued start job for default target default.target.
Sep 5 00:25:27.260640 systemd[1680]: Created slice app.slice - User Application Slice.
Sep 5 00:25:27.260671 systemd[1680]: Reached target paths.target - Paths.
Sep 5 00:25:27.260717 systemd[1680]: Reached target timers.target - Timers.
Sep 5 00:25:27.262493 systemd[1680]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 5 00:25:27.276762 systemd[1680]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 5 00:25:27.276879 systemd[1680]: Reached target sockets.target - Sockets.
Sep 5 00:25:27.276914 systemd[1680]: Reached target basic.target - Basic System.
Sep 5 00:25:27.276955 systemd[1680]: Reached target default.target - Main User Target.
Sep 5 00:25:27.276989 systemd[1680]: Startup finished in 244ms.
Sep 5 00:25:27.277942 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 5 00:25:27.287133 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 5 00:25:27.289319 systemd[1]: Startup finished in 3.816s (kernel) + 8.198s (initrd) + 6.687s (userspace) = 18.702s.
Sep 5 00:25:27.372648 systemd[1]: Started sshd@1-10.0.0.14:22-10.0.0.1:59044.service - OpenSSH per-connection server daemon (10.0.0.1:59044).
Sep 5 00:25:27.480853 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 59044 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:25:27.482613 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:25:27.487187 systemd-logind[1551]: New session 2 of user core.
Sep 5 00:25:27.502144 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 5 00:25:27.559474 sshd[1704]: Connection closed by 10.0.0.1 port 59044
Sep 5 00:25:27.560047 sshd-session[1700]: pam_unix(sshd:session): session closed for user core
Sep 5 00:25:27.613122 systemd[1]: sshd@1-10.0.0.14:22-10.0.0.1:59044.service: Deactivated successfully.
Sep 5 00:25:27.615589 systemd[1]: session-2.scope: Deactivated successfully.
Sep 5 00:25:27.616343 systemd-logind[1551]: Session 2 logged out. Waiting for processes to exit.
Sep 5 00:25:27.619630 systemd[1]: Started sshd@2-10.0.0.14:22-10.0.0.1:59046.service - OpenSSH per-connection server daemon (10.0.0.1:59046).
Sep 5 00:25:27.620485 systemd-logind[1551]: Removed session 2.
Sep 5 00:25:27.687096 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 59046 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:25:27.689639 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:25:27.695328 systemd-logind[1551]: New session 3 of user core.
Sep 5 00:25:27.754205 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 5 00:25:27.790386 kubelet[1677]: E0905 00:25:27.790318 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 00:25:27.794854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 00:25:27.795138 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 00:25:27.795702 systemd[1]: kubelet.service: Consumed 2.408s CPU time, 265.7M memory peak.
Sep 5 00:25:27.804887 sshd[1714]: Connection closed by 10.0.0.1 port 59046
Sep 5 00:25:27.805306 sshd-session[1710]: pam_unix(sshd:session): session closed for user core
Sep 5 00:25:27.818655 systemd[1]: sshd@2-10.0.0.14:22-10.0.0.1:59046.service: Deactivated successfully.
Sep 5 00:25:27.820633 systemd[1]: session-3.scope: Deactivated successfully.
Sep 5 00:25:27.821509 systemd-logind[1551]: Session 3 logged out. Waiting for processes to exit.
Sep 5 00:25:27.825272 systemd[1]: Started sshd@3-10.0.0.14:22-10.0.0.1:59056.service - OpenSSH per-connection server daemon (10.0.0.1:59056).
Sep 5 00:25:27.825840 systemd-logind[1551]: Removed session 3.
Sep 5 00:25:27.895139 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 59056 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:25:27.896869 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:25:27.901834 systemd-logind[1551]: New session 4 of user core.
Sep 5 00:25:27.912141 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 5 00:25:27.965409 sshd[1724]: Connection closed by 10.0.0.1 port 59056
Sep 5 00:25:27.965754 sshd-session[1721]: pam_unix(sshd:session): session closed for user core
Sep 5 00:25:27.978641 systemd[1]: sshd@3-10.0.0.14:22-10.0.0.1:59056.service: Deactivated successfully.
Sep 5 00:25:27.980497 systemd[1]: session-4.scope: Deactivated successfully.
Sep 5 00:25:27.981228 systemd-logind[1551]: Session 4 logged out. Waiting for processes to exit.
Sep 5 00:25:27.983953 systemd[1]: Started sshd@4-10.0.0.14:22-10.0.0.1:59072.service - OpenSSH per-connection server daemon (10.0.0.1:59072).
Sep 5 00:25:27.984737 systemd-logind[1551]: Removed session 4.
Sep 5 00:25:28.048574 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 59072 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:25:28.049874 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:25:28.054696 systemd-logind[1551]: New session 5 of user core.
Sep 5 00:25:28.066129 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 5 00:25:28.125403 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 5 00:25:28.125727 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:25:28.139433 sudo[1734]: pam_unix(sudo:session): session closed for user root
Sep 5 00:25:28.141328 sshd[1733]: Connection closed by 10.0.0.1 port 59072
Sep 5 00:25:28.142040 sshd-session[1730]: pam_unix(sshd:session): session closed for user core
Sep 5 00:25:28.162561 systemd[1]: sshd@4-10.0.0.14:22-10.0.0.1:59072.service: Deactivated successfully.
Sep 5 00:25:28.164285 systemd[1]: session-5.scope: Deactivated successfully.
Sep 5 00:25:28.165036 systemd-logind[1551]: Session 5 logged out. Waiting for processes to exit.
Sep 5 00:25:28.168102 systemd[1]: Started sshd@5-10.0.0.14:22-10.0.0.1:59080.service - OpenSSH per-connection server daemon (10.0.0.1:59080).
Sep 5 00:25:28.168736 systemd-logind[1551]: Removed session 5.
Sep 5 00:25:28.219462 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 59080 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:25:28.221131 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:25:28.225775 systemd-logind[1551]: New session 6 of user core.
Sep 5 00:25:28.235210 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 5 00:25:28.291626 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 5 00:25:28.291943 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:25:28.300028 sudo[1745]: pam_unix(sudo:session): session closed for user root
Sep 5 00:25:28.307345 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 5 00:25:28.307670 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:25:28.318937 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 5 00:25:28.371959 augenrules[1767]: No rules
Sep 5 00:25:28.373754 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 00:25:28.374102 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 5 00:25:28.375261 sudo[1744]: pam_unix(sudo:session): session closed for user root
Sep 5 00:25:28.376970 sshd[1743]: Connection closed by 10.0.0.1 port 59080
Sep 5 00:25:28.377380 sshd-session[1740]: pam_unix(sshd:session): session closed for user core
Sep 5 00:25:28.385678 systemd[1]: sshd@5-10.0.0.14:22-10.0.0.1:59080.service: Deactivated successfully.
Sep 5 00:25:28.387553 systemd[1]: session-6.scope: Deactivated successfully.
Sep 5 00:25:28.388307 systemd-logind[1551]: Session 6 logged out. Waiting for processes to exit.
Sep 5 00:25:28.391087 systemd[1]: Started sshd@6-10.0.0.14:22-10.0.0.1:59090.service - OpenSSH per-connection server daemon (10.0.0.1:59090).
Sep 5 00:25:28.391726 systemd-logind[1551]: Removed session 6.
Sep 5 00:25:28.445161 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 59090 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:25:28.446933 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:25:28.451584 systemd-logind[1551]: New session 7 of user core.
Sep 5 00:25:28.463159 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 5 00:25:28.516807 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 5 00:25:28.517175 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:25:29.175251 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 5 00:25:29.196611 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 5 00:25:29.792425 dockerd[1800]: time="2025-09-05T00:25:29.792339493Z" level=info msg="Starting up"
Sep 5 00:25:29.793499 dockerd[1800]: time="2025-09-05T00:25:29.793465565Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 5 00:25:29.859272 dockerd[1800]: time="2025-09-05T00:25:29.859209849Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 5 00:25:30.343467 dockerd[1800]: time="2025-09-05T00:25:30.343396381Z" level=info msg="Loading containers: start."
Sep 5 00:25:30.354033 kernel: Initializing XFRM netlink socket
Sep 5 00:25:30.699801 systemd-networkd[1473]: docker0: Link UP
Sep 5 00:25:30.706127 dockerd[1800]: time="2025-09-05T00:25:30.706069463Z" level=info msg="Loading containers: done."
Sep 5 00:25:30.727181 dockerd[1800]: time="2025-09-05T00:25:30.727110292Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 5 00:25:30.727373 dockerd[1800]: time="2025-09-05T00:25:30.727237490Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 5 00:25:30.727373 dockerd[1800]: time="2025-09-05T00:25:30.727340914Z" level=info msg="Initializing buildkit"
Sep 5 00:25:30.762608 dockerd[1800]: time="2025-09-05T00:25:30.762546267Z" level=info msg="Completed buildkit initialization"
Sep 5 00:25:30.769401 dockerd[1800]: time="2025-09-05T00:25:30.769331462Z" level=info msg="Daemon has completed initialization"
Sep 5 00:25:30.769564 dockerd[1800]: time="2025-09-05T00:25:30.769415429Z" level=info msg="API listen on /run/docker.sock"
Sep 5 00:25:30.769646 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 5 00:25:31.769232 containerd[1576]: time="2025-09-05T00:25:31.769183962Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 5 00:25:32.446116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3616040329.mount: Deactivated successfully.
Sep 5 00:25:33.613220 containerd[1576]: time="2025-09-05T00:25:33.613150778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:25:33.613819 containerd[1576]: time="2025-09-05T00:25:33.613754079Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687"
Sep 5 00:25:33.615128 containerd[1576]: time="2025-09-05T00:25:33.615039520Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:25:33.619119 containerd[1576]: time="2025-09-05T00:25:33.619074407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:25:33.621130 containerd[1576]: time="2025-09-05T00:25:33.621089235Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 1.851851863s"
Sep 5 00:25:33.621187 containerd[1576]: time="2025-09-05T00:25:33.621134761Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\""
Sep 5 00:25:33.621955 containerd[1576]: time="2025-09-05T00:25:33.621923941Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 5 00:25:35.086118 containerd[1576]: time="2025-09-05T00:25:35.086045050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:25:35.087066 containerd[1576]: time="2025-09-05T00:25:35.086728823Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128"
Sep 5 00:25:35.088028 containerd[1576]: time="2025-09-05T00:25:35.087982464Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:25:35.090650 containerd[1576]: time="2025-09-05T00:25:35.090622956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:25:35.091678 containerd[1576]: time="2025-09-05T00:25:35.091652266Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.469695113s"
Sep 5 00:25:35.091724 containerd[1576]: time="2025-09-05T00:25:35.091683294Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\""
Sep 5 00:25:35.092342 containerd[1576]: time="2025-09-05T00:25:35.092314989Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 5 00:25:37.398234 containerd[1576]: time="2025-09-05T00:25:37.398149171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:25:37.399343 containerd[1576]: time="2025-09-05T00:25:37.399271335Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036"
Sep 5 00:25:37.400875 containerd[1576]: time="2025-09-05T00:25:37.400824418Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:25:37.403669 containerd[1576]: time="2025-09-05T00:25:37.403621323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:25:37.404578 containerd[1576]: time="2025-09-05T00:25:37.404526821Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 2.312177959s"
Sep 5 00:25:37.404578 containerd[1576]: time="2025-09-05T00:25:37.404567448Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\""
Sep 5 00:25:37.405262 containerd[1576]: time="2025-09-05T00:25:37.405240169Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 5 00:25:37.937484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 5 00:25:37.940225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:25:38.554662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:25:38.572480 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:25:38.700192 kubelet[2090]: E0905 00:25:38.700064 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:25:38.707257 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:25:38.707574 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:25:38.708037 systemd[1]: kubelet.service: Consumed 394ms CPU time, 109.9M memory peak. Sep 5 00:25:39.160093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2034483230.mount: Deactivated successfully. Sep 5 00:25:40.169601 containerd[1576]: time="2025-09-05T00:25:40.169512617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:25:40.170287 containerd[1576]: time="2025-09-05T00:25:40.170193874Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 5 00:25:40.171556 containerd[1576]: time="2025-09-05T00:25:40.171504432Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:25:40.173695 containerd[1576]: time="2025-09-05T00:25:40.173652400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:25:40.174214 containerd[1576]: time="2025-09-05T00:25:40.174165122Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 2.768894506s" Sep 5 00:25:40.174214 containerd[1576]: time="2025-09-05T00:25:40.174212821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 5 00:25:40.174993 containerd[1576]: time="2025-09-05T00:25:40.174965302Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 00:25:41.096961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1980778212.mount: Deactivated successfully. Sep 5 00:25:41.926410 containerd[1576]: time="2025-09-05T00:25:41.926338773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:25:41.928085 containerd[1576]: time="2025-09-05T00:25:41.928039422Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 5 00:25:41.929395 containerd[1576]: time="2025-09-05T00:25:41.929334852Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:25:41.932018 containerd[1576]: time="2025-09-05T00:25:41.931911924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:25:41.932937 containerd[1576]: time="2025-09-05T00:25:41.932892674Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.757897005s" Sep 5 00:25:41.932937 containerd[1576]: time="2025-09-05T00:25:41.932922750Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 5 00:25:41.933464 containerd[1576]: time="2025-09-05T00:25:41.933430713Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:25:42.366303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1358162457.mount: Deactivated successfully. Sep 5 00:25:42.372704 containerd[1576]: time="2025-09-05T00:25:42.372647571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:25:42.373401 containerd[1576]: time="2025-09-05T00:25:42.373349007Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 5 00:25:42.374608 containerd[1576]: time="2025-09-05T00:25:42.374564837Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:25:42.376594 containerd[1576]: time="2025-09-05T00:25:42.376556762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:25:42.377195 containerd[1576]: time="2025-09-05T00:25:42.377155325Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 443.688194ms" Sep 5 00:25:42.377234 containerd[1576]: time="2025-09-05T00:25:42.377195159Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 5 00:25:42.377855 containerd[1576]: time="2025-09-05T00:25:42.377690238Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 5 00:25:43.335906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount818907393.mount: Deactivated successfully. Sep 5 00:25:45.331932 containerd[1576]: time="2025-09-05T00:25:45.331807001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:25:45.332954 containerd[1576]: time="2025-09-05T00:25:45.332903438Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 5 00:25:45.334191 containerd[1576]: time="2025-09-05T00:25:45.334130318Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:25:45.337110 containerd[1576]: time="2025-09-05T00:25:45.337039745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:25:45.338283 containerd[1576]: time="2025-09-05T00:25:45.338245506Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.96052441s" Sep 5 00:25:45.338387 containerd[1576]: time="2025-09-05T00:25:45.338286863Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 5 00:25:47.853320 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:25:47.853493 systemd[1]: kubelet.service: Consumed 394ms CPU time, 109.9M memory peak. Sep 5 00:25:47.855864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:25:47.884679 systemd[1]: Reload requested from client PID 2246 ('systemctl') (unit session-7.scope)... Sep 5 00:25:47.884702 systemd[1]: Reloading... Sep 5 00:25:47.975051 zram_generator::config[2292]: No configuration found. Sep 5 00:25:48.297476 systemd[1]: Reloading finished in 412 ms. Sep 5 00:25:48.366838 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 00:25:48.366937 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 00:25:48.367270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:25:48.367314 systemd[1]: kubelet.service: Consumed 168ms CPU time, 98.2M memory peak. Sep 5 00:25:48.368879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:25:48.572159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:25:48.587695 (kubelet)[2337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:25:48.637928 kubelet[2337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:25:48.637928 kubelet[2337]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:25:48.637928 kubelet[2337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:25:48.638381 kubelet[2337]: I0905 00:25:48.638082 2337 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:25:48.829451 kubelet[2337]: I0905 00:25:48.829092 2337 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 5 00:25:48.829451 kubelet[2337]: I0905 00:25:48.829135 2337 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:25:48.830434 kubelet[2337]: I0905 00:25:48.830396 2337 server.go:954] "Client rotation is on, will bootstrap in background" Sep 5 00:25:48.857028 kubelet[2337]: E0905 00:25:48.856939 2337 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:25:48.857525 kubelet[2337]: I0905 00:25:48.857496 2337 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:25:48.866687 kubelet[2337]: I0905 00:25:48.866655 2337 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 00:25:48.874907 kubelet[2337]: I0905 00:25:48.874843 2337 
server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 00:25:48.877222 kubelet[2337]: I0905 00:25:48.877159 2337 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:25:48.877538 kubelet[2337]: I0905 00:25:48.877212 2337 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:25:48.877700 kubelet[2337]: I0905 
00:25:48.877552 2337 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:25:48.877700 kubelet[2337]: I0905 00:25:48.877566 2337 container_manager_linux.go:304] "Creating device plugin manager" Sep 5 00:25:48.877817 kubelet[2337]: I0905 00:25:48.877788 2337 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:25:48.881275 kubelet[2337]: I0905 00:25:48.881245 2337 kubelet.go:446] "Attempting to sync node with API server" Sep 5 00:25:48.882984 kubelet[2337]: I0905 00:25:48.882918 2337 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:25:48.883084 kubelet[2337]: I0905 00:25:48.882994 2337 kubelet.go:352] "Adding apiserver pod source" Sep 5 00:25:48.883084 kubelet[2337]: I0905 00:25:48.883031 2337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:25:48.886800 kubelet[2337]: I0905 00:25:48.886765 2337 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 5 00:25:48.887331 kubelet[2337]: I0905 00:25:48.887287 2337 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:25:48.888741 kubelet[2337]: W0905 00:25:48.888670 2337 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 5 00:25:48.888873 kubelet[2337]: E0905 00:25:48.888842 2337 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:25:48.888983 kubelet[2337]: W0905 00:25:48.888934 2337 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:25:48.890132 kubelet[2337]: W0905 00:25:48.890069 2337 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 5 00:25:48.890210 kubelet[2337]: E0905 00:25:48.890178 2337 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:25:48.891680 kubelet[2337]: I0905 00:25:48.891633 2337 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:25:48.891735 kubelet[2337]: I0905 00:25:48.891696 2337 server.go:1287] "Started kubelet" Sep 5 00:25:48.894845 kubelet[2337]: I0905 00:25:48.894746 2337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:25:48.894845 kubelet[2337]: I0905 00:25:48.894737 2337 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:25:48.895109 kubelet[2337]: I0905 00:25:48.894990 2337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:25:48.895481 kubelet[2337]: I0905 00:25:48.895463 2337 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:25:48.896144 kubelet[2337]: I0905 00:25:48.896112 2337 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:25:48.896868 kubelet[2337]: I0905 00:25:48.896835 2337 server.go:479] "Adding debug handlers to kubelet server" Sep 5 00:25:48.900412 
kubelet[2337]: E0905 00:25:48.898992 2337 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623b3f94b320b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:25:48.891660467 +0000 UTC m=+0.298042156,LastTimestamp:2025-09-05 00:25:48.891660467 +0000 UTC m=+0.298042156,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:25:48.900707 kubelet[2337]: E0905 00:25:48.900571 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:25:48.900707 kubelet[2337]: I0905 00:25:48.900599 2337 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:25:48.900784 kubelet[2337]: I0905 00:25:48.900766 2337 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:25:48.903031 kubelet[2337]: I0905 00:25:48.901075 2337 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:25:48.903031 kubelet[2337]: W0905 00:25:48.901473 2337 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 5 00:25:48.903031 kubelet[2337]: E0905 00:25:48.901523 2337 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:25:48.903031 kubelet[2337]: E0905 00:25:48.902025 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="200ms" Sep 5 00:25:48.903031 kubelet[2337]: E0905 00:25:48.902152 2337 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:25:48.903031 kubelet[2337]: I0905 00:25:48.902245 2337 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:25:48.903031 kubelet[2337]: I0905 00:25:48.902334 2337 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:25:48.903303 kubelet[2337]: I0905 00:25:48.903161 2337 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:25:48.925786 kubelet[2337]: I0905 00:25:48.925698 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:25:48.927609 kubelet[2337]: I0905 00:25:48.927547 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 00:25:48.927609 kubelet[2337]: I0905 00:25:48.927606 2337 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 5 00:25:48.927775 kubelet[2337]: I0905 00:25:48.927650 2337 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 5 00:25:48.927775 kubelet[2337]: I0905 00:25:48.927660 2337 kubelet.go:2382] "Starting kubelet main sync loop" Sep 5 00:25:48.927775 kubelet[2337]: E0905 00:25:48.927734 2337 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:25:48.928727 kubelet[2337]: W0905 00:25:48.928440 2337 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 5 00:25:48.928727 kubelet[2337]: E0905 00:25:48.928498 2337 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:25:48.930380 kubelet[2337]: I0905 00:25:48.930351 2337 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:25:48.931095 kubelet[2337]: I0905 00:25:48.931063 2337 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:25:48.931157 kubelet[2337]: I0905 00:25:48.931101 2337 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:25:49.001875 kubelet[2337]: E0905 00:25:49.001717 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:25:49.034544 kubelet[2337]: E0905 00:25:49.033748 2337 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:25:49.104405 kubelet[2337]: E0905 00:25:49.103063 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:25:49.104840 kubelet[2337]: E0905 00:25:49.104742 2337 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="400ms" Sep 5 00:25:49.207264 kubelet[2337]: E0905 00:25:49.203962 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:25:49.234382 kubelet[2337]: I0905 00:25:49.233728 2337 policy_none.go:49] "None policy: Start" Sep 5 00:25:49.234382 kubelet[2337]: I0905 00:25:49.233801 2337 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:25:49.234382 kubelet[2337]: I0905 00:25:49.233844 2337 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:25:49.234977 kubelet[2337]: E0905 00:25:49.234852 2337 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:25:49.288568 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:25:49.308375 kubelet[2337]: E0905 00:25:49.308237 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:25:49.321675 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 00:25:49.354603 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 5 00:25:49.377955 kubelet[2337]: I0905 00:25:49.377884 2337 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:25:49.378311 kubelet[2337]: I0905 00:25:49.378276 2337 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:25:49.378382 kubelet[2337]: I0905 00:25:49.378312 2337 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:25:49.379412 kubelet[2337]: I0905 00:25:49.379385 2337 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:25:49.380900 kubelet[2337]: E0905 00:25:49.380855 2337 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 00:25:49.380997 kubelet[2337]: E0905 00:25:49.380929 2337 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:25:49.495753 kubelet[2337]: I0905 00:25:49.494799 2337 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:25:49.497784 kubelet[2337]: E0905 00:25:49.497702 2337 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Sep 5 00:25:49.505829 kubelet[2337]: E0905 00:25:49.505751 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="800ms" Sep 5 00:25:49.650151 systemd[1]: Created slice kubepods-burstable-poda5e492a07bd3125a0c43896b8c749f43.slice - libcontainer container kubepods-burstable-poda5e492a07bd3125a0c43896b8c749f43.slice. 
Sep 5 00:25:49.668806 kubelet[2337]: E0905 00:25:49.668757 2337 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:25:49.679380 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 5 00:25:49.682789 kubelet[2337]: E0905 00:25:49.682715 2337 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:25:49.684672 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Sep 5 00:25:49.687953 kubelet[2337]: E0905 00:25:49.687881 2337 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:25:49.700866 kubelet[2337]: I0905 00:25:49.700810 2337 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:25:49.701465 kubelet[2337]: E0905 00:25:49.701279 2337 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Sep 5 00:25:49.709705 kubelet[2337]: I0905 00:25:49.709586 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:25:49.709705 kubelet[2337]: I0905 00:25:49.709661 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/a5e492a07bd3125a0c43896b8c749f43-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a5e492a07bd3125a0c43896b8c749f43\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:25:49.709705 kubelet[2337]: I0905 00:25:49.709707 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:25:49.709705 kubelet[2337]: I0905 00:25:49.709730 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:25:49.710078 kubelet[2337]: I0905 00:25:49.709756 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:25:49.710078 kubelet[2337]: I0905 00:25:49.709781 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:25:49.710078 kubelet[2337]: I0905 00:25:49.709803 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:25:49.710078 kubelet[2337]: I0905 00:25:49.709823 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5e492a07bd3125a0c43896b8c749f43-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a5e492a07bd3125a0c43896b8c749f43\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:25:49.710078 kubelet[2337]: I0905 00:25:49.709846 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5e492a07bd3125a0c43896b8c749f43-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a5e492a07bd3125a0c43896b8c749f43\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:25:49.926966 kubelet[2337]: W0905 00:25:49.925224 2337 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 5 00:25:49.926966 kubelet[2337]: E0905 00:25:49.926856 2337 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:25:49.970213 kubelet[2337]: E0905 00:25:49.970158 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:49.971174 containerd[1576]: 
time="2025-09-05T00:25:49.971101374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a5e492a07bd3125a0c43896b8c749f43,Namespace:kube-system,Attempt:0,}" Sep 5 00:25:49.983419 kubelet[2337]: E0905 00:25:49.983338 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:49.984054 containerd[1576]: time="2025-09-05T00:25:49.983972462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 5 00:25:49.989349 kubelet[2337]: E0905 00:25:49.989299 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:49.989893 containerd[1576]: time="2025-09-05T00:25:49.989853501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 5 00:25:50.103149 kubelet[2337]: I0905 00:25:50.103096 2337 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:25:50.103610 kubelet[2337]: E0905 00:25:50.103566 2337 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Sep 5 00:25:50.147050 containerd[1576]: time="2025-09-05T00:25:50.146962186Z" level=info msg="connecting to shim be8ff0b8fe8a3ef7e53bb7a4dff8ccea69b143f2db9e56049d2cd95ddb37e13c" address="unix:///run/containerd/s/7bcca05e78accc046a35a494d48b51e813bceac0a0f40128ea4a7306ac84c525" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:25:50.151864 containerd[1576]: time="2025-09-05T00:25:50.151802984Z" level=info msg="connecting to shim 
8fe063409d7d85054b0e86ee50cdffa91d7ae9bb8f253818bd2ee0cab1024b39" address="unix:///run/containerd/s/f9ea308b61eb201564bd3e5266a6ed3754788f2b605de97ab8bf6bde96cd934a" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:25:50.230102 containerd[1576]: time="2025-09-05T00:25:50.230034607Z" level=info msg="connecting to shim d5d28520d07855b0599237bdb13e8952234a2d2d7e2801c3183150ac8537fc2d" address="unix:///run/containerd/s/23da211816bb0f372908236302b6dd6cbc73d9c259a9ebc2b4949bf78857ec3a" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:25:50.246050 kubelet[2337]: W0905 00:25:50.245710 2337 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 5 00:25:50.246050 kubelet[2337]: E0905 00:25:50.245819 2337 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:25:50.269264 systemd[1]: Started cri-containerd-be8ff0b8fe8a3ef7e53bb7a4dff8ccea69b143f2db9e56049d2cd95ddb37e13c.scope - libcontainer container be8ff0b8fe8a3ef7e53bb7a4dff8ccea69b143f2db9e56049d2cd95ddb37e13c. Sep 5 00:25:50.276431 systemd[1]: Started cri-containerd-8fe063409d7d85054b0e86ee50cdffa91d7ae9bb8f253818bd2ee0cab1024b39.scope - libcontainer container 8fe063409d7d85054b0e86ee50cdffa91d7ae9bb8f253818bd2ee0cab1024b39. Sep 5 00:25:50.277821 systemd[1]: Started cri-containerd-d5d28520d07855b0599237bdb13e8952234a2d2d7e2801c3183150ac8537fc2d.scope - libcontainer container d5d28520d07855b0599237bdb13e8952234a2d2d7e2801c3183150ac8537fc2d. 
Sep 5 00:25:50.311944 kubelet[2337]: E0905 00:25:50.311861 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="1.6s" Sep 5 00:25:50.370658 containerd[1576]: time="2025-09-05T00:25:50.370603163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5d28520d07855b0599237bdb13e8952234a2d2d7e2801c3183150ac8537fc2d\"" Sep 5 00:25:50.372154 kubelet[2337]: E0905 00:25:50.372122 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:50.375215 containerd[1576]: time="2025-09-05T00:25:50.375183462Z" level=info msg="CreateContainer within sandbox \"d5d28520d07855b0599237bdb13e8952234a2d2d7e2801c3183150ac8537fc2d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:25:50.381544 containerd[1576]: time="2025-09-05T00:25:50.381502353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a5e492a07bd3125a0c43896b8c749f43,Namespace:kube-system,Attempt:0,} returns sandbox id \"be8ff0b8fe8a3ef7e53bb7a4dff8ccea69b143f2db9e56049d2cd95ddb37e13c\"" Sep 5 00:25:50.382448 containerd[1576]: time="2025-09-05T00:25:50.382414714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fe063409d7d85054b0e86ee50cdffa91d7ae9bb8f253818bd2ee0cab1024b39\"" Sep 5 00:25:50.383299 kubelet[2337]: E0905 00:25:50.382494 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:50.383352 kubelet[2337]: E0905 00:25:50.383321 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:50.384459 containerd[1576]: time="2025-09-05T00:25:50.384417590Z" level=info msg="CreateContainer within sandbox \"be8ff0b8fe8a3ef7e53bb7a4dff8ccea69b143f2db9e56049d2cd95ddb37e13c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:25:50.385457 containerd[1576]: time="2025-09-05T00:25:50.385426742Z" level=info msg="CreateContainer within sandbox \"8fe063409d7d85054b0e86ee50cdffa91d7ae9bb8f253818bd2ee0cab1024b39\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:25:50.392920 containerd[1576]: time="2025-09-05T00:25:50.392886803Z" level=info msg="Container 9901ecb96f35168d202f1a71327363c5dd67fb8ed590b9e571c078aadbf32d83: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:25:50.403491 containerd[1576]: time="2025-09-05T00:25:50.403289221Z" level=info msg="CreateContainer within sandbox \"d5d28520d07855b0599237bdb13e8952234a2d2d7e2801c3183150ac8537fc2d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9901ecb96f35168d202f1a71327363c5dd67fb8ed590b9e571c078aadbf32d83\"" Sep 5 00:25:50.404019 containerd[1576]: time="2025-09-05T00:25:50.403967413Z" level=info msg="StartContainer for \"9901ecb96f35168d202f1a71327363c5dd67fb8ed590b9e571c078aadbf32d83\"" Sep 5 00:25:50.405853 containerd[1576]: time="2025-09-05T00:25:50.405490459Z" level=info msg="Container 9035a3a94469d4f944af9a9930c86545e2c189fc57bbe833d71e8b4f15ab8c21: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:25:50.406163 containerd[1576]: time="2025-09-05T00:25:50.406093840Z" level=info msg="connecting to shim 9901ecb96f35168d202f1a71327363c5dd67fb8ed590b9e571c078aadbf32d83" 
address="unix:///run/containerd/s/23da211816bb0f372908236302b6dd6cbc73d9c259a9ebc2b4949bf78857ec3a" protocol=ttrpc version=3 Sep 5 00:25:50.409045 containerd[1576]: time="2025-09-05T00:25:50.407663644Z" level=info msg="Container f05b0e7f3cc0a0ca2d90331177aed20a600e94d57f001cd93ce6123edaf6a64f: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:25:50.419672 kubelet[2337]: W0905 00:25:50.419575 2337 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 5 00:25:50.419672 kubelet[2337]: E0905 00:25:50.419655 2337 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:25:50.425472 containerd[1576]: time="2025-09-05T00:25:50.425429231Z" level=info msg="CreateContainer within sandbox \"8fe063409d7d85054b0e86ee50cdffa91d7ae9bb8f253818bd2ee0cab1024b39\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f05b0e7f3cc0a0ca2d90331177aed20a600e94d57f001cd93ce6123edaf6a64f\"" Sep 5 00:25:50.425663 containerd[1576]: time="2025-09-05T00:25:50.425614298Z" level=info msg="CreateContainer within sandbox \"be8ff0b8fe8a3ef7e53bb7a4dff8ccea69b143f2db9e56049d2cd95ddb37e13c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9035a3a94469d4f944af9a9930c86545e2c189fc57bbe833d71e8b4f15ab8c21\"" Sep 5 00:25:50.430607 containerd[1576]: time="2025-09-05T00:25:50.430565534Z" level=info msg="StartContainer for \"9035a3a94469d4f944af9a9930c86545e2c189fc57bbe833d71e8b4f15ab8c21\"" Sep 5 00:25:50.430607 containerd[1576]: time="2025-09-05T00:25:50.430593536Z" level=info msg="StartContainer for 
\"f05b0e7f3cc0a0ca2d90331177aed20a600e94d57f001cd93ce6123edaf6a64f\"" Sep 5 00:25:50.431794 containerd[1576]: time="2025-09-05T00:25:50.431765263Z" level=info msg="connecting to shim f05b0e7f3cc0a0ca2d90331177aed20a600e94d57f001cd93ce6123edaf6a64f" address="unix:///run/containerd/s/f9ea308b61eb201564bd3e5266a6ed3754788f2b605de97ab8bf6bde96cd934a" protocol=ttrpc version=3 Sep 5 00:25:50.431925 containerd[1576]: time="2025-09-05T00:25:50.431873096Z" level=info msg="connecting to shim 9035a3a94469d4f944af9a9930c86545e2c189fc57bbe833d71e8b4f15ab8c21" address="unix:///run/containerd/s/7bcca05e78accc046a35a494d48b51e813bceac0a0f40128ea4a7306ac84c525" protocol=ttrpc version=3 Sep 5 00:25:50.435244 systemd[1]: Started cri-containerd-9901ecb96f35168d202f1a71327363c5dd67fb8ed590b9e571c078aadbf32d83.scope - libcontainer container 9901ecb96f35168d202f1a71327363c5dd67fb8ed590b9e571c078aadbf32d83. Sep 5 00:25:50.466850 kubelet[2337]: W0905 00:25:50.466724 2337 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 5 00:25:50.467040 kubelet[2337]: E0905 00:25:50.466871 2337 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:25:50.475297 systemd[1]: Started cri-containerd-f05b0e7f3cc0a0ca2d90331177aed20a600e94d57f001cd93ce6123edaf6a64f.scope - libcontainer container f05b0e7f3cc0a0ca2d90331177aed20a600e94d57f001cd93ce6123edaf6a64f. 
Sep 5 00:25:50.484203 systemd[1]: Started cri-containerd-9035a3a94469d4f944af9a9930c86545e2c189fc57bbe833d71e8b4f15ab8c21.scope - libcontainer container 9035a3a94469d4f944af9a9930c86545e2c189fc57bbe833d71e8b4f15ab8c21. Sep 5 00:25:50.526850 containerd[1576]: time="2025-09-05T00:25:50.526808463Z" level=info msg="StartContainer for \"9901ecb96f35168d202f1a71327363c5dd67fb8ed590b9e571c078aadbf32d83\" returns successfully" Sep 5 00:25:50.551201 containerd[1576]: time="2025-09-05T00:25:50.551135525Z" level=info msg="StartContainer for \"9035a3a94469d4f944af9a9930c86545e2c189fc57bbe833d71e8b4f15ab8c21\" returns successfully" Sep 5 00:25:50.564212 containerd[1576]: time="2025-09-05T00:25:50.564144121Z" level=info msg="StartContainer for \"f05b0e7f3cc0a0ca2d90331177aed20a600e94d57f001cd93ce6123edaf6a64f\" returns successfully" Sep 5 00:25:50.906388 kubelet[2337]: I0905 00:25:50.906228 2337 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:25:50.942639 kubelet[2337]: E0905 00:25:50.942589 2337 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:25:50.942829 kubelet[2337]: E0905 00:25:50.942804 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:50.947754 kubelet[2337]: E0905 00:25:50.947720 2337 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:25:50.947907 kubelet[2337]: E0905 00:25:50.947883 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:50.952740 kubelet[2337]: E0905 00:25:50.952710 2337 kubelet.go:3190] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:25:50.952843 kubelet[2337]: E0905 00:25:50.952823 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:51.954938 kubelet[2337]: E0905 00:25:51.954430 2337 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:25:51.954938 kubelet[2337]: E0905 00:25:51.954566 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:51.954938 kubelet[2337]: E0905 00:25:51.954761 2337 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:25:51.954938 kubelet[2337]: E0905 00:25:51.954843 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:52.757435 kubelet[2337]: E0905 00:25:52.756886 2337 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 00:25:52.855840 kubelet[2337]: I0905 00:25:52.855767 2337 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:25:52.855840 kubelet[2337]: E0905 00:25:52.855806 2337 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 5 00:25:52.869081 kubelet[2337]: E0905 00:25:52.868965 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:25:52.969180 kubelet[2337]: E0905 00:25:52.969120 2337 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:25:53.069859 kubelet[2337]: E0905 00:25:53.069470 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:25:53.169844 kubelet[2337]: E0905 00:25:53.169783 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:25:53.270925 kubelet[2337]: E0905 00:25:53.270871 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:25:53.402552 kubelet[2337]: I0905 00:25:53.402394 2337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:25:53.407842 kubelet[2337]: E0905 00:25:53.407805 2337 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:25:53.407842 kubelet[2337]: I0905 00:25:53.407835 2337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:25:53.409287 kubelet[2337]: E0905 00:25:53.409259 2337 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 5 00:25:53.409287 kubelet[2337]: I0905 00:25:53.409277 2337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:25:53.410635 kubelet[2337]: E0905 00:25:53.410608 2337 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 00:25:53.493224 kubelet[2337]: I0905 00:25:53.493179 
2337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:25:53.495539 kubelet[2337]: E0905 00:25:53.495510 2337 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 00:25:53.495675 kubelet[2337]: E0905 00:25:53.495652 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:53.891056 kubelet[2337]: I0905 00:25:53.890995 2337 apiserver.go:52] "Watching apiserver" Sep 5 00:25:53.901211 kubelet[2337]: I0905 00:25:53.901160 2337 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:25:55.599486 kubelet[2337]: I0905 00:25:55.599429 2337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:25:55.659393 kubelet[2337]: E0905 00:25:55.659331 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:55.960819 kubelet[2337]: E0905 00:25:55.960767 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:58.606192 kubelet[2337]: I0905 00:25:58.606136 2337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:25:58.825362 kubelet[2337]: E0905 00:25:58.825309 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:58.964966 kubelet[2337]: E0905 00:25:58.964920 2337 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:25:59.484845 kubelet[2337]: I0905 00:25:59.484736 2337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.48470546 podStartE2EDuration="4.48470546s" podCreationTimestamp="2025-09-05 00:25:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:25:59.175241869 +0000 UTC m=+10.581623548" watchObservedRunningTime="2025-09-05 00:25:59.48470546 +0000 UTC m=+10.891087139" Sep 5 00:26:01.522792 systemd[1]: Reload requested from client PID 2614 ('systemctl') (unit session-7.scope)... Sep 5 00:26:01.522828 systemd[1]: Reloading... Sep 5 00:26:01.630069 zram_generator::config[2657]: No configuration found. Sep 5 00:26:01.942322 systemd[1]: Reloading finished in 419 ms. Sep 5 00:26:01.979158 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:26:02.002354 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:26:02.002690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:26:02.002754 systemd[1]: kubelet.service: Consumed 1.143s CPU time, 135.8M memory peak. Sep 5 00:26:02.004678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:26:02.215044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:26:02.235449 (kubelet)[2702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:26:02.289585 kubelet[2702]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:26:02.290075 kubelet[2702]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:26:02.290075 kubelet[2702]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:26:02.290188 kubelet[2702]: I0905 00:26:02.290141 2702 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:26:02.298598 kubelet[2702]: I0905 00:26:02.298529 2702 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 5 00:26:02.298598 kubelet[2702]: I0905 00:26:02.298572 2702 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:26:02.298882 kubelet[2702]: I0905 00:26:02.298868 2702 server.go:954] "Client rotation is on, will bootstrap in background" Sep 5 00:26:02.300240 kubelet[2702]: I0905 00:26:02.300206 2702 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 00:26:02.307303 kubelet[2702]: I0905 00:26:02.307065 2702 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:26:02.313722 kubelet[2702]: I0905 00:26:02.313492 2702 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 00:26:02.321342 kubelet[2702]: I0905 00:26:02.321291 2702 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:26:02.321722 kubelet[2702]: I0905 00:26:02.321662 2702 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:26:02.322026 kubelet[2702]: I0905 00:26:02.321708 2702 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:26:02.322168 kubelet[2702]: I0905 00:26:02.322057 2702 topology_manager.go:138] "Creating topology manager with none policy" Sep 
5 00:26:02.322168 kubelet[2702]: I0905 00:26:02.322074 2702 container_manager_linux.go:304] "Creating device plugin manager" Sep 5 00:26:02.322168 kubelet[2702]: I0905 00:26:02.322167 2702 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:26:02.322499 kubelet[2702]: I0905 00:26:02.322463 2702 kubelet.go:446] "Attempting to sync node with API server" Sep 5 00:26:02.322541 kubelet[2702]: I0905 00:26:02.322511 2702 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:26:02.322576 kubelet[2702]: I0905 00:26:02.322549 2702 kubelet.go:352] "Adding apiserver pod source" Sep 5 00:26:02.322576 kubelet[2702]: I0905 00:26:02.322570 2702 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:26:02.328075 kubelet[2702]: I0905 00:26:02.325508 2702 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 5 00:26:02.328075 kubelet[2702]: I0905 00:26:02.326430 2702 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:26:02.328346 kubelet[2702]: I0905 00:26:02.328318 2702 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:26:02.328593 kubelet[2702]: I0905 00:26:02.328496 2702 server.go:1287] "Started kubelet" Sep 5 00:26:02.331118 kubelet[2702]: I0905 00:26:02.330966 2702 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:26:02.334387 kubelet[2702]: I0905 00:26:02.334345 2702 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:26:02.335841 kubelet[2702]: I0905 00:26:02.335781 2702 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:26:02.342036 kubelet[2702]: I0905 00:26:02.340290 2702 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:26:02.342036 kubelet[2702]: I0905 00:26:02.341424 2702 server.go:479] 
"Adding debug handlers to kubelet server" Sep 5 00:26:02.342761 kubelet[2702]: I0905 00:26:02.342738 2702 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:26:02.345196 kubelet[2702]: I0905 00:26:02.344853 2702 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:26:02.345473 kubelet[2702]: E0905 00:26:02.345433 2702 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:26:02.347236 kubelet[2702]: I0905 00:26:02.347170 2702 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:26:02.347443 kubelet[2702]: I0905 00:26:02.347427 2702 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:26:02.347566 kubelet[2702]: I0905 00:26:02.347524 2702 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:26:02.348099 kubelet[2702]: I0905 00:26:02.348057 2702 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:26:02.348655 kubelet[2702]: E0905 00:26:02.348609 2702 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:26:02.348791 kubelet[2702]: I0905 00:26:02.348772 2702 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:26:02.352138 kubelet[2702]: I0905 00:26:02.352070 2702 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:26:02.353525 kubelet[2702]: I0905 00:26:02.353498 2702 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 00:26:02.353568 kubelet[2702]: I0905 00:26:02.353530 2702 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 5 00:26:02.353568 kubelet[2702]: I0905 00:26:02.353561 2702 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 00:26:02.353568 kubelet[2702]: I0905 00:26:02.353568 2702 kubelet.go:2382] "Starting kubelet main sync loop" Sep 5 00:26:02.353647 kubelet[2702]: E0905 00:26:02.353623 2702 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:26:02.401317 kubelet[2702]: I0905 00:26:02.401268 2702 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:26:02.401317 kubelet[2702]: I0905 00:26:02.401295 2702 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:26:02.401317 kubelet[2702]: I0905 00:26:02.401324 2702 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:26:02.401577 kubelet[2702]: I0905 00:26:02.401553 2702 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 00:26:02.401622 kubelet[2702]: I0905 00:26:02.401573 2702 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 00:26:02.401622 kubelet[2702]: I0905 00:26:02.401598 2702 policy_none.go:49] "None policy: Start" Sep 5 00:26:02.401622 kubelet[2702]: I0905 00:26:02.401610 2702 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:26:02.401622 kubelet[2702]: I0905 00:26:02.401625 2702 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:26:02.401756 kubelet[2702]: I0905 00:26:02.401735 2702 state_mem.go:75] "Updated machine memory state" Sep 5 00:26:02.406583 kubelet[2702]: I0905 00:26:02.406499 2702 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:26:02.406796 kubelet[2702]: I0905 00:26:02.406763 
2702 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:26:02.406860 kubelet[2702]: I0905 00:26:02.406782 2702 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:26:02.407213 kubelet[2702]: I0905 00:26:02.407194 2702 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:26:02.409086 kubelet[2702]: E0905 00:26:02.408741 2702 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 00:26:02.455105 kubelet[2702]: I0905 00:26:02.455036 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:26:02.455304 kubelet[2702]: I0905 00:26:02.455132 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:26:02.455304 kubelet[2702]: I0905 00:26:02.455036 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:26:02.511367 kubelet[2702]: E0905 00:26:02.510669 2702 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 00:26:02.512046 kubelet[2702]: E0905 00:26:02.511986 2702 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:26:02.515116 kubelet[2702]: I0905 00:26:02.515097 2702 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:26:02.549790 kubelet[2702]: I0905 00:26:02.549567 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:26:02.549790 kubelet[2702]: I0905 00:26:02.549620 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:26:02.549790 kubelet[2702]: I0905 00:26:02.549667 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:26:02.549790 kubelet[2702]: I0905 00:26:02.549695 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:26:02.549790 kubelet[2702]: I0905 00:26:02.549716 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5e492a07bd3125a0c43896b8c749f43-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a5e492a07bd3125a0c43896b8c749f43\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:26:02.550200 kubelet[2702]: I0905 00:26:02.549738 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5e492a07bd3125a0c43896b8c749f43-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"a5e492a07bd3125a0c43896b8c749f43\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:26:02.550370 kubelet[2702]: I0905 00:26:02.550326 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:26:02.550411 kubelet[2702]: I0905 00:26:02.550380 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:26:02.550488 kubelet[2702]: I0905 00:26:02.550416 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5e492a07bd3125a0c43896b8c749f43-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a5e492a07bd3125a0c43896b8c749f43\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:26:02.557757 kubelet[2702]: I0905 00:26:02.557707 2702 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 5 00:26:02.557975 kubelet[2702]: I0905 00:26:02.557821 2702 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:26:02.811554 kubelet[2702]: E0905 00:26:02.811339 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:02.811554 kubelet[2702]: E0905 00:26:02.811392 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:02.812568 kubelet[2702]: E0905 00:26:02.812543 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:03.324361 kubelet[2702]: I0905 00:26:03.324304 2702 apiserver.go:52] "Watching apiserver" Sep 5 00:26:03.347996 kubelet[2702]: I0905 00:26:03.347935 2702 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:26:03.377964 kubelet[2702]: I0905 00:26:03.377902 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:26:03.378148 kubelet[2702]: E0905 00:26:03.378120 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:03.378634 kubelet[2702]: I0905 00:26:03.377915 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:26:03.553541 kubelet[2702]: E0905 00:26:03.553460 2702 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 00:26:03.553541 kubelet[2702]: E0905 00:26:03.553515 2702 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 5 00:26:03.554490 kubelet[2702]: E0905 00:26:03.553714 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:03.554608 kubelet[2702]: E0905 00:26:03.554577 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 5 00:26:04.062561 kubelet[2702]: I0905 00:26:04.061816 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.061792051 podStartE2EDuration="2.061792051s" podCreationTimestamp="2025-09-05 00:26:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:26:04.061712999 +0000 UTC m=+1.817747796" watchObservedRunningTime="2025-09-05 00:26:04.061792051 +0000 UTC m=+1.817826828" Sep 5 00:26:04.379059 kubelet[2702]: E0905 00:26:04.378914 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:04.379059 kubelet[2702]: E0905 00:26:04.378962 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:05.472720 kubelet[2702]: I0905 00:26:05.472679 2702 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 00:26:05.473258 kubelet[2702]: I0905 00:26:05.473216 2702 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 00:26:05.473302 containerd[1576]: time="2025-09-05T00:26:05.473032720Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 00:26:06.755773 kubelet[2702]: E0905 00:26:06.755682 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:06.892559 systemd[1]: Created slice kubepods-besteffort-pod753378a4_8254_4c2b_b852_0c5c3c21d238.slice - libcontainer container kubepods-besteffort-pod753378a4_8254_4c2b_b852_0c5c3c21d238.slice. 
Sep 5 00:26:06.933484 systemd[1]: Created slice kubepods-besteffort-podbef0647c_6707_41f1_a9e9_d59975bcd558.slice - libcontainer container kubepods-besteffort-podbef0647c_6707_41f1_a9e9_d59975bcd558.slice. Sep 5 00:26:06.978055 kubelet[2702]: I0905 00:26:06.977464 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/753378a4-8254-4c2b-b852-0c5c3c21d238-lib-modules\") pod \"kube-proxy-62mng\" (UID: \"753378a4-8254-4c2b-b852-0c5c3c21d238\") " pod="kube-system/kube-proxy-62mng" Sep 5 00:26:06.978055 kubelet[2702]: I0905 00:26:06.977530 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxkwq\" (UniqueName: \"kubernetes.io/projected/753378a4-8254-4c2b-b852-0c5c3c21d238-kube-api-access-rxkwq\") pod \"kube-proxy-62mng\" (UID: \"753378a4-8254-4c2b-b852-0c5c3c21d238\") " pod="kube-system/kube-proxy-62mng" Sep 5 00:26:06.978055 kubelet[2702]: I0905 00:26:06.977553 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bef0647c-6707-41f1-a9e9-d59975bcd558-var-lib-calico\") pod \"tigera-operator-755d956888-qnmq2\" (UID: \"bef0647c-6707-41f1-a9e9-d59975bcd558\") " pod="tigera-operator/tigera-operator-755d956888-qnmq2" Sep 5 00:26:06.978055 kubelet[2702]: I0905 00:26:06.977571 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkcjq\" (UniqueName: \"kubernetes.io/projected/bef0647c-6707-41f1-a9e9-d59975bcd558-kube-api-access-qkcjq\") pod \"tigera-operator-755d956888-qnmq2\" (UID: \"bef0647c-6707-41f1-a9e9-d59975bcd558\") " pod="tigera-operator/tigera-operator-755d956888-qnmq2" Sep 5 00:26:06.978055 kubelet[2702]: I0905 00:26:06.977586 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/753378a4-8254-4c2b-b852-0c5c3c21d238-xtables-lock\") pod \"kube-proxy-62mng\" (UID: \"753378a4-8254-4c2b-b852-0c5c3c21d238\") " pod="kube-system/kube-proxy-62mng" Sep 5 00:26:06.978348 kubelet[2702]: I0905 00:26:06.977602 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/753378a4-8254-4c2b-b852-0c5c3c21d238-kube-proxy\") pod \"kube-proxy-62mng\" (UID: \"753378a4-8254-4c2b-b852-0c5c3c21d238\") " pod="kube-system/kube-proxy-62mng" Sep 5 00:26:07.239293 containerd[1576]: time="2025-09-05T00:26:07.239187994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-qnmq2,Uid:bef0647c-6707-41f1-a9e9-d59975bcd558,Namespace:tigera-operator,Attempt:0,}" Sep 5 00:26:07.385085 kubelet[2702]: E0905 00:26:07.384720 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:07.502642 kubelet[2702]: E0905 00:26:07.502443 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:07.503639 containerd[1576]: time="2025-09-05T00:26:07.503394814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-62mng,Uid:753378a4-8254-4c2b-b852-0c5c3c21d238,Namespace:kube-system,Attempt:0,}" Sep 5 00:26:07.721386 containerd[1576]: time="2025-09-05T00:26:07.721291343Z" level=info msg="connecting to shim ecab5f18f468c7ab9fcbfe050f169b75ee65a45707198a338572bda1509a1d2c" address="unix:///run/containerd/s/c486a6571fd91e864c5a0893d450e20c70aa16b33d8cc8c31c915fbf47c0485a" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:26:07.761269 systemd[1]: Started cri-containerd-ecab5f18f468c7ab9fcbfe050f169b75ee65a45707198a338572bda1509a1d2c.scope - 
libcontainer container ecab5f18f468c7ab9fcbfe050f169b75ee65a45707198a338572bda1509a1d2c. Sep 5 00:26:08.042137 containerd[1576]: time="2025-09-05T00:26:08.041781974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-qnmq2,Uid:bef0647c-6707-41f1-a9e9-d59975bcd558,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ecab5f18f468c7ab9fcbfe050f169b75ee65a45707198a338572bda1509a1d2c\"" Sep 5 00:26:08.044291 containerd[1576]: time="2025-09-05T00:26:08.044245630Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 00:26:08.231085 containerd[1576]: time="2025-09-05T00:26:08.230936321Z" level=info msg="connecting to shim 4d1ae129ed2d5d32a7cd1c0a37a84c3d5619ec82bf48d49363571d4d61d99e2b" address="unix:///run/containerd/s/0adbdeab84fd7417543bd287c850a1355fa1648ee946ba475b6c6bf20237f1b9" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:26:08.264395 systemd[1]: Started cri-containerd-4d1ae129ed2d5d32a7cd1c0a37a84c3d5619ec82bf48d49363571d4d61d99e2b.scope - libcontainer container 4d1ae129ed2d5d32a7cd1c0a37a84c3d5619ec82bf48d49363571d4d61d99e2b. 
Sep 5 00:26:08.352678 containerd[1576]: time="2025-09-05T00:26:08.352497942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-62mng,Uid:753378a4-8254-4c2b-b852-0c5c3c21d238,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d1ae129ed2d5d32a7cd1c0a37a84c3d5619ec82bf48d49363571d4d61d99e2b\"" Sep 5 00:26:08.353358 kubelet[2702]: E0905 00:26:08.353315 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:08.355676 containerd[1576]: time="2025-09-05T00:26:08.355622105Z" level=info msg="CreateContainer within sandbox \"4d1ae129ed2d5d32a7cd1c0a37a84c3d5619ec82bf48d49363571d4d61d99e2b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 00:26:08.980044 containerd[1576]: time="2025-09-05T00:26:08.977981955Z" level=info msg="Container c7881754a491cd875b424d197f0481ffeaf3b8c4c9ac936e50bea2b64a006e77: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:26:08.981680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2965484619.mount: Deactivated successfully. 
Sep 5 00:26:09.315303 containerd[1576]: time="2025-09-05T00:26:09.315151959Z" level=info msg="CreateContainer within sandbox \"4d1ae129ed2d5d32a7cd1c0a37a84c3d5619ec82bf48d49363571d4d61d99e2b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c7881754a491cd875b424d197f0481ffeaf3b8c4c9ac936e50bea2b64a006e77\"" Sep 5 00:26:09.316120 containerd[1576]: time="2025-09-05T00:26:09.315936229Z" level=info msg="StartContainer for \"c7881754a491cd875b424d197f0481ffeaf3b8c4c9ac936e50bea2b64a006e77\"" Sep 5 00:26:09.317962 containerd[1576]: time="2025-09-05T00:26:09.317927463Z" level=info msg="connecting to shim c7881754a491cd875b424d197f0481ffeaf3b8c4c9ac936e50bea2b64a006e77" address="unix:///run/containerd/s/0adbdeab84fd7417543bd287c850a1355fa1648ee946ba475b6c6bf20237f1b9" protocol=ttrpc version=3 Sep 5 00:26:09.344328 systemd[1]: Started cri-containerd-c7881754a491cd875b424d197f0481ffeaf3b8c4c9ac936e50bea2b64a006e77.scope - libcontainer container c7881754a491cd875b424d197f0481ffeaf3b8c4c9ac936e50bea2b64a006e77. Sep 5 00:26:09.525692 containerd[1576]: time="2025-09-05T00:26:09.525638499Z" level=info msg="StartContainer for \"c7881754a491cd875b424d197f0481ffeaf3b8c4c9ac936e50bea2b64a006e77\" returns successfully" Sep 5 00:26:09.587261 update_engine[1553]: I20250905 00:26:09.587093 1553 update_attempter.cc:509] Updating boot flags... 
Sep 5 00:26:10.529600 kubelet[2702]: E0905 00:26:10.529563 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:11.476882 kubelet[2702]: E0905 00:26:11.476839 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:11.531297 kubelet[2702]: E0905 00:26:11.531245 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:11.531885 kubelet[2702]: E0905 00:26:11.531391 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:11.539279 kubelet[2702]: I0905 00:26:11.539199 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-62mng" podStartSLOduration=5.539175133 podStartE2EDuration="5.539175133s" podCreationTimestamp="2025-09-05 00:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:26:10.625498691 +0000 UTC m=+8.381533468" watchObservedRunningTime="2025-09-05 00:26:11.539175133 +0000 UTC m=+9.295209910" Sep 5 00:26:11.635807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3239606135.mount: Deactivated successfully. 
Sep 5 00:26:12.014560 kubelet[2702]: E0905 00:26:12.014510 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:12.532503 kubelet[2702]: E0905 00:26:12.532452 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:13.338667 containerd[1576]: time="2025-09-05T00:26:13.338577373Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:13.339290 containerd[1576]: time="2025-09-05T00:26:13.339216934Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 5 00:26:13.342032 containerd[1576]: time="2025-09-05T00:26:13.340505656Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:13.343211 containerd[1576]: time="2025-09-05T00:26:13.343149535Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:13.343941 containerd[1576]: time="2025-09-05T00:26:13.343896070Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 5.299590185s" Sep 5 00:26:13.343941 containerd[1576]: time="2025-09-05T00:26:13.343929362Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference 
\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 5 00:26:13.347214 containerd[1576]: time="2025-09-05T00:26:13.347169572Z" level=info msg="CreateContainer within sandbox \"ecab5f18f468c7ab9fcbfe050f169b75ee65a45707198a338572bda1509a1d2c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 00:26:13.358460 containerd[1576]: time="2025-09-05T00:26:13.358406090Z" level=info msg="Container 51e8f2f496f898995d180985a20e476fa24d2f0cf275a36fb8dc3503b0bcc892: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:26:13.368275 containerd[1576]: time="2025-09-05T00:26:13.368203552Z" level=info msg="CreateContainer within sandbox \"ecab5f18f468c7ab9fcbfe050f169b75ee65a45707198a338572bda1509a1d2c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"51e8f2f496f898995d180985a20e476fa24d2f0cf275a36fb8dc3503b0bcc892\"" Sep 5 00:26:13.369112 containerd[1576]: time="2025-09-05T00:26:13.369039666Z" level=info msg="StartContainer for \"51e8f2f496f898995d180985a20e476fa24d2f0cf275a36fb8dc3503b0bcc892\"" Sep 5 00:26:13.370305 containerd[1576]: time="2025-09-05T00:26:13.370276299Z" level=info msg="connecting to shim 51e8f2f496f898995d180985a20e476fa24d2f0cf275a36fb8dc3503b0bcc892" address="unix:///run/containerd/s/c486a6571fd91e864c5a0893d450e20c70aa16b33d8cc8c31c915fbf47c0485a" protocol=ttrpc version=3 Sep 5 00:26:13.441367 systemd[1]: Started cri-containerd-51e8f2f496f898995d180985a20e476fa24d2f0cf275a36fb8dc3503b0bcc892.scope - libcontainer container 51e8f2f496f898995d180985a20e476fa24d2f0cf275a36fb8dc3503b0bcc892. 
Sep 5 00:26:13.487299 containerd[1576]: time="2025-09-05T00:26:13.487230400Z" level=info msg="StartContainer for \"51e8f2f496f898995d180985a20e476fa24d2f0cf275a36fb8dc3503b0bcc892\" returns successfully" Sep 5 00:26:13.546390 kubelet[2702]: I0905 00:26:13.546332 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-qnmq2" podStartSLOduration=2.244713289 podStartE2EDuration="7.546312767s" podCreationTimestamp="2025-09-05 00:26:06 +0000 UTC" firstStartedPulling="2025-09-05 00:26:08.043509129 +0000 UTC m=+5.799543906" lastFinishedPulling="2025-09-05 00:26:13.345108607 +0000 UTC m=+11.101143384" observedRunningTime="2025-09-05 00:26:13.546258003 +0000 UTC m=+11.302292780" watchObservedRunningTime="2025-09-05 00:26:13.546312767 +0000 UTC m=+11.302347544" Sep 5 00:26:20.703723 sudo[1780]: pam_unix(sudo:session): session closed for user root Sep 5 00:26:20.706231 sshd[1779]: Connection closed by 10.0.0.1 port 59090 Sep 5 00:26:20.707098 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Sep 5 00:26:20.722323 systemd[1]: sshd@6-10.0.0.14:22-10.0.0.1:59090.service: Deactivated successfully. Sep 5 00:26:20.725932 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 00:26:20.726285 systemd[1]: session-7.scope: Consumed 5.177s CPU time, 227.9M memory peak. Sep 5 00:26:20.727853 systemd-logind[1551]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:26:20.729596 systemd-logind[1551]: Removed session 7. Sep 5 00:26:23.499700 systemd[1]: Created slice kubepods-besteffort-pod58d9f57c_95b6_4cc3_a7ff_ea99123ecfb0.slice - libcontainer container kubepods-besteffort-pod58d9f57c_95b6_4cc3_a7ff_ea99123ecfb0.slice. 
Sep 5 00:26:23.589824 kubelet[2702]: I0905 00:26:23.589751 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/58d9f57c-95b6-4cc3-a7ff-ea99123ecfb0-typha-certs\") pod \"calico-typha-55bf4cf76-24k4n\" (UID: \"58d9f57c-95b6-4cc3-a7ff-ea99123ecfb0\") " pod="calico-system/calico-typha-55bf4cf76-24k4n" Sep 5 00:26:23.589824 kubelet[2702]: I0905 00:26:23.589807 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfgq5\" (UniqueName: \"kubernetes.io/projected/58d9f57c-95b6-4cc3-a7ff-ea99123ecfb0-kube-api-access-hfgq5\") pod \"calico-typha-55bf4cf76-24k4n\" (UID: \"58d9f57c-95b6-4cc3-a7ff-ea99123ecfb0\") " pod="calico-system/calico-typha-55bf4cf76-24k4n" Sep 5 00:26:23.589824 kubelet[2702]: I0905 00:26:23.589843 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58d9f57c-95b6-4cc3-a7ff-ea99123ecfb0-tigera-ca-bundle\") pod \"calico-typha-55bf4cf76-24k4n\" (UID: \"58d9f57c-95b6-4cc3-a7ff-ea99123ecfb0\") " pod="calico-system/calico-typha-55bf4cf76-24k4n" Sep 5 00:26:24.085408 systemd[1]: Created slice kubepods-besteffort-pod92667fcc_ce8d_4a59_8ad1_48353195d636.slice - libcontainer container kubepods-besteffort-pod92667fcc_ce8d_4a59_8ad1_48353195d636.slice. 
Sep 5 00:26:24.109306 kubelet[2702]: E0905 00:26:24.109256 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:24.109824 containerd[1576]: time="2025-09-05T00:26:24.109779299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55bf4cf76-24k4n,Uid:58d9f57c-95b6-4cc3-a7ff-ea99123ecfb0,Namespace:calico-system,Attempt:0,}" Sep 5 00:26:24.182722 kubelet[2702]: E0905 00:26:24.182070 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qk2wl" podUID="4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb" Sep 5 00:26:24.193879 kubelet[2702]: I0905 00:26:24.193777 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/92667fcc-ce8d-4a59-8ad1-48353195d636-cni-bin-dir\") pod \"calico-node-fr72g\" (UID: \"92667fcc-ce8d-4a59-8ad1-48353195d636\") " pod="calico-system/calico-node-fr72g" Sep 5 00:26:24.194082 kubelet[2702]: I0905 00:26:24.193899 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85zkr\" (UniqueName: \"kubernetes.io/projected/92667fcc-ce8d-4a59-8ad1-48353195d636-kube-api-access-85zkr\") pod \"calico-node-fr72g\" (UID: \"92667fcc-ce8d-4a59-8ad1-48353195d636\") " pod="calico-system/calico-node-fr72g" Sep 5 00:26:24.194082 kubelet[2702]: I0905 00:26:24.193918 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92667fcc-ce8d-4a59-8ad1-48353195d636-lib-modules\") pod \"calico-node-fr72g\" (UID: \"92667fcc-ce8d-4a59-8ad1-48353195d636\") " 
pod="calico-system/calico-node-fr72g" Sep 5 00:26:24.194082 kubelet[2702]: I0905 00:26:24.194074 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/92667fcc-ce8d-4a59-8ad1-48353195d636-var-lib-calico\") pod \"calico-node-fr72g\" (UID: \"92667fcc-ce8d-4a59-8ad1-48353195d636\") " pod="calico-system/calico-node-fr72g" Sep 5 00:26:24.194450 kubelet[2702]: I0905 00:26:24.194215 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/92667fcc-ce8d-4a59-8ad1-48353195d636-node-certs\") pod \"calico-node-fr72g\" (UID: \"92667fcc-ce8d-4a59-8ad1-48353195d636\") " pod="calico-system/calico-node-fr72g" Sep 5 00:26:24.194450 kubelet[2702]: I0905 00:26:24.194240 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/92667fcc-ce8d-4a59-8ad1-48353195d636-flexvol-driver-host\") pod \"calico-node-fr72g\" (UID: \"92667fcc-ce8d-4a59-8ad1-48353195d636\") " pod="calico-system/calico-node-fr72g" Sep 5 00:26:24.194450 kubelet[2702]: I0905 00:26:24.194289 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92667fcc-ce8d-4a59-8ad1-48353195d636-tigera-ca-bundle\") pod \"calico-node-fr72g\" (UID: \"92667fcc-ce8d-4a59-8ad1-48353195d636\") " pod="calico-system/calico-node-fr72g" Sep 5 00:26:24.194450 kubelet[2702]: I0905 00:26:24.194306 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/92667fcc-ce8d-4a59-8ad1-48353195d636-cni-net-dir\") pod \"calico-node-fr72g\" (UID: \"92667fcc-ce8d-4a59-8ad1-48353195d636\") " pod="calico-system/calico-node-fr72g" Sep 5 00:26:24.194450 
kubelet[2702]: I0905 00:26:24.194319 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92667fcc-ce8d-4a59-8ad1-48353195d636-xtables-lock\") pod \"calico-node-fr72g\" (UID: \"92667fcc-ce8d-4a59-8ad1-48353195d636\") " pod="calico-system/calico-node-fr72g" Sep 5 00:26:24.194611 kubelet[2702]: I0905 00:26:24.194466 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/92667fcc-ce8d-4a59-8ad1-48353195d636-cni-log-dir\") pod \"calico-node-fr72g\" (UID: \"92667fcc-ce8d-4a59-8ad1-48353195d636\") " pod="calico-system/calico-node-fr72g" Sep 5 00:26:24.194611 kubelet[2702]: I0905 00:26:24.194486 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/92667fcc-ce8d-4a59-8ad1-48353195d636-policysync\") pod \"calico-node-fr72g\" (UID: \"92667fcc-ce8d-4a59-8ad1-48353195d636\") " pod="calico-system/calico-node-fr72g" Sep 5 00:26:24.194684 kubelet[2702]: I0905 00:26:24.194622 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/92667fcc-ce8d-4a59-8ad1-48353195d636-var-run-calico\") pod \"calico-node-fr72g\" (UID: \"92667fcc-ce8d-4a59-8ad1-48353195d636\") " pod="calico-system/calico-node-fr72g" Sep 5 00:26:24.202722 containerd[1576]: time="2025-09-05T00:26:24.201520826Z" level=info msg="connecting to shim b7d611a69adfb9e334317960856a63b347f2129e39c5e517caf437503e964290" address="unix:///run/containerd/s/3a9c8984fc5786b5d02d6f6cb288ddba114485a725c61f11e1b670d5d2209f0b" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:26:24.257328 systemd[1]: Started cri-containerd-b7d611a69adfb9e334317960856a63b347f2129e39c5e517caf437503e964290.scope - libcontainer container 
b7d611a69adfb9e334317960856a63b347f2129e39c5e517caf437503e964290. Sep 5 00:26:24.295300 kubelet[2702]: I0905 00:26:24.295223 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb-kubelet-dir\") pod \"csi-node-driver-qk2wl\" (UID: \"4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb\") " pod="calico-system/csi-node-driver-qk2wl" Sep 5 00:26:24.295300 kubelet[2702]: I0905 00:26:24.295288 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb-varrun\") pod \"csi-node-driver-qk2wl\" (UID: \"4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb\") " pod="calico-system/csi-node-driver-qk2wl" Sep 5 00:26:24.295592 kubelet[2702]: I0905 00:26:24.295323 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxwh4\" (UniqueName: \"kubernetes.io/projected/4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb-kube-api-access-jxwh4\") pod \"csi-node-driver-qk2wl\" (UID: \"4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb\") " pod="calico-system/csi-node-driver-qk2wl" Sep 5 00:26:24.295592 kubelet[2702]: I0905 00:26:24.295357 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb-socket-dir\") pod \"csi-node-driver-qk2wl\" (UID: \"4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb\") " pod="calico-system/csi-node-driver-qk2wl" Sep 5 00:26:24.295592 kubelet[2702]: I0905 00:26:24.295405 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb-registration-dir\") pod \"csi-node-driver-qk2wl\" (UID: \"4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb\") " 
pod="calico-system/csi-node-driver-qk2wl" Sep 5 00:26:24.306571 kubelet[2702]: E0905 00:26:24.306521 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:24.306571 kubelet[2702]: W0905 00:26:24.306549 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:24.306786 kubelet[2702]: E0905 00:26:24.306588 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:24.309158 kubelet[2702]: E0905 00:26:24.309083 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:24.309158 kubelet[2702]: W0905 00:26:24.309104 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:24.309158 kubelet[2702]: E0905 00:26:24.309121 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:24.341297 containerd[1576]: time="2025-09-05T00:26:24.341131967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55bf4cf76-24k4n,Uid:58d9f57c-95b6-4cc3-a7ff-ea99123ecfb0,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7d611a69adfb9e334317960856a63b347f2129e39c5e517caf437503e964290\"" Sep 5 00:26:24.342484 kubelet[2702]: E0905 00:26:24.342364 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:24.344649 containerd[1576]: time="2025-09-05T00:26:24.344568404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 00:26:24.389518 containerd[1576]: time="2025-09-05T00:26:24.389453103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fr72g,Uid:92667fcc-ce8d-4a59-8ad1-48353195d636,Namespace:calico-system,Attempt:0,}" Sep 5 00:26:24.396025 kubelet[2702]: E0905 00:26:24.395977 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:24.396025 kubelet[2702]: W0905 00:26:24.395998 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:24.396143 kubelet[2702]: E0905 00:26:24.396047 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:24.423967 containerd[1576]: time="2025-09-05T00:26:24.423904097Z" level=info msg="connecting to shim a06d91a23d448a893ebc4d3cb0c8ea32b51249ea0fc97bfac0f925cd50a98660" address="unix:///run/containerd/s/e4cfc6f72f3024d6e8dcc8f189520fcb9e710834385a0a11c041bee3de81f894" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:26:24.451331 systemd[1]: Started cri-containerd-a06d91a23d448a893ebc4d3cb0c8ea32b51249ea0fc97bfac0f925cd50a98660.scope - libcontainer container a06d91a23d448a893ebc4d3cb0c8ea32b51249ea0fc97bfac0f925cd50a98660.
Sep 5 00:26:24.656452 containerd[1576]: time="2025-09-05T00:26:24.656310280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fr72g,Uid:92667fcc-ce8d-4a59-8ad1-48353195d636,Namespace:calico-system,Attempt:0,} returns sandbox id \"a06d91a23d448a893ebc4d3cb0c8ea32b51249ea0fc97bfac0f925cd50a98660\"" Sep 5 00:26:25.353852 kubelet[2702]: E0905 00:26:25.353787 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qk2wl" podUID="4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb" Sep 5 00:26:27.162665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1364272756.mount: Deactivated successfully. Sep 5 00:26:27.354790 kubelet[2702]: E0905 00:26:27.354716 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qk2wl" podUID="4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb" Sep 5 00:26:28.942210 containerd[1576]: time="2025-09-05T00:26:28.942154589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:28.954547 containerd[1576]: time="2025-09-05T00:26:28.954450188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 5 00:26:28.969948 containerd[1576]: time="2025-09-05T00:26:28.969887611Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:28.974340 containerd[1576]: time="2025-09-05T00:26:28.974302422Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:28.974928 containerd[1576]: time="2025-09-05T00:26:28.974897613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 4.630278213s" Sep 5 00:26:28.974988 containerd[1576]: time="2025-09-05T00:26:28.974930215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 5 00:26:28.985795 containerd[1576]: time="2025-09-05T00:26:28.984419620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 00:26:29.003478 containerd[1576]: time="2025-09-05T00:26:29.003421243Z" level=info msg="CreateContainer within sandbox \"b7d611a69adfb9e334317960856a63b347f2129e39c5e517caf437503e964290\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 00:26:29.140161 containerd[1576]: time="2025-09-05T00:26:29.140118225Z" level=info msg="Container 7ee2663bc1579cd0f7dc38f69dad6242688db03c849c829eadd05018348889a0: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:26:29.354410 kubelet[2702]: E0905 00:26:29.354227 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qk2wl" podUID="4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb" Sep 5 00:26:29.395612 containerd[1576]: time="2025-09-05T00:26:29.395563215Z" level=info msg="CreateContainer within sandbox 
\"b7d611a69adfb9e334317960856a63b347f2129e39c5e517caf437503e964290\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7ee2663bc1579cd0f7dc38f69dad6242688db03c849c829eadd05018348889a0\"" Sep 5 00:26:29.396316 containerd[1576]: time="2025-09-05T00:26:29.396262642Z" level=info msg="StartContainer for \"7ee2663bc1579cd0f7dc38f69dad6242688db03c849c829eadd05018348889a0\"" Sep 5 00:26:29.397675 containerd[1576]: time="2025-09-05T00:26:29.397643530Z" level=info msg="connecting to shim 7ee2663bc1579cd0f7dc38f69dad6242688db03c849c829eadd05018348889a0" address="unix:///run/containerd/s/3a9c8984fc5786b5d02d6f6cb288ddba114485a725c61f11e1b670d5d2209f0b" protocol=ttrpc version=3 Sep 5 00:26:29.422580 systemd[1]: Started cri-containerd-7ee2663bc1579cd0f7dc38f69dad6242688db03c849c829eadd05018348889a0.scope - libcontainer container 7ee2663bc1579cd0f7dc38f69dad6242688db03c849c829eadd05018348889a0. Sep 5 00:26:29.481769 containerd[1576]: time="2025-09-05T00:26:29.481715321Z" level=info msg="StartContainer for \"7ee2663bc1579cd0f7dc38f69dad6242688db03c849c829eadd05018348889a0\" returns successfully" Sep 5 00:26:29.578021 kubelet[2702]: E0905 00:26:29.577644 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:29.592729 kubelet[2702]: I0905 00:26:29.592652 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55bf4cf76-24k4n" podStartSLOduration=1.952598079 podStartE2EDuration="6.592620218s" podCreationTimestamp="2025-09-05 00:26:23 +0000 UTC" firstStartedPulling="2025-09-05 00:26:24.344113166 +0000 UTC m=+22.100147943" lastFinishedPulling="2025-09-05 00:26:28.984135305 +0000 UTC m=+26.740170082" observedRunningTime="2025-09-05 00:26:29.590911782 +0000 UTC m=+27.346946589" watchObservedRunningTime="2025-09-05 00:26:29.592620218 +0000 UTC m=+27.348654995" Sep 5 00:26:29.608063 
kubelet[2702]: E0905 00:26:29.607476 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.608063 kubelet[2702]: W0905 00:26:29.607856 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.613338 kubelet[2702]: E0905 00:26:29.613297 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:29.614168 kubelet[2702]: E0905 00:26:29.614151 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.614243 kubelet[2702]: W0905 00:26:29.614220 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.614410 kubelet[2702]: E0905 00:26:29.614396 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:29.614982 kubelet[2702]: E0905 00:26:29.614685 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.614982 kubelet[2702]: W0905 00:26:29.614696 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.614982 kubelet[2702]: E0905 00:26:29.614706 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Sep 5 00:26:29.620910 kubelet[2702]: E0905 00:26:29.620876 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.621049 kubelet[2702]: W0905 00:26:29.620958 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.621142 kubelet[2702]: E0905 00:26:29.621109 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:29.621418 kubelet[2702]: E0905 00:26:29.621405 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.621582 kubelet[2702]: W0905 00:26:29.621468 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.621582 kubelet[2702]: E0905 00:26:29.621480 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:29.635268 kubelet[2702]: E0905 00:26:29.635231 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.635749 kubelet[2702]: W0905 00:26:29.635494 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.635749 kubelet[2702]: E0905 00:26:29.635532 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:29.636306 kubelet[2702]: E0905 00:26:29.636239 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.636306 kubelet[2702]: W0905 00:26:29.636276 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.636402 kubelet[2702]: E0905 00:26:29.636329 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:29.637026 kubelet[2702]: E0905 00:26:29.636767 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.637026 kubelet[2702]: W0905 00:26:29.636785 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.637026 kubelet[2702]: E0905 00:26:29.636802 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:29.637180 kubelet[2702]: E0905 00:26:29.637092 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.637180 kubelet[2702]: W0905 00:26:29.637102 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.637180 kubelet[2702]: E0905 00:26:29.637131 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:29.637349 kubelet[2702]: E0905 00:26:29.637321 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.637349 kubelet[2702]: W0905 00:26:29.637339 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.637442 kubelet[2702]: E0905 00:26:29.637377 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:29.637687 kubelet[2702]: E0905 00:26:29.637660 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.637687 kubelet[2702]: W0905 00:26:29.637677 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.637769 kubelet[2702]: E0905 00:26:29.637707 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:29.638192 kubelet[2702]: E0905 00:26:29.638164 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.638192 kubelet[2702]: W0905 00:26:29.638181 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.638192 kubelet[2702]: E0905 00:26:29.638195 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:29.638530 kubelet[2702]: E0905 00:26:29.638497 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.638530 kubelet[2702]: W0905 00:26:29.638511 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.639737 kubelet[2702]: E0905 00:26:29.638607 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:29.639737 kubelet[2702]: E0905 00:26:29.638728 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.639737 kubelet[2702]: W0905 00:26:29.638735 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.639737 kubelet[2702]: E0905 00:26:29.638782 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:29.639737 kubelet[2702]: E0905 00:26:29.639026 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.639737 kubelet[2702]: W0905 00:26:29.639035 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.639737 kubelet[2702]: E0905 00:26:29.639058 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:29.640129 kubelet[2702]: E0905 00:26:29.640105 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.640129 kubelet[2702]: W0905 00:26:29.640124 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.640219 kubelet[2702]: E0905 00:26:29.640147 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:29.640403 kubelet[2702]: E0905 00:26:29.640379 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.640403 kubelet[2702]: W0905 00:26:29.640395 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.640534 kubelet[2702]: E0905 00:26:29.640500 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:29.640981 kubelet[2702]: E0905 00:26:29.640957 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.640981 kubelet[2702]: W0905 00:26:29.640972 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.640981 kubelet[2702]: E0905 00:26:29.640985 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:29.641343 kubelet[2702]: E0905 00:26:29.641318 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.641343 kubelet[2702]: W0905 00:26:29.641333 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.641431 kubelet[2702]: E0905 00:26:29.641367 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:29.641731 kubelet[2702]: E0905 00:26:29.641707 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.641731 kubelet[2702]: W0905 00:26:29.641723 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.641801 kubelet[2702]: E0905 00:26:29.641745 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:29.642014 kubelet[2702]: E0905 00:26:29.641974 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.642014 kubelet[2702]: W0905 00:26:29.641989 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.642100 kubelet[2702]: E0905 00:26:29.642063 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:29.642621 kubelet[2702]: E0905 00:26:29.642591 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.642621 kubelet[2702]: W0905 00:26:29.642608 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.642717 kubelet[2702]: E0905 00:26:29.642656 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:29.643109 kubelet[2702]: E0905 00:26:29.642978 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:29.643109 kubelet[2702]: W0905 00:26:29.642991 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:29.643109 kubelet[2702]: E0905 00:26:29.643041 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:30.580089 kubelet[2702]: I0905 00:26:30.579933 2702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:26:30.580834 kubelet[2702]: E0905 00:26:30.580432 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:30.628141 kubelet[2702]: E0905 00:26:30.628074 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.628141 kubelet[2702]: W0905 00:26:30.628106 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.628141 kubelet[2702]: E0905 00:26:30.628135 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:30.628418 kubelet[2702]: E0905 00:26:30.628345 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.628418 kubelet[2702]: W0905 00:26:30.628357 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.628418 kubelet[2702]: E0905 00:26:30.628368 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:30.628645 kubelet[2702]: E0905 00:26:30.628609 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.628645 kubelet[2702]: W0905 00:26:30.628625 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.628645 kubelet[2702]: E0905 00:26:30.628639 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:30.628929 kubelet[2702]: E0905 00:26:30.628909 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.628929 kubelet[2702]: W0905 00:26:30.628922 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.629030 kubelet[2702]: E0905 00:26:30.628933 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:30.629196 kubelet[2702]: E0905 00:26:30.629160 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.629196 kubelet[2702]: W0905 00:26:30.629183 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.629399 kubelet[2702]: E0905 00:26:30.629200 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:30.629524 kubelet[2702]: E0905 00:26:30.629439 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.629524 kubelet[2702]: W0905 00:26:30.629453 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.629524 kubelet[2702]: E0905 00:26:30.629468 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:30.629826 kubelet[2702]: E0905 00:26:30.629691 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.629826 kubelet[2702]: W0905 00:26:30.629706 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.629826 kubelet[2702]: E0905 00:26:30.629718 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:30.630039 kubelet[2702]: E0905 00:26:30.630019 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.630039 kubelet[2702]: W0905 00:26:30.630034 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.630166 kubelet[2702]: E0905 00:26:30.630047 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:30.630294 kubelet[2702]: E0905 00:26:30.630275 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.630338 kubelet[2702]: W0905 00:26:30.630299 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.630338 kubelet[2702]: E0905 00:26:30.630313 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:30.630546 kubelet[2702]: E0905 00:26:30.630516 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.630546 kubelet[2702]: W0905 00:26:30.630529 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.630546 kubelet[2702]: E0905 00:26:30.630540 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:30.630818 kubelet[2702]: E0905 00:26:30.630723 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.630818 kubelet[2702]: W0905 00:26:30.630734 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.630818 kubelet[2702]: E0905 00:26:30.630745 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:30.630978 kubelet[2702]: E0905 00:26:30.630948 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.630978 kubelet[2702]: W0905 00:26:30.630973 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.631123 kubelet[2702]: E0905 00:26:30.630985 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:30.631223 kubelet[2702]: E0905 00:26:30.631204 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.631223 kubelet[2702]: W0905 00:26:30.631216 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.631290 kubelet[2702]: E0905 00:26:30.631227 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:30.631429 kubelet[2702]: E0905 00:26:30.631410 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.631429 kubelet[2702]: W0905 00:26:30.631422 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.631513 kubelet[2702]: E0905 00:26:30.631433 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:30.631663 kubelet[2702]: E0905 00:26:30.631643 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.631663 kubelet[2702]: W0905 00:26:30.631655 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.631737 kubelet[2702]: E0905 00:26:30.631666 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:30.646235 kubelet[2702]: E0905 00:26:30.646187 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.646235 kubelet[2702]: W0905 00:26:30.646212 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.646235 kubelet[2702]: E0905 00:26:30.646234 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:30.646518 kubelet[2702]: E0905 00:26:30.646490 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.646518 kubelet[2702]: W0905 00:26:30.646513 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.646601 kubelet[2702]: E0905 00:26:30.646531 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:30.646907 kubelet[2702]: E0905 00:26:30.646856 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.646907 kubelet[2702]: W0905 00:26:30.646892 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.647016 kubelet[2702]: E0905 00:26:30.646924 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:30.647201 kubelet[2702]: E0905 00:26:30.647164 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.647201 kubelet[2702]: W0905 00:26:30.647179 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.647201 kubelet[2702]: E0905 00:26:30.647196 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:30.647415 kubelet[2702]: E0905 00:26:30.647374 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.647415 kubelet[2702]: W0905 00:26:30.647382 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.647415 kubelet[2702]: E0905 00:26:30.647393 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:30.647623 kubelet[2702]: E0905 00:26:30.647592 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.647623 kubelet[2702]: W0905 00:26:30.647613 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.647683 kubelet[2702]: E0905 00:26:30.647630 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:30.647881 kubelet[2702]: E0905 00:26:30.647862 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.647881 kubelet[2702]: W0905 00:26:30.647875 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.647951 kubelet[2702]: E0905 00:26:30.647926 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:26:30.650777 kubelet[2702]: E0905 00:26:30.650755 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:26:30.650777 kubelet[2702]: W0905 00:26:30.650771 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:26:30.650834 kubelet[2702]: E0905 00:26:30.650786 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:26:30.805371 containerd[1576]: time="2025-09-05T00:26:30.805312262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:30.806350 containerd[1576]: time="2025-09-05T00:26:30.806108611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 5 00:26:30.807346 containerd[1576]: time="2025-09-05T00:26:30.807304221Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:30.809463 containerd[1576]: time="2025-09-05T00:26:30.809417387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:30.810126 containerd[1576]: time="2025-09-05T00:26:30.810101815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.82563173s" Sep 5 00:26:30.810171 containerd[1576]: time="2025-09-05T00:26:30.810128425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 5 00:26:30.812125 containerd[1576]: time="2025-09-05T00:26:30.812076963Z" level=info msg="CreateContainer within sandbox \"a06d91a23d448a893ebc4d3cb0c8ea32b51249ea0fc97bfac0f925cd50a98660\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 00:26:30.822379 containerd[1576]: time="2025-09-05T00:26:30.822309136Z" level=info msg="Container 78ecd3ff876cfc8ddef346f6d7577aaec41701cbc5ffd93c653cf80df84192dd: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:26:30.833805 containerd[1576]: time="2025-09-05T00:26:30.833678411Z" level=info msg="CreateContainer within sandbox \"a06d91a23d448a893ebc4d3cb0c8ea32b51249ea0fc97bfac0f925cd50a98660\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"78ecd3ff876cfc8ddef346f6d7577aaec41701cbc5ffd93c653cf80df84192dd\"" Sep 5 00:26:30.834733 containerd[1576]: time="2025-09-05T00:26:30.834672722Z" level=info msg="StartContainer for \"78ecd3ff876cfc8ddef346f6d7577aaec41701cbc5ffd93c653cf80df84192dd\"" Sep 5 00:26:30.840567 containerd[1576]: time="2025-09-05T00:26:30.840270867Z" level=info msg="connecting to shim 78ecd3ff876cfc8ddef346f6d7577aaec41701cbc5ffd93c653cf80df84192dd" address="unix:///run/containerd/s/e4cfc6f72f3024d6e8dcc8f189520fcb9e710834385a0a11c041bee3de81f894" protocol=ttrpc version=3 Sep 5 00:26:30.867436 systemd[1]: Started cri-containerd-78ecd3ff876cfc8ddef346f6d7577aaec41701cbc5ffd93c653cf80df84192dd.scope - libcontainer container 78ecd3ff876cfc8ddef346f6d7577aaec41701cbc5ffd93c653cf80df84192dd. 
Sep 5 00:26:30.927603 systemd[1]: cri-containerd-78ecd3ff876cfc8ddef346f6d7577aaec41701cbc5ffd93c653cf80df84192dd.scope: Deactivated successfully. Sep 5 00:26:30.928177 systemd[1]: cri-containerd-78ecd3ff876cfc8ddef346f6d7577aaec41701cbc5ffd93c653cf80df84192dd.scope: Consumed 42ms CPU time, 6.4M memory peak, 4.6M written to disk. Sep 5 00:26:30.929833 containerd[1576]: time="2025-09-05T00:26:30.929755868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78ecd3ff876cfc8ddef346f6d7577aaec41701cbc5ffd93c653cf80df84192dd\" id:\"78ecd3ff876cfc8ddef346f6d7577aaec41701cbc5ffd93c653cf80df84192dd\" pid:3383 exited_at:{seconds:1757031990 nanos:929067793}" Sep 5 00:26:31.354162 kubelet[2702]: E0905 00:26:31.354097 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qk2wl" podUID="4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb" Sep 5 00:26:31.437030 containerd[1576]: time="2025-09-05T00:26:31.436925467Z" level=info msg="received exit event container_id:\"78ecd3ff876cfc8ddef346f6d7577aaec41701cbc5ffd93c653cf80df84192dd\" id:\"78ecd3ff876cfc8ddef346f6d7577aaec41701cbc5ffd93c653cf80df84192dd\" pid:3383 exited_at:{seconds:1757031990 nanos:929067793}" Sep 5 00:26:31.439294 containerd[1576]: time="2025-09-05T00:26:31.439246394Z" level=info msg="StartContainer for \"78ecd3ff876cfc8ddef346f6d7577aaec41701cbc5ffd93c653cf80df84192dd\" returns successfully" Sep 5 00:26:31.463969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78ecd3ff876cfc8ddef346f6d7577aaec41701cbc5ffd93c653cf80df84192dd-rootfs.mount: Deactivated successfully. 
Sep 5 00:26:32.588634 containerd[1576]: time="2025-09-05T00:26:32.588521615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 00:26:33.354412 kubelet[2702]: E0905 00:26:33.354317 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qk2wl" podUID="4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb" Sep 5 00:26:35.353940 kubelet[2702]: E0905 00:26:35.353851 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qk2wl" podUID="4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb" Sep 5 00:26:37.354505 kubelet[2702]: E0905 00:26:37.354421 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qk2wl" podUID="4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb" Sep 5 00:26:38.606401 containerd[1576]: time="2025-09-05T00:26:38.606338756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:38.607298 containerd[1576]: time="2025-09-05T00:26:38.607256761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 5 00:26:38.608579 containerd[1576]: time="2025-09-05T00:26:38.608533240Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:38.610948 containerd[1576]: 
time="2025-09-05T00:26:38.610917631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:38.611515 containerd[1576]: time="2025-09-05T00:26:38.611492291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 6.022928537s" Sep 5 00:26:38.611574 containerd[1576]: time="2025-09-05T00:26:38.611518179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 5 00:26:38.615225 containerd[1576]: time="2025-09-05T00:26:38.615185741Z" level=info msg="CreateContainer within sandbox \"a06d91a23d448a893ebc4d3cb0c8ea32b51249ea0fc97bfac0f925cd50a98660\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 00:26:38.626096 containerd[1576]: time="2025-09-05T00:26:38.626046030Z" level=info msg="Container ccbf2ad667a823f092c9301de72efcf0e0140d5e040322c17877df8f9bf8b2da: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:26:38.639421 containerd[1576]: time="2025-09-05T00:26:38.639359038Z" level=info msg="CreateContainer within sandbox \"a06d91a23d448a893ebc4d3cb0c8ea32b51249ea0fc97bfac0f925cd50a98660\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ccbf2ad667a823f092c9301de72efcf0e0140d5e040322c17877df8f9bf8b2da\"" Sep 5 00:26:38.639926 containerd[1576]: time="2025-09-05T00:26:38.639905354Z" level=info msg="StartContainer for \"ccbf2ad667a823f092c9301de72efcf0e0140d5e040322c17877df8f9bf8b2da\"" Sep 5 00:26:38.641363 containerd[1576]: time="2025-09-05T00:26:38.641322126Z" 
level=info msg="connecting to shim ccbf2ad667a823f092c9301de72efcf0e0140d5e040322c17877df8f9bf8b2da" address="unix:///run/containerd/s/e4cfc6f72f3024d6e8dcc8f189520fcb9e710834385a0a11c041bee3de81f894" protocol=ttrpc version=3 Sep 5 00:26:38.675180 systemd[1]: Started cri-containerd-ccbf2ad667a823f092c9301de72efcf0e0140d5e040322c17877df8f9bf8b2da.scope - libcontainer container ccbf2ad667a823f092c9301de72efcf0e0140d5e040322c17877df8f9bf8b2da. Sep 5 00:26:38.719543 containerd[1576]: time="2025-09-05T00:26:38.719495669Z" level=info msg="StartContainer for \"ccbf2ad667a823f092c9301de72efcf0e0140d5e040322c17877df8f9bf8b2da\" returns successfully" Sep 5 00:26:39.354713 kubelet[2702]: E0905 00:26:39.354610 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qk2wl" podUID="4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb" Sep 5 00:26:39.698478 systemd[1]: cri-containerd-ccbf2ad667a823f092c9301de72efcf0e0140d5e040322c17877df8f9bf8b2da.scope: Deactivated successfully. Sep 5 00:26:39.698896 systemd[1]: cri-containerd-ccbf2ad667a823f092c9301de72efcf0e0140d5e040322c17877df8f9bf8b2da.scope: Consumed 656ms CPU time, 181M memory peak, 2.6M read from disk, 171.3M written to disk. 
Sep 5 00:26:39.699723 containerd[1576]: time="2025-09-05T00:26:39.699676498Z" level=info msg="received exit event container_id:\"ccbf2ad667a823f092c9301de72efcf0e0140d5e040322c17877df8f9bf8b2da\" id:\"ccbf2ad667a823f092c9301de72efcf0e0140d5e040322c17877df8f9bf8b2da\" pid:3443 exited_at:{seconds:1757031999 nanos:699400960}" Sep 5 00:26:39.700151 containerd[1576]: time="2025-09-05T00:26:39.700122175Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccbf2ad667a823f092c9301de72efcf0e0140d5e040322c17877df8f9bf8b2da\" id:\"ccbf2ad667a823f092c9301de72efcf0e0140d5e040322c17877df8f9bf8b2da\" pid:3443 exited_at:{seconds:1757031999 nanos:699400960}" Sep 5 00:26:39.703857 containerd[1576]: time="2025-09-05T00:26:39.703798162Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:26:39.714954 kubelet[2702]: I0905 00:26:39.714888 2702 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 00:26:39.729413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccbf2ad667a823f092c9301de72efcf0e0140d5e040322c17877df8f9bf8b2da-rootfs.mount: Deactivated successfully. Sep 5 00:26:39.764103 systemd[1]: Created slice kubepods-besteffort-pod85d720c7_cda3_4d60_8e89_33bf79925430.slice - libcontainer container kubepods-besteffort-pod85d720c7_cda3_4d60_8e89_33bf79925430.slice. Sep 5 00:26:39.776671 systemd[1]: Created slice kubepods-besteffort-podbd06cdff_0c5c_4409_a405_e23b4ac2ed93.slice - libcontainer container kubepods-besteffort-podbd06cdff_0c5c_4409_a405_e23b4ac2ed93.slice. Sep 5 00:26:39.783172 systemd[1]: Created slice kubepods-besteffort-pod8eb0645a_fa2c_4e5e_a73c_a9398ff81c61.slice - libcontainer container kubepods-besteffort-pod8eb0645a_fa2c_4e5e_a73c_a9398ff81c61.slice. 
Sep 5 00:26:39.789963 systemd[1]: Created slice kubepods-burstable-pod31aada08_735b_4fc3_b902_f399faf5cc8f.slice - libcontainer container kubepods-burstable-pod31aada08_735b_4fc3_b902_f399faf5cc8f.slice. Sep 5 00:26:39.810376 kubelet[2702]: I0905 00:26:39.810285 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br2k2\" (UniqueName: \"kubernetes.io/projected/85d720c7-cda3-4d60-8e89-33bf79925430-kube-api-access-br2k2\") pod \"whisker-65798bbcc6-z68c4\" (UID: \"85d720c7-cda3-4d60-8e89-33bf79925430\") " pod="calico-system/whisker-65798bbcc6-z68c4" Sep 5 00:26:39.810376 kubelet[2702]: I0905 00:26:39.810347 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b78cd636-fa7a-4c71-9603-229c7b087321-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-f2lcw\" (UID: \"b78cd636-fa7a-4c71-9603-229c7b087321\") " pod="calico-system/goldmane-54d579b49d-f2lcw" Sep 5 00:26:39.810376 kubelet[2702]: I0905 00:26:39.810366 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fc28e930-0dd7-4404-9939-e8102d8fc0f1-calico-apiserver-certs\") pod \"calico-apiserver-dbd9bdd89-kwd5h\" (UID: \"fc28e930-0dd7-4404-9939-e8102d8fc0f1\") " pod="calico-apiserver/calico-apiserver-dbd9bdd89-kwd5h" Sep 5 00:26:39.810376 kubelet[2702]: I0905 00:26:39.810385 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b78cd636-fa7a-4c71-9603-229c7b087321-goldmane-key-pair\") pod \"goldmane-54d579b49d-f2lcw\" (UID: \"b78cd636-fa7a-4c71-9603-229c7b087321\") " pod="calico-system/goldmane-54d579b49d-f2lcw" Sep 5 00:26:39.810376 kubelet[2702]: I0905 00:26:39.810406 2702 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxqpd\" (UniqueName: \"kubernetes.io/projected/ddfce860-366a-4895-a0c1-4011550414eb-kube-api-access-nxqpd\") pod \"coredns-668d6bf9bc-n9xlx\" (UID: \"ddfce860-366a-4895-a0c1-4011550414eb\") " pod="kube-system/coredns-668d6bf9bc-n9xlx" Sep 5 00:26:39.810713 kubelet[2702]: I0905 00:26:39.810423 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfbg2\" (UniqueName: \"kubernetes.io/projected/bd06cdff-0c5c-4409-a405-e23b4ac2ed93-kube-api-access-sfbg2\") pod \"calico-apiserver-dbd9bdd89-lcv9t\" (UID: \"bd06cdff-0c5c-4409-a405-e23b4ac2ed93\") " pod="calico-apiserver/calico-apiserver-dbd9bdd89-lcv9t" Sep 5 00:26:39.810713 kubelet[2702]: I0905 00:26:39.810440 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddfce860-366a-4895-a0c1-4011550414eb-config-volume\") pod \"coredns-668d6bf9bc-n9xlx\" (UID: \"ddfce860-366a-4895-a0c1-4011550414eb\") " pod="kube-system/coredns-668d6bf9bc-n9xlx" Sep 5 00:26:39.810713 kubelet[2702]: I0905 00:26:39.810456 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85d720c7-cda3-4d60-8e89-33bf79925430-whisker-ca-bundle\") pod \"whisker-65798bbcc6-z68c4\" (UID: \"85d720c7-cda3-4d60-8e89-33bf79925430\") " pod="calico-system/whisker-65798bbcc6-z68c4" Sep 5 00:26:39.810713 kubelet[2702]: I0905 00:26:39.810469 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjbvg\" (UniqueName: \"kubernetes.io/projected/b78cd636-fa7a-4c71-9603-229c7b087321-kube-api-access-tjbvg\") pod \"goldmane-54d579b49d-f2lcw\" (UID: \"b78cd636-fa7a-4c71-9603-229c7b087321\") " pod="calico-system/goldmane-54d579b49d-f2lcw" Sep 5 00:26:39.810713 
kubelet[2702]: I0905 00:26:39.810486 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85d720c7-cda3-4d60-8e89-33bf79925430-whisker-backend-key-pair\") pod \"whisker-65798bbcc6-z68c4\" (UID: \"85d720c7-cda3-4d60-8e89-33bf79925430\") " pod="calico-system/whisker-65798bbcc6-z68c4" Sep 5 00:26:39.810840 kubelet[2702]: I0905 00:26:39.810501 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eb0645a-fa2c-4e5e-a73c-a9398ff81c61-tigera-ca-bundle\") pod \"calico-kube-controllers-7f784bbc7f-ggrdb\" (UID: \"8eb0645a-fa2c-4e5e-a73c-a9398ff81c61\") " pod="calico-system/calico-kube-controllers-7f784bbc7f-ggrdb" Sep 5 00:26:39.810840 kubelet[2702]: I0905 00:26:39.810515 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8mfj\" (UniqueName: \"kubernetes.io/projected/8eb0645a-fa2c-4e5e-a73c-a9398ff81c61-kube-api-access-p8mfj\") pod \"calico-kube-controllers-7f784bbc7f-ggrdb\" (UID: \"8eb0645a-fa2c-4e5e-a73c-a9398ff81c61\") " pod="calico-system/calico-kube-controllers-7f784bbc7f-ggrdb" Sep 5 00:26:39.810840 kubelet[2702]: I0905 00:26:39.810575 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31aada08-735b-4fc3-b902-f399faf5cc8f-config-volume\") pod \"coredns-668d6bf9bc-fk4jw\" (UID: \"31aada08-735b-4fc3-b902-f399faf5cc8f\") " pod="kube-system/coredns-668d6bf9bc-fk4jw" Sep 5 00:26:39.810840 kubelet[2702]: I0905 00:26:39.810628 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snjnw\" (UniqueName: \"kubernetes.io/projected/31aada08-735b-4fc3-b902-f399faf5cc8f-kube-api-access-snjnw\") pod \"coredns-668d6bf9bc-fk4jw\" (UID: 
\"31aada08-735b-4fc3-b902-f399faf5cc8f\") " pod="kube-system/coredns-668d6bf9bc-fk4jw" Sep 5 00:26:39.810840 kubelet[2702]: I0905 00:26:39.810653 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89wv4\" (UniqueName: \"kubernetes.io/projected/fc28e930-0dd7-4404-9939-e8102d8fc0f1-kube-api-access-89wv4\") pod \"calico-apiserver-dbd9bdd89-kwd5h\" (UID: \"fc28e930-0dd7-4404-9939-e8102d8fc0f1\") " pod="calico-apiserver/calico-apiserver-dbd9bdd89-kwd5h" Sep 5 00:26:39.810978 kubelet[2702]: I0905 00:26:39.810667 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd06cdff-0c5c-4409-a405-e23b4ac2ed93-calico-apiserver-certs\") pod \"calico-apiserver-dbd9bdd89-lcv9t\" (UID: \"bd06cdff-0c5c-4409-a405-e23b4ac2ed93\") " pod="calico-apiserver/calico-apiserver-dbd9bdd89-lcv9t" Sep 5 00:26:39.810978 kubelet[2702]: I0905 00:26:39.810682 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b78cd636-fa7a-4c71-9603-229c7b087321-config\") pod \"goldmane-54d579b49d-f2lcw\" (UID: \"b78cd636-fa7a-4c71-9603-229c7b087321\") " pod="calico-system/goldmane-54d579b49d-f2lcw" Sep 5 00:26:39.963805 systemd[1]: Created slice kubepods-burstable-podddfce860_366a_4895_a0c1_4011550414eb.slice - libcontainer container kubepods-burstable-podddfce860_366a_4895_a0c1_4011550414eb.slice. Sep 5 00:26:39.972224 systemd[1]: Created slice kubepods-besteffort-podb78cd636_fa7a_4c71_9603_229c7b087321.slice - libcontainer container kubepods-besteffort-podb78cd636_fa7a_4c71_9603_229c7b087321.slice. Sep 5 00:26:39.977275 systemd[1]: Created slice kubepods-besteffort-podfc28e930_0dd7_4404_9939_e8102d8fc0f1.slice - libcontainer container kubepods-besteffort-podfc28e930_0dd7_4404_9939_e8102d8fc0f1.slice. 
Sep 5 00:26:40.240473 containerd[1576]: time="2025-09-05T00:26:40.240275012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65798bbcc6-z68c4,Uid:85d720c7-cda3-4d60-8e89-33bf79925430,Namespace:calico-system,Attempt:0,}" Sep 5 00:26:40.240892 containerd[1576]: time="2025-09-05T00:26:40.240847167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f784bbc7f-ggrdb,Uid:8eb0645a-fa2c-4e5e-a73c-a9398ff81c61,Namespace:calico-system,Attempt:0,}" Sep 5 00:26:40.242667 containerd[1576]: time="2025-09-05T00:26:40.242621811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dbd9bdd89-lcv9t,Uid:bd06cdff-0c5c-4409-a405-e23b4ac2ed93,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:26:40.245938 kubelet[2702]: E0905 00:26:40.245894 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:40.246304 containerd[1576]: time="2025-09-05T00:26:40.246244366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fk4jw,Uid:31aada08-735b-4fc3-b902-f399faf5cc8f,Namespace:kube-system,Attempt:0,}" Sep 5 00:26:40.267943 kubelet[2702]: E0905 00:26:40.267900 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:40.268465 containerd[1576]: time="2025-09-05T00:26:40.268421772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n9xlx,Uid:ddfce860-366a-4895-a0c1-4011550414eb,Namespace:kube-system,Attempt:0,}" Sep 5 00:26:40.275706 containerd[1576]: time="2025-09-05T00:26:40.275657495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-f2lcw,Uid:b78cd636-fa7a-4c71-9603-229c7b087321,Namespace:calico-system,Attempt:0,}" Sep 5 00:26:40.280463 containerd[1576]: 
time="2025-09-05T00:26:40.280379537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dbd9bdd89-kwd5h,Uid:fc28e930-0dd7-4404-9939-e8102d8fc0f1,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:26:40.854301 containerd[1576]: time="2025-09-05T00:26:40.853713142Z" level=error msg="Failed to destroy network for sandbox \"ad98469c3c428c5bdefbd7c40a453031044a5286f13a53769dd68475f0f2b134\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.860741 systemd[1]: run-netns-cni\x2de18b2296\x2d2acc\x2d89eb\x2d5bc5\x2d0880b9501caf.mount: Deactivated successfully. Sep 5 00:26:40.868377 containerd[1576]: time="2025-09-05T00:26:40.856212388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dbd9bdd89-kwd5h,Uid:fc28e930-0dd7-4404-9939-e8102d8fc0f1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad98469c3c428c5bdefbd7c40a453031044a5286f13a53769dd68475f0f2b134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.868742 containerd[1576]: time="2025-09-05T00:26:40.857674686Z" level=error msg="Failed to destroy network for sandbox \"7f2a9ec2cdb2a2eac2a7ee9afedb137ef77bdb508adc25fd94f7ddc149e3af48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.869457 kubelet[2702]: E0905 00:26:40.868852 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad98469c3c428c5bdefbd7c40a453031044a5286f13a53769dd68475f0f2b134\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.869457 kubelet[2702]: E0905 00:26:40.868949 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad98469c3c428c5bdefbd7c40a453031044a5286f13a53769dd68475f0f2b134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dbd9bdd89-kwd5h" Sep 5 00:26:40.869457 kubelet[2702]: E0905 00:26:40.868977 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad98469c3c428c5bdefbd7c40a453031044a5286f13a53769dd68475f0f2b134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dbd9bdd89-kwd5h" Sep 5 00:26:40.871262 kubelet[2702]: E0905 00:26:40.869144 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dbd9bdd89-kwd5h_calico-apiserver(fc28e930-0dd7-4404-9939-e8102d8fc0f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dbd9bdd89-kwd5h_calico-apiserver(fc28e930-0dd7-4404-9939-e8102d8fc0f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad98469c3c428c5bdefbd7c40a453031044a5286f13a53769dd68475f0f2b134\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dbd9bdd89-kwd5h" podUID="fc28e930-0dd7-4404-9939-e8102d8fc0f1" Sep 
5 00:26:40.871341 containerd[1576]: time="2025-09-05T00:26:40.864083245Z" level=error msg="Failed to destroy network for sandbox \"f3442fc5a2d3306817b6666dc4b6c4b88a1cbe2316b5e1952a2cfde4730337f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.872756 containerd[1576]: time="2025-09-05T00:26:40.872717526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n9xlx,Uid:ddfce860-366a-4895-a0c1-4011550414eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3442fc5a2d3306817b6666dc4b6c4b88a1cbe2316b5e1952a2cfde4730337f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.873041 kubelet[2702]: E0905 00:26:40.872954 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3442fc5a2d3306817b6666dc4b6c4b88a1cbe2316b5e1952a2cfde4730337f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.873157 kubelet[2702]: E0905 00:26:40.873132 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3442fc5a2d3306817b6666dc4b6c4b88a1cbe2316b5e1952a2cfde4730337f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n9xlx" Sep 5 00:26:40.873233 kubelet[2702]: E0905 00:26:40.873160 2702 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3442fc5a2d3306817b6666dc4b6c4b88a1cbe2316b5e1952a2cfde4730337f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n9xlx" Sep 5 00:26:40.873233 kubelet[2702]: E0905 00:26:40.873205 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-n9xlx_kube-system(ddfce860-366a-4895-a0c1-4011550414eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-n9xlx_kube-system(ddfce860-366a-4895-a0c1-4011550414eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3442fc5a2d3306817b6666dc4b6c4b88a1cbe2316b5e1952a2cfde4730337f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n9xlx" podUID="ddfce860-366a-4895-a0c1-4011550414eb" Sep 5 00:26:40.874019 containerd[1576]: time="2025-09-05T00:26:40.873938291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dbd9bdd89-lcv9t,Uid:bd06cdff-0c5c-4409-a405-e23b4ac2ed93,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2a9ec2cdb2a2eac2a7ee9afedb137ef77bdb508adc25fd94f7ddc149e3af48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.874408 kubelet[2702]: E0905 00:26:40.874221 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7f2a9ec2cdb2a2eac2a7ee9afedb137ef77bdb508adc25fd94f7ddc149e3af48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.874408 kubelet[2702]: E0905 00:26:40.874282 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2a9ec2cdb2a2eac2a7ee9afedb137ef77bdb508adc25fd94f7ddc149e3af48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dbd9bdd89-lcv9t" Sep 5 00:26:40.874408 kubelet[2702]: E0905 00:26:40.874308 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2a9ec2cdb2a2eac2a7ee9afedb137ef77bdb508adc25fd94f7ddc149e3af48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dbd9bdd89-lcv9t" Sep 5 00:26:40.874591 kubelet[2702]: E0905 00:26:40.874355 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dbd9bdd89-lcv9t_calico-apiserver(bd06cdff-0c5c-4409-a405-e23b4ac2ed93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dbd9bdd89-lcv9t_calico-apiserver(bd06cdff-0c5c-4409-a405-e23b4ac2ed93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f2a9ec2cdb2a2eac2a7ee9afedb137ef77bdb508adc25fd94f7ddc149e3af48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-dbd9bdd89-lcv9t" podUID="bd06cdff-0c5c-4409-a405-e23b4ac2ed93" Sep 5 00:26:40.877577 systemd[1]: run-netns-cni\x2d259118a0\x2d5920\x2d0abb\x2d7b19\x2da6629c9f5c2e.mount: Deactivated successfully. Sep 5 00:26:40.877901 systemd[1]: run-netns-cni\x2d230f4a74\x2d0587\x2d9919\x2d12ff\x2dfedea48f5e6f.mount: Deactivated successfully. Sep 5 00:26:40.882087 containerd[1576]: time="2025-09-05T00:26:40.882024572Z" level=error msg="Failed to destroy network for sandbox \"0cc435abc5fccc7a3dca2a4cf7699fe386e92860563133c55c82e1f20a00f1f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.886802 containerd[1576]: time="2025-09-05T00:26:40.886676011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fk4jw,Uid:31aada08-735b-4fc3-b902-f399faf5cc8f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cc435abc5fccc7a3dca2a4cf7699fe386e92860563133c55c82e1f20a00f1f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.887013 kubelet[2702]: E0905 00:26:40.886949 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cc435abc5fccc7a3dca2a4cf7699fe386e92860563133c55c82e1f20a00f1f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.887069 kubelet[2702]: E0905 00:26:40.887042 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0cc435abc5fccc7a3dca2a4cf7699fe386e92860563133c55c82e1f20a00f1f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fk4jw" Sep 5 00:26:40.887069 kubelet[2702]: E0905 00:26:40.887063 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cc435abc5fccc7a3dca2a4cf7699fe386e92860563133c55c82e1f20a00f1f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fk4jw" Sep 5 00:26:40.887142 kubelet[2702]: E0905 00:26:40.887117 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fk4jw_kube-system(31aada08-735b-4fc3-b902-f399faf5cc8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fk4jw_kube-system(31aada08-735b-4fc3-b902-f399faf5cc8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cc435abc5fccc7a3dca2a4cf7699fe386e92860563133c55c82e1f20a00f1f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fk4jw" podUID="31aada08-735b-4fc3-b902-f399faf5cc8f" Sep 5 00:26:40.888209 systemd[1]: run-netns-cni\x2dcbd3c4f9\x2d8fdb\x2db7a8\x2d199c\x2d7a1a4abedac2.mount: Deactivated successfully. 
Sep 5 00:26:40.888958 containerd[1576]: time="2025-09-05T00:26:40.888923012Z" level=error msg="Failed to destroy network for sandbox \"91edd571c4f8b785d9e5db5d505bbe898d64df470e376adb77dba243cbc1f7ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.891038 containerd[1576]: time="2025-09-05T00:26:40.890992129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65798bbcc6-z68c4,Uid:85d720c7-cda3-4d60-8e89-33bf79925430,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91edd571c4f8b785d9e5db5d505bbe898d64df470e376adb77dba243cbc1f7ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.891496 kubelet[2702]: E0905 00:26:40.891450 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91edd571c4f8b785d9e5db5d505bbe898d64df470e376adb77dba243cbc1f7ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.891550 kubelet[2702]: E0905 00:26:40.891505 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91edd571c4f8b785d9e5db5d505bbe898d64df470e376adb77dba243cbc1f7ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65798bbcc6-z68c4" Sep 5 00:26:40.891550 kubelet[2702]: E0905 00:26:40.891525 2702 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91edd571c4f8b785d9e5db5d505bbe898d64df470e376adb77dba243cbc1f7ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65798bbcc6-z68c4" Sep 5 00:26:40.892094 kubelet[2702]: E0905 00:26:40.892055 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65798bbcc6-z68c4_calico-system(85d720c7-cda3-4d60-8e89-33bf79925430)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65798bbcc6-z68c4_calico-system(85d720c7-cda3-4d60-8e89-33bf79925430)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91edd571c4f8b785d9e5db5d505bbe898d64df470e376adb77dba243cbc1f7ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65798bbcc6-z68c4" podUID="85d720c7-cda3-4d60-8e89-33bf79925430" Sep 5 00:26:40.892862 containerd[1576]: time="2025-09-05T00:26:40.892833018Z" level=error msg="Failed to destroy network for sandbox \"262a4791d789d59a41e393b3165d5d5e99e4a569f76c27d2bcf70b05c6dd9a63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.893157 systemd[1]: run-netns-cni\x2d75003d67\x2d3aac\x2d66da\x2dc75d\x2d421d30b32968.mount: Deactivated successfully. 
Sep 5 00:26:40.895014 containerd[1576]: time="2025-09-05T00:26:40.894937351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-f2lcw,Uid:b78cd636-fa7a-4c71-9603-229c7b087321,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"262a4791d789d59a41e393b3165d5d5e99e4a569f76c27d2bcf70b05c6dd9a63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.895220 kubelet[2702]: E0905 00:26:40.895126 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262a4791d789d59a41e393b3165d5d5e99e4a569f76c27d2bcf70b05c6dd9a63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.895220 kubelet[2702]: E0905 00:26:40.895164 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262a4791d789d59a41e393b3165d5d5e99e4a569f76c27d2bcf70b05c6dd9a63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-f2lcw" Sep 5 00:26:40.895220 kubelet[2702]: E0905 00:26:40.895182 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262a4791d789d59a41e393b3165d5d5e99e4a569f76c27d2bcf70b05c6dd9a63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-54d579b49d-f2lcw" Sep 5 00:26:40.895358 kubelet[2702]: E0905 00:26:40.895210 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-f2lcw_calico-system(b78cd636-fa7a-4c71-9603-229c7b087321)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-f2lcw_calico-system(b78cd636-fa7a-4c71-9603-229c7b087321)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"262a4791d789d59a41e393b3165d5d5e99e4a569f76c27d2bcf70b05c6dd9a63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-f2lcw" podUID="b78cd636-fa7a-4c71-9603-229c7b087321" Sep 5 00:26:40.899713 containerd[1576]: time="2025-09-05T00:26:40.899659573Z" level=error msg="Failed to destroy network for sandbox \"76a0b222ad45d1ecfe464d714c18b83268341c7e24585d5a71a69dd217ae8286\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.901019 containerd[1576]: time="2025-09-05T00:26:40.900963072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f784bbc7f-ggrdb,Uid:8eb0645a-fa2c-4e5e-a73c-a9398ff81c61,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76a0b222ad45d1ecfe464d714c18b83268341c7e24585d5a71a69dd217ae8286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.901206 kubelet[2702]: E0905 00:26:40.901172 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"76a0b222ad45d1ecfe464d714c18b83268341c7e24585d5a71a69dd217ae8286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:40.901278 kubelet[2702]: E0905 00:26:40.901219 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76a0b222ad45d1ecfe464d714c18b83268341c7e24585d5a71a69dd217ae8286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f784bbc7f-ggrdb" Sep 5 00:26:40.901278 kubelet[2702]: E0905 00:26:40.901236 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76a0b222ad45d1ecfe464d714c18b83268341c7e24585d5a71a69dd217ae8286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f784bbc7f-ggrdb" Sep 5 00:26:40.901328 kubelet[2702]: E0905 00:26:40.901277 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f784bbc7f-ggrdb_calico-system(8eb0645a-fa2c-4e5e-a73c-a9398ff81c61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f784bbc7f-ggrdb_calico-system(8eb0645a-fa2c-4e5e-a73c-a9398ff81c61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76a0b222ad45d1ecfe464d714c18b83268341c7e24585d5a71a69dd217ae8286\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f784bbc7f-ggrdb" podUID="8eb0645a-fa2c-4e5e-a73c-a9398ff81c61" Sep 5 00:26:41.360390 systemd[1]: Created slice kubepods-besteffort-pod4cfb4c65_a79b_4cf5_96ea_45ce0feb9ceb.slice - libcontainer container kubepods-besteffort-pod4cfb4c65_a79b_4cf5_96ea_45ce0feb9ceb.slice. Sep 5 00:26:41.363068 containerd[1576]: time="2025-09-05T00:26:41.362982774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qk2wl,Uid:4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb,Namespace:calico-system,Attempt:0,}" Sep 5 00:26:41.415218 containerd[1576]: time="2025-09-05T00:26:41.415131325Z" level=error msg="Failed to destroy network for sandbox \"b3fbb1064600d2722c11f96d5f98754a5b275889adeb11f581a72fbd9ba77934\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:41.416906 containerd[1576]: time="2025-09-05T00:26:41.416857949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qk2wl,Uid:4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3fbb1064600d2722c11f96d5f98754a5b275889adeb11f581a72fbd9ba77934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:26:41.417202 kubelet[2702]: E0905 00:26:41.417149 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3fbb1064600d2722c11f96d5f98754a5b275889adeb11f581a72fbd9ba77934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 
00:26:41.417294 kubelet[2702]: E0905 00:26:41.417242 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3fbb1064600d2722c11f96d5f98754a5b275889adeb11f581a72fbd9ba77934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qk2wl" Sep 5 00:26:41.417294 kubelet[2702]: E0905 00:26:41.417268 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3fbb1064600d2722c11f96d5f98754a5b275889adeb11f581a72fbd9ba77934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qk2wl" Sep 5 00:26:41.417361 kubelet[2702]: E0905 00:26:41.417323 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qk2wl_calico-system(4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qk2wl_calico-system(4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3fbb1064600d2722c11f96d5f98754a5b275889adeb11f581a72fbd9ba77934\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qk2wl" podUID="4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb" Sep 5 00:26:41.614693 containerd[1576]: time="2025-09-05T00:26:41.614536972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 00:26:41.729346 systemd[1]: 
run-netns-cni\x2df5cdaeeb\x2d0d7c\x2d0011\x2dd24f\x2d8a3815ebe22c.mount: Deactivated successfully. Sep 5 00:26:41.729740 systemd[1]: run-netns-cni\x2dd3abd9a8\x2d3a53\x2d3100\x2da78f\x2d67bc6c6d82db.mount: Deactivated successfully. Sep 5 00:26:47.545103 kubelet[2702]: I0905 00:26:47.544988 2702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:26:47.545912 kubelet[2702]: E0905 00:26:47.545583 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:47.624980 kubelet[2702]: E0905 00:26:47.624927 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:51.089527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3831338387.mount: Deactivated successfully. Sep 5 00:26:51.670566 containerd[1576]: time="2025-09-05T00:26:51.670404995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:51.671829 containerd[1576]: time="2025-09-05T00:26:51.671801617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 5 00:26:51.673148 containerd[1576]: time="2025-09-05T00:26:51.673119100Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:51.676239 containerd[1576]: time="2025-09-05T00:26:51.676186858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:51.676734 containerd[1576]: time="2025-09-05T00:26:51.676678951Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.062079021s" Sep 5 00:26:51.676816 containerd[1576]: time="2025-09-05T00:26:51.676735448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 5 00:26:51.698114 containerd[1576]: time="2025-09-05T00:26:51.698048262Z" level=info msg="CreateContainer within sandbox \"a06d91a23d448a893ebc4d3cb0c8ea32b51249ea0fc97bfac0f925cd50a98660\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 00:26:51.707303 containerd[1576]: time="2025-09-05T00:26:51.707268999Z" level=info msg="Container d2f3fefb34074de20091c9dc54e607ea407a38f864d711a89e78464c5a7052ac: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:26:51.743121 containerd[1576]: time="2025-09-05T00:26:51.743041865Z" level=info msg="CreateContainer within sandbox \"a06d91a23d448a893ebc4d3cb0c8ea32b51249ea0fc97bfac0f925cd50a98660\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d2f3fefb34074de20091c9dc54e607ea407a38f864d711a89e78464c5a7052ac\"" Sep 5 00:26:51.743886 containerd[1576]: time="2025-09-05T00:26:51.743853519Z" level=info msg="StartContainer for \"d2f3fefb34074de20091c9dc54e607ea407a38f864d711a89e78464c5a7052ac\"" Sep 5 00:26:51.745929 containerd[1576]: time="2025-09-05T00:26:51.745884641Z" level=info msg="connecting to shim d2f3fefb34074de20091c9dc54e607ea407a38f864d711a89e78464c5a7052ac" address="unix:///run/containerd/s/e4cfc6f72f3024d6e8dcc8f189520fcb9e710834385a0a11c041bee3de81f894" protocol=ttrpc version=3 Sep 5 00:26:51.772468 systemd[1]: Started 
cri-containerd-d2f3fefb34074de20091c9dc54e607ea407a38f864d711a89e78464c5a7052ac.scope - libcontainer container d2f3fefb34074de20091c9dc54e607ea407a38f864d711a89e78464c5a7052ac. Sep 5 00:26:51.781742 systemd[1]: Started sshd@7-10.0.0.14:22-10.0.0.1:48498.service - OpenSSH per-connection server daemon (10.0.0.1:48498). Sep 5 00:26:51.851665 containerd[1576]: time="2025-09-05T00:26:51.851609208Z" level=info msg="StartContainer for \"d2f3fefb34074de20091c9dc54e607ea407a38f864d711a89e78464c5a7052ac\" returns successfully" Sep 5 00:26:51.881062 sshd[3785]: Accepted publickey for core from 10.0.0.1 port 48498 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:26:51.882927 sshd-session[3785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:26:51.889046 systemd-logind[1551]: New session 8 of user core. Sep 5 00:26:51.899259 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 00:26:51.930554 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 00:26:51.930713 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 5 00:26:52.109036 sshd[3803]: Connection closed by 10.0.0.1 port 48498 Sep 5 00:26:52.109816 kubelet[2702]: I0905 00:26:52.109771 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br2k2\" (UniqueName: \"kubernetes.io/projected/85d720c7-cda3-4d60-8e89-33bf79925430-kube-api-access-br2k2\") pod \"85d720c7-cda3-4d60-8e89-33bf79925430\" (UID: \"85d720c7-cda3-4d60-8e89-33bf79925430\") " Sep 5 00:26:52.110162 kubelet[2702]: I0905 00:26:52.109851 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85d720c7-cda3-4d60-8e89-33bf79925430-whisker-ca-bundle\") pod \"85d720c7-cda3-4d60-8e89-33bf79925430\" (UID: \"85d720c7-cda3-4d60-8e89-33bf79925430\") " Sep 5 00:26:52.110162 kubelet[2702]: I0905 00:26:52.109882 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85d720c7-cda3-4d60-8e89-33bf79925430-whisker-backend-key-pair\") pod \"85d720c7-cda3-4d60-8e89-33bf79925430\" (UID: \"85d720c7-cda3-4d60-8e89-33bf79925430\") " Sep 5 00:26:52.111219 sshd-session[3785]: pam_unix(sshd:session): session closed for user core Sep 5 00:26:52.112876 kubelet[2702]: I0905 00:26:52.112830 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85d720c7-cda3-4d60-8e89-33bf79925430-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "85d720c7-cda3-4d60-8e89-33bf79925430" (UID: "85d720c7-cda3-4d60-8e89-33bf79925430"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 00:26:52.117755 systemd[1]: var-lib-kubelet-pods-85d720c7\x2dcda3\x2d4d60\x2d8e89\x2d33bf79925430-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 5 00:26:52.122339 kubelet[2702]: I0905 00:26:52.122279 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85d720c7-cda3-4d60-8e89-33bf79925430-kube-api-access-br2k2" (OuterVolumeSpecName: "kube-api-access-br2k2") pod "85d720c7-cda3-4d60-8e89-33bf79925430" (UID: "85d720c7-cda3-4d60-8e89-33bf79925430"). InnerVolumeSpecName "kube-api-access-br2k2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:26:52.124065 kubelet[2702]: I0905 00:26:52.122969 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d720c7-cda3-4d60-8e89-33bf79925430-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "85d720c7-cda3-4d60-8e89-33bf79925430" (UID: "85d720c7-cda3-4d60-8e89-33bf79925430"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 00:26:52.124385 systemd[1]: var-lib-kubelet-pods-85d720c7\x2dcda3\x2d4d60\x2d8e89\x2d33bf79925430-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbr2k2.mount: Deactivated successfully. Sep 5 00:26:52.125626 systemd[1]: sshd@7-10.0.0.14:22-10.0.0.1:48498.service: Deactivated successfully. Sep 5 00:26:52.130682 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:26:52.135197 systemd-logind[1551]: Session 8 logged out. Waiting for processes to exit. Sep 5 00:26:52.137909 systemd-logind[1551]: Removed session 8. 
Sep 5 00:26:52.210442 kubelet[2702]: I0905 00:26:52.210387 2702 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85d720c7-cda3-4d60-8e89-33bf79925430-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 00:26:52.210442 kubelet[2702]: I0905 00:26:52.210453 2702 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-br2k2\" (UniqueName: \"kubernetes.io/projected/85d720c7-cda3-4d60-8e89-33bf79925430-kube-api-access-br2k2\") on node \"localhost\" DevicePath \"\"" Sep 5 00:26:52.210612 kubelet[2702]: I0905 00:26:52.210468 2702 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85d720c7-cda3-4d60-8e89-33bf79925430-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 00:26:52.369297 systemd[1]: Removed slice kubepods-besteffort-pod85d720c7_cda3_4d60_8e89_33bf79925430.slice - libcontainer container kubepods-besteffort-pod85d720c7_cda3_4d60_8e89_33bf79925430.slice. Sep 5 00:26:52.683350 kubelet[2702]: I0905 00:26:52.683127 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fr72g" podStartSLOduration=2.663391666 podStartE2EDuration="29.683105887s" podCreationTimestamp="2025-09-05 00:26:23 +0000 UTC" firstStartedPulling="2025-09-05 00:26:24.657935142 +0000 UTC m=+22.413969919" lastFinishedPulling="2025-09-05 00:26:51.677649363 +0000 UTC m=+49.433684140" observedRunningTime="2025-09-05 00:26:52.671191165 +0000 UTC m=+50.427225972" watchObservedRunningTime="2025-09-05 00:26:52.683105887 +0000 UTC m=+50.439140664" Sep 5 00:26:52.732270 systemd[1]: Created slice kubepods-besteffort-pod3ed6d83d_6ba0_4c12_97ce_ac1ea8d0b941.slice - libcontainer container kubepods-besteffort-pod3ed6d83d_6ba0_4c12_97ce_ac1ea8d0b941.slice. 
Sep 5 00:26:52.790015 containerd[1576]: time="2025-09-05T00:26:52.789944755Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2f3fefb34074de20091c9dc54e607ea407a38f864d711a89e78464c5a7052ac\" id:\"aecbdeeddbadc89f91a135146a4f66216fb4e388af8e192b0a3b01cf565e983d\" pid:3860 exit_status:1 exited_at:{seconds:1757032012 nanos:789547258}" Sep 5 00:26:52.815572 kubelet[2702]: I0905 00:26:52.815473 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6jgc\" (UniqueName: \"kubernetes.io/projected/3ed6d83d-6ba0-4c12-97ce-ac1ea8d0b941-kube-api-access-q6jgc\") pod \"whisker-cc6d9b955-2lwjt\" (UID: \"3ed6d83d-6ba0-4c12-97ce-ac1ea8d0b941\") " pod="calico-system/whisker-cc6d9b955-2lwjt" Sep 5 00:26:52.815572 kubelet[2702]: I0905 00:26:52.815558 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ed6d83d-6ba0-4c12-97ce-ac1ea8d0b941-whisker-backend-key-pair\") pod \"whisker-cc6d9b955-2lwjt\" (UID: \"3ed6d83d-6ba0-4c12-97ce-ac1ea8d0b941\") " pod="calico-system/whisker-cc6d9b955-2lwjt" Sep 5 00:26:52.815572 kubelet[2702]: I0905 00:26:52.815582 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ed6d83d-6ba0-4c12-97ce-ac1ea8d0b941-whisker-ca-bundle\") pod \"whisker-cc6d9b955-2lwjt\" (UID: \"3ed6d83d-6ba0-4c12-97ce-ac1ea8d0b941\") " pod="calico-system/whisker-cc6d9b955-2lwjt" Sep 5 00:26:53.037103 containerd[1576]: time="2025-09-05T00:26:53.037046569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cc6d9b955-2lwjt,Uid:3ed6d83d-6ba0-4c12-97ce-ac1ea8d0b941,Namespace:calico-system,Attempt:0,}" Sep 5 00:26:53.233567 systemd-networkd[1473]: calibb94ca24e7b: Link UP Sep 5 00:26:53.234645 systemd-networkd[1473]: calibb94ca24e7b: Gained carrier Sep 5 00:26:53.252137 
containerd[1576]: 2025-09-05 00:26:53.059 [INFO][3876] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:26:53.252137 containerd[1576]: 2025-09-05 00:26:53.080 [INFO][3876] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--cc6d9b955--2lwjt-eth0 whisker-cc6d9b955- calico-system 3ed6d83d-6ba0-4c12-97ce-ac1ea8d0b941 1012 0 2025-09-05 00:26:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cc6d9b955 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-cc6d9b955-2lwjt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibb94ca24e7b [] [] }} ContainerID="7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" Namespace="calico-system" Pod="whisker-cc6d9b955-2lwjt" WorkloadEndpoint="localhost-k8s-whisker--cc6d9b955--2lwjt-" Sep 5 00:26:53.252137 containerd[1576]: 2025-09-05 00:26:53.080 [INFO][3876] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" Namespace="calico-system" Pod="whisker-cc6d9b955-2lwjt" WorkloadEndpoint="localhost-k8s-whisker--cc6d9b955--2lwjt-eth0" Sep 5 00:26:53.252137 containerd[1576]: 2025-09-05 00:26:53.183 [INFO][3889] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" HandleID="k8s-pod-network.7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" Workload="localhost-k8s-whisker--cc6d9b955--2lwjt-eth0" Sep 5 00:26:53.252380 containerd[1576]: 2025-09-05 00:26:53.183 [INFO][3889] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" HandleID="k8s-pod-network.7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" 
Workload="localhost-k8s-whisker--cc6d9b955--2lwjt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000463f30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-cc6d9b955-2lwjt", "timestamp":"2025-09-05 00:26:53.183110724 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:26:53.252380 containerd[1576]: 2025-09-05 00:26:53.183 [INFO][3889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:26:53.252380 containerd[1576]: 2025-09-05 00:26:53.183 [INFO][3889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:26:53.252380 containerd[1576]: 2025-09-05 00:26:53.184 [INFO][3889] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:26:53.252380 containerd[1576]: 2025-09-05 00:26:53.192 [INFO][3889] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" host="localhost" Sep 5 00:26:53.252380 containerd[1576]: 2025-09-05 00:26:53.197 [INFO][3889] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:26:53.252380 containerd[1576]: 2025-09-05 00:26:53.201 [INFO][3889] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:26:53.252380 containerd[1576]: 2025-09-05 00:26:53.204 [INFO][3889] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:26:53.252380 containerd[1576]: 2025-09-05 00:26:53.206 [INFO][3889] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:26:53.252380 containerd[1576]: 2025-09-05 00:26:53.206 [INFO][3889] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" host="localhost" Sep 5 00:26:53.252610 containerd[1576]: 2025-09-05 00:26:53.208 [INFO][3889] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17 Sep 5 00:26:53.252610 containerd[1576]: 2025-09-05 00:26:53.212 [INFO][3889] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" host="localhost" Sep 5 00:26:53.252610 containerd[1576]: 2025-09-05 00:26:53.220 [INFO][3889] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" host="localhost" Sep 5 00:26:53.252610 containerd[1576]: 2025-09-05 00:26:53.220 [INFO][3889] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" host="localhost" Sep 5 00:26:53.252610 containerd[1576]: 2025-09-05 00:26:53.221 [INFO][3889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:26:53.252610 containerd[1576]: 2025-09-05 00:26:53.221 [INFO][3889] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" HandleID="k8s-pod-network.7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" Workload="localhost-k8s-whisker--cc6d9b955--2lwjt-eth0" Sep 5 00:26:53.252803 containerd[1576]: 2025-09-05 00:26:53.224 [INFO][3876] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" Namespace="calico-system" Pod="whisker-cc6d9b955-2lwjt" WorkloadEndpoint="localhost-k8s-whisker--cc6d9b955--2lwjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cc6d9b955--2lwjt-eth0", GenerateName:"whisker-cc6d9b955-", Namespace:"calico-system", SelfLink:"", UID:"3ed6d83d-6ba0-4c12-97ce-ac1ea8d0b941", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cc6d9b955", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-cc6d9b955-2lwjt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibb94ca24e7b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:53.252803 containerd[1576]: 2025-09-05 00:26:53.224 [INFO][3876] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" Namespace="calico-system" Pod="whisker-cc6d9b955-2lwjt" WorkloadEndpoint="localhost-k8s-whisker--cc6d9b955--2lwjt-eth0" Sep 5 00:26:53.252875 containerd[1576]: 2025-09-05 00:26:53.224 [INFO][3876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb94ca24e7b ContainerID="7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" Namespace="calico-system" Pod="whisker-cc6d9b955-2lwjt" WorkloadEndpoint="localhost-k8s-whisker--cc6d9b955--2lwjt-eth0" Sep 5 00:26:53.252875 containerd[1576]: 2025-09-05 00:26:53.239 [INFO][3876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" Namespace="calico-system" Pod="whisker-cc6d9b955-2lwjt" WorkloadEndpoint="localhost-k8s-whisker--cc6d9b955--2lwjt-eth0" Sep 5 00:26:53.252923 containerd[1576]: 2025-09-05 00:26:53.240 [INFO][3876] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" Namespace="calico-system" Pod="whisker-cc6d9b955-2lwjt" WorkloadEndpoint="localhost-k8s-whisker--cc6d9b955--2lwjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cc6d9b955--2lwjt-eth0", GenerateName:"whisker-cc6d9b955-", Namespace:"calico-system", SelfLink:"", UID:"3ed6d83d-6ba0-4c12-97ce-ac1ea8d0b941", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 52, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cc6d9b955", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17", Pod:"whisker-cc6d9b955-2lwjt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibb94ca24e7b", MAC:"ce:0d:9a:2f:b0:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:53.252977 containerd[1576]: 2025-09-05 00:26:53.249 [INFO][3876] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" Namespace="calico-system" Pod="whisker-cc6d9b955-2lwjt" WorkloadEndpoint="localhost-k8s-whisker--cc6d9b955--2lwjt-eth0" Sep 5 00:26:53.322350 containerd[1576]: time="2025-09-05T00:26:53.322211230Z" level=info msg="connecting to shim 7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17" address="unix:///run/containerd/s/1cf433ff1d3a0c6a671719bbe3757bb35b94587b292627db73762daefb4af8b3" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:26:53.351171 systemd[1]: Started cri-containerd-7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17.scope - libcontainer container 7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17. 
Sep 5 00:26:53.355402 kubelet[2702]: E0905 00:26:53.355359 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:53.357770 kubelet[2702]: E0905 00:26:53.357333 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:53.359033 containerd[1576]: time="2025-09-05T00:26:53.358950583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fk4jw,Uid:31aada08-735b-4fc3-b902-f399faf5cc8f,Namespace:kube-system,Attempt:0,}" Sep 5 00:26:53.359627 containerd[1576]: time="2025-09-05T00:26:53.359567762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n9xlx,Uid:ddfce860-366a-4895-a0c1-4011550414eb,Namespace:kube-system,Attempt:0,}" Sep 5 00:26:53.376844 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:26:53.570212 containerd[1576]: time="2025-09-05T00:26:53.570145627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cc6d9b955-2lwjt,Uid:3ed6d83d-6ba0-4c12-97ce-ac1ea8d0b941,Namespace:calico-system,Attempt:0,} returns sandbox id \"7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17\"" Sep 5 00:26:53.574792 containerd[1576]: time="2025-09-05T00:26:53.574679515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 00:26:53.590862 systemd-networkd[1473]: cali8ba1a2e5c70: Link UP Sep 5 00:26:53.594034 systemd-networkd[1473]: cali8ba1a2e5c70: Gained carrier Sep 5 00:26:53.622049 containerd[1576]: 2025-09-05 00:26:53.418 [INFO][3960] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:26:53.622049 containerd[1576]: 2025-09-05 00:26:53.431 [INFO][3960] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--n9xlx-eth0 coredns-668d6bf9bc- kube-system ddfce860-366a-4895-a0c1-4011550414eb 893 0 2025-09-05 00:26:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-n9xlx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8ba1a2e5c70 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-n9xlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n9xlx-" Sep 5 00:26:53.622049 containerd[1576]: 2025-09-05 00:26:53.432 [INFO][3960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-n9xlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n9xlx-eth0" Sep 5 00:26:53.622049 containerd[1576]: 2025-09-05 00:26:53.524 [INFO][4017] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" HandleID="k8s-pod-network.3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" Workload="localhost-k8s-coredns--668d6bf9bc--n9xlx-eth0" Sep 5 00:26:53.622318 containerd[1576]: 2025-09-05 00:26:53.525 [INFO][4017] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" HandleID="k8s-pod-network.3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" Workload="localhost-k8s-coredns--668d6bf9bc--n9xlx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f500), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-n9xlx", 
"timestamp":"2025-09-05 00:26:53.524889449 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:26:53.622318 containerd[1576]: 2025-09-05 00:26:53.525 [INFO][4017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:26:53.622318 containerd[1576]: 2025-09-05 00:26:53.525 [INFO][4017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:26:53.622318 containerd[1576]: 2025-09-05 00:26:53.525 [INFO][4017] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:26:53.622318 containerd[1576]: 2025-09-05 00:26:53.535 [INFO][4017] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" host="localhost" Sep 5 00:26:53.622318 containerd[1576]: 2025-09-05 00:26:53.543 [INFO][4017] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:26:53.622318 containerd[1576]: 2025-09-05 00:26:53.547 [INFO][4017] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:26:53.622318 containerd[1576]: 2025-09-05 00:26:53.549 [INFO][4017] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:26:53.622318 containerd[1576]: 2025-09-05 00:26:53.553 [INFO][4017] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:26:53.622318 containerd[1576]: 2025-09-05 00:26:53.553 [INFO][4017] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" host="localhost" Sep 5 00:26:53.622545 containerd[1576]: 2025-09-05 00:26:53.555 [INFO][4017] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27 Sep 5 00:26:53.622545 containerd[1576]: 2025-09-05 00:26:53.570 [INFO][4017] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" host="localhost" Sep 5 00:26:53.622545 containerd[1576]: 2025-09-05 00:26:53.579 [INFO][4017] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" host="localhost" Sep 5 00:26:53.622545 containerd[1576]: 2025-09-05 00:26:53.579 [INFO][4017] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" host="localhost" Sep 5 00:26:53.622545 containerd[1576]: 2025-09-05 00:26:53.579 [INFO][4017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:26:53.622545 containerd[1576]: 2025-09-05 00:26:53.579 [INFO][4017] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" HandleID="k8s-pod-network.3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" Workload="localhost-k8s-coredns--668d6bf9bc--n9xlx-eth0" Sep 5 00:26:53.622674 containerd[1576]: 2025-09-05 00:26:53.584 [INFO][3960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-n9xlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n9xlx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--n9xlx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ddfce860-366a-4895-a0c1-4011550414eb", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-n9xlx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ba1a2e5c70", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:53.622748 containerd[1576]: 2025-09-05 00:26:53.586 [INFO][3960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-n9xlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n9xlx-eth0" Sep 5 00:26:53.622748 containerd[1576]: 2025-09-05 00:26:53.586 [INFO][3960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ba1a2e5c70 ContainerID="3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-n9xlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n9xlx-eth0" Sep 5 00:26:53.622748 containerd[1576]: 2025-09-05 00:26:53.593 [INFO][3960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-n9xlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n9xlx-eth0" Sep 5 00:26:53.622830 containerd[1576]: 2025-09-05 00:26:53.600 [INFO][3960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-n9xlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n9xlx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--n9xlx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ddfce860-366a-4895-a0c1-4011550414eb", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27", Pod:"coredns-668d6bf9bc-n9xlx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ba1a2e5c70", MAC:"6a:0e:db:63:d4:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:53.622830 containerd[1576]: 2025-09-05 00:26:53.615 [INFO][3960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-n9xlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n9xlx-eth0" Sep 5 00:26:53.667601 containerd[1576]: time="2025-09-05T00:26:53.667126281Z" level=info msg="connecting to shim 3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27" address="unix:///run/containerd/s/463a805ee8beca4c318fea5370dd6cf17658d495f3ed5dd8724c40b08156518b" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:26:53.714689 systemd-networkd[1473]: cali2d64a29be9c: Link UP Sep 5 00:26:53.716718 systemd-networkd[1473]: cali2d64a29be9c: Gained carrier Sep 5 00:26:53.720325 systemd[1]: Started cri-containerd-3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27.scope - libcontainer container 3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27. Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.421 [INFO][3945] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.450 [INFO][3945] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--fk4jw-eth0 coredns-668d6bf9bc- kube-system 31aada08-735b-4fc3-b902-f399faf5cc8f 894 0 2025-09-05 00:26:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-fk4jw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2d64a29be9c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fk4jw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fk4jw-" Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.450 
[INFO][3945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fk4jw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fk4jw-eth0" Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.543 [INFO][4050] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" HandleID="k8s-pod-network.4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" Workload="localhost-k8s-coredns--668d6bf9bc--fk4jw-eth0" Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.543 [INFO][4050] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" HandleID="k8s-pod-network.4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" Workload="localhost-k8s-coredns--668d6bf9bc--fk4jw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005875a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-fk4jw", "timestamp":"2025-09-05 00:26:53.542767944 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.543 [INFO][4050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.579 [INFO][4050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.580 [INFO][4050] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.637 [INFO][4050] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" host="localhost" Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.644 [INFO][4050] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.655 [INFO][4050] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.660 [INFO][4050] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.663 [INFO][4050] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.667 [INFO][4050] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" host="localhost" Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.671 [INFO][4050] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.683 [INFO][4050] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" host="localhost" Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.695 [INFO][4050] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" host="localhost" Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.695 [INFO][4050] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" host="localhost" Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.695 [INFO][4050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:26:53.745091 containerd[1576]: 2025-09-05 00:26:53.695 [INFO][4050] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" HandleID="k8s-pod-network.4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" Workload="localhost-k8s-coredns--668d6bf9bc--fk4jw-eth0" Sep 5 00:26:53.745954 containerd[1576]: 2025-09-05 00:26:53.702 [INFO][3945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fk4jw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fk4jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fk4jw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"31aada08-735b-4fc3-b902-f399faf5cc8f", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-fk4jw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d64a29be9c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:53.745954 containerd[1576]: 2025-09-05 00:26:53.703 [INFO][3945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fk4jw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fk4jw-eth0" Sep 5 00:26:53.745954 containerd[1576]: 2025-09-05 00:26:53.703 [INFO][3945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d64a29be9c ContainerID="4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fk4jw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fk4jw-eth0" Sep 5 00:26:53.745954 containerd[1576]: 2025-09-05 00:26:53.717 [INFO][3945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fk4jw" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fk4jw-eth0" Sep 5 00:26:53.745954 containerd[1576]: 2025-09-05 00:26:53.720 [INFO][3945] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fk4jw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fk4jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fk4jw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"31aada08-735b-4fc3-b902-f399faf5cc8f", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d", Pod:"coredns-668d6bf9bc-fk4jw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d64a29be9c", MAC:"e6:75:61:2f:c6:f1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:53.745954 containerd[1576]: 2025-09-05 00:26:53.737 [INFO][3945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fk4jw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fk4jw-eth0" Sep 5 00:26:53.748224 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:26:53.822884 containerd[1576]: time="2025-09-05T00:26:53.822839696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2f3fefb34074de20091c9dc54e607ea407a38f864d711a89e78464c5a7052ac\" id:\"e7484accc16b8761d66c5b09fc5b7b6e199c093e8e18315661169e77ddc45c3c\" pid:4144 exit_status:1 exited_at:{seconds:1757032013 nanos:821956208}" Sep 5 00:26:53.871080 containerd[1576]: time="2025-09-05T00:26:53.869611288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n9xlx,Uid:ddfce860-366a-4895-a0c1-4011550414eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27\"" Sep 5 00:26:53.871190 kubelet[2702]: E0905 00:26:53.870609 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:53.874094 containerd[1576]: time="2025-09-05T00:26:53.873949570Z" level=info msg="CreateContainer within sandbox \"3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:26:53.895758 containerd[1576]: 
time="2025-09-05T00:26:53.895696335Z" level=info msg="Container 7cb359ceab2708a3eba8ffaecf431b5d9ce518b6b74eb33890e80533e0abbd33: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:26:53.896185 containerd[1576]: time="2025-09-05T00:26:53.896105061Z" level=info msg="connecting to shim 4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d" address="unix:///run/containerd/s/bd09004f4da255b3d4e7bf2b3dd5a9ed3009911781a7fbb10c897047951e0503" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:26:53.905737 containerd[1576]: time="2025-09-05T00:26:53.905694308Z" level=info msg="CreateContainer within sandbox \"3e97a6ab9ba3e9370c41fe8b22a2fa54c10d9084b3d7de5339ca34c550a6bd27\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cb359ceab2708a3eba8ffaecf431b5d9ce518b6b74eb33890e80533e0abbd33\"" Sep 5 00:26:53.906468 containerd[1576]: time="2025-09-05T00:26:53.906440258Z" level=info msg="StartContainer for \"7cb359ceab2708a3eba8ffaecf431b5d9ce518b6b74eb33890e80533e0abbd33\"" Sep 5 00:26:53.907281 containerd[1576]: time="2025-09-05T00:26:53.907240450Z" level=info msg="connecting to shim 7cb359ceab2708a3eba8ffaecf431b5d9ce518b6b74eb33890e80533e0abbd33" address="unix:///run/containerd/s/463a805ee8beca4c318fea5370dd6cf17658d495f3ed5dd8724c40b08156518b" protocol=ttrpc version=3 Sep 5 00:26:53.927189 systemd[1]: Started cri-containerd-4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d.scope - libcontainer container 4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d. Sep 5 00:26:53.930769 systemd[1]: Started cri-containerd-7cb359ceab2708a3eba8ffaecf431b5d9ce518b6b74eb33890e80533e0abbd33.scope - libcontainer container 7cb359ceab2708a3eba8ffaecf431b5d9ce518b6b74eb33890e80533e0abbd33. 
Sep 5 00:26:53.946213 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:26:54.108222 systemd-networkd[1473]: vxlan.calico: Link UP Sep 5 00:26:54.108236 systemd-networkd[1473]: vxlan.calico: Gained carrier Sep 5 00:26:54.349639 containerd[1576]: time="2025-09-05T00:26:54.349590567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fk4jw,Uid:31aada08-735b-4fc3-b902-f399faf5cc8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d\"" Sep 5 00:26:54.351325 kubelet[2702]: E0905 00:26:54.350754 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:54.352078 containerd[1576]: time="2025-09-05T00:26:54.352044141Z" level=info msg="StartContainer for \"7cb359ceab2708a3eba8ffaecf431b5d9ce518b6b74eb33890e80533e0abbd33\" returns successfully" Sep 5 00:26:54.355990 containerd[1576]: time="2025-09-05T00:26:54.355342101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-f2lcw,Uid:b78cd636-fa7a-4c71-9603-229c7b087321,Namespace:calico-system,Attempt:0,}" Sep 5 00:26:54.355990 containerd[1576]: time="2025-09-05T00:26:54.355378079Z" level=info msg="CreateContainer within sandbox \"4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:26:54.355990 containerd[1576]: time="2025-09-05T00:26:54.355420298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f784bbc7f-ggrdb,Uid:8eb0645a-fa2c-4e5e-a73c-a9398ff81c61,Namespace:calico-system,Attempt:0,}" Sep 5 00:26:54.355990 containerd[1576]: time="2025-09-05T00:26:54.355538419Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-dbd9bdd89-kwd5h,Uid:fc28e930-0dd7-4404-9939-e8102d8fc0f1,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:26:54.359094 kubelet[2702]: I0905 00:26:54.359051 2702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85d720c7-cda3-4d60-8e89-33bf79925430" path="/var/lib/kubelet/pods/85d720c7-cda3-4d60-8e89-33bf79925430/volumes" Sep 5 00:26:54.659190 kubelet[2702]: E0905 00:26:54.658752 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:54.848206 systemd-networkd[1473]: cali8ba1a2e5c70: Gained IPv6LL Sep 5 00:26:54.915301 kubelet[2702]: I0905 00:26:54.914086 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-n9xlx" podStartSLOduration=48.91406567 podStartE2EDuration="48.91406567s" podCreationTimestamp="2025-09-05 00:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:26:54.914030053 +0000 UTC m=+52.670064820" watchObservedRunningTime="2025-09-05 00:26:54.91406567 +0000 UTC m=+52.670100447" Sep 5 00:26:54.976189 systemd-networkd[1473]: calibb94ca24e7b: Gained IPv6LL Sep 5 00:26:55.252134 containerd[1576]: time="2025-09-05T00:26:55.252064548Z" level=info msg="Container cb5981fd78f54c630cdc44d5569af0b45e91f59b70fec6217d00ef1fd14f404d: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:26:55.280393 containerd[1576]: time="2025-09-05T00:26:55.280315475Z" level=info msg="CreateContainer within sandbox \"4453606d183af8ed965a0057177847cbaf990c1475a300d6f4ab0412cf185d9d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb5981fd78f54c630cdc44d5569af0b45e91f59b70fec6217d00ef1fd14f404d\"" Sep 5 00:26:55.281987 containerd[1576]: time="2025-09-05T00:26:55.281940925Z" level=info msg="StartContainer for 
\"cb5981fd78f54c630cdc44d5569af0b45e91f59b70fec6217d00ef1fd14f404d\"" Sep 5 00:26:55.283240 containerd[1576]: time="2025-09-05T00:26:55.283199346Z" level=info msg="connecting to shim cb5981fd78f54c630cdc44d5569af0b45e91f59b70fec6217d00ef1fd14f404d" address="unix:///run/containerd/s/bd09004f4da255b3d4e7bf2b3dd5a9ed3009911781a7fbb10c897047951e0503" protocol=ttrpc version=3 Sep 5 00:26:55.310841 systemd-networkd[1473]: calib11df125276: Link UP Sep 5 00:26:55.311962 systemd-networkd[1473]: calib11df125276: Gained carrier Sep 5 00:26:55.314251 systemd[1]: Started cri-containerd-cb5981fd78f54c630cdc44d5569af0b45e91f59b70fec6217d00ef1fd14f404d.scope - libcontainer container cb5981fd78f54c630cdc44d5569af0b45e91f59b70fec6217d00ef1fd14f404d. Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.134 [INFO][4364] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--f2lcw-eth0 goldmane-54d579b49d- calico-system b78cd636-fa7a-4c71-9603-229c7b087321 891 0 2025-09-05 00:26:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-f2lcw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib11df125276 [] [] }} ContainerID="f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" Namespace="calico-system" Pod="goldmane-54d579b49d-f2lcw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--f2lcw-" Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.148 [INFO][4364] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" Namespace="calico-system" Pod="goldmane-54d579b49d-f2lcw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--f2lcw-eth0" Sep 5 00:26:55.334978 
containerd[1576]: 2025-09-05 00:26:55.204 [INFO][4410] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" HandleID="k8s-pod-network.f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" Workload="localhost-k8s-goldmane--54d579b49d--f2lcw-eth0" Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.204 [INFO][4410] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" HandleID="k8s-pod-network.f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" Workload="localhost-k8s-goldmane--54d579b49d--f2lcw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf030), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-f2lcw", "timestamp":"2025-09-05 00:26:55.204508263 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.204 [INFO][4410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.204 [INFO][4410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.204 [INFO][4410] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.246 [INFO][4410] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" host="localhost" Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.258 [INFO][4410] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.264 [INFO][4410] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.266 [INFO][4410] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.269 [INFO][4410] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.269 [INFO][4410] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" host="localhost" Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.273 [INFO][4410] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479 Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.278 [INFO][4410] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" host="localhost" Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.291 [INFO][4410] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" host="localhost" Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.291 [INFO][4410] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" host="localhost" Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.291 [INFO][4410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:26:55.334978 containerd[1576]: 2025-09-05 00:26:55.291 [INFO][4410] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" HandleID="k8s-pod-network.f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" Workload="localhost-k8s-goldmane--54d579b49d--f2lcw-eth0" Sep 5 00:26:55.335762 containerd[1576]: 2025-09-05 00:26:55.305 [INFO][4364] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" Namespace="calico-system" Pod="goldmane-54d579b49d-f2lcw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--f2lcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--f2lcw-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b78cd636-fa7a-4c71-9603-229c7b087321", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-f2lcw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib11df125276", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:55.335762 containerd[1576]: 2025-09-05 00:26:55.305 [INFO][4364] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" Namespace="calico-system" Pod="goldmane-54d579b49d-f2lcw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--f2lcw-eth0" Sep 5 00:26:55.335762 containerd[1576]: 2025-09-05 00:26:55.305 [INFO][4364] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib11df125276 ContainerID="f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" Namespace="calico-system" Pod="goldmane-54d579b49d-f2lcw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--f2lcw-eth0" Sep 5 00:26:55.335762 containerd[1576]: 2025-09-05 00:26:55.313 [INFO][4364] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" Namespace="calico-system" Pod="goldmane-54d579b49d-f2lcw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--f2lcw-eth0" Sep 5 00:26:55.335762 containerd[1576]: 2025-09-05 00:26:55.313 [INFO][4364] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" Namespace="calico-system" Pod="goldmane-54d579b49d-f2lcw" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--f2lcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--f2lcw-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b78cd636-fa7a-4c71-9603-229c7b087321", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479", Pod:"goldmane-54d579b49d-f2lcw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib11df125276", MAC:"1a:54:2c:e1:ca:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:55.335762 containerd[1576]: 2025-09-05 00:26:55.331 [INFO][4364] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" Namespace="calico-system" Pod="goldmane-54d579b49d-f2lcw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--f2lcw-eth0" Sep 5 00:26:55.358284 containerd[1576]: time="2025-09-05T00:26:55.358197612Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-qk2wl,Uid:4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb,Namespace:calico-system,Attempt:0,}" Sep 5 00:26:55.358470 containerd[1576]: time="2025-09-05T00:26:55.358388089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dbd9bdd89-lcv9t,Uid:bd06cdff-0c5c-4409-a405-e23b4ac2ed93,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:26:55.360204 systemd-networkd[1473]: cali2d64a29be9c: Gained IPv6LL Sep 5 00:26:55.408756 containerd[1576]: time="2025-09-05T00:26:55.408608204Z" level=info msg="StartContainer for \"cb5981fd78f54c630cdc44d5569af0b45e91f59b70fec6217d00ef1fd14f404d\" returns successfully" Sep 5 00:26:55.424765 systemd-networkd[1473]: vxlan.calico: Gained IPv6LL Sep 5 00:26:55.445902 containerd[1576]: time="2025-09-05T00:26:55.445251630Z" level=info msg="connecting to shim f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479" address="unix:///run/containerd/s/7fd5739644ed31be803116d50f942e8c422b6dc2bb3cd5d6781f22a155e2684c" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:26:55.457719 systemd-networkd[1473]: calida654dd11aa: Link UP Sep 5 00:26:55.459532 systemd-networkd[1473]: calida654dd11aa: Gained carrier Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.178 [INFO][4379] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--dbd9bdd89--kwd5h-eth0 calico-apiserver-dbd9bdd89- calico-apiserver fc28e930-0dd7-4404-9939-e8102d8fc0f1 892 0 2025-09-05 00:26:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dbd9bdd89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-dbd9bdd89-kwd5h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calida654dd11aa [] [] }} 
ContainerID="d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-kwd5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--kwd5h-" Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.178 [INFO][4379] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-kwd5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--kwd5h-eth0" Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.241 [INFO][4419] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" HandleID="k8s-pod-network.d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" Workload="localhost-k8s-calico--apiserver--dbd9bdd89--kwd5h-eth0" Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.242 [INFO][4419] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" HandleID="k8s-pod-network.d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" Workload="localhost-k8s-calico--apiserver--dbd9bdd89--kwd5h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122460), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-dbd9bdd89-kwd5h", "timestamp":"2025-09-05 00:26:55.241086075 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.242 [INFO][4419] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.292 [INFO][4419] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.292 [INFO][4419] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.348 [INFO][4419] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" host="localhost" Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.359 [INFO][4419] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.368 [INFO][4419] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.374 [INFO][4419] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.379 [INFO][4419] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.379 [INFO][4419] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" host="localhost" Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.383 [INFO][4419] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.391 [INFO][4419] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" host="localhost" Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.404 [INFO][4419] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" host="localhost" Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.405 [INFO][4419] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" host="localhost" Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.405 [INFO][4419] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:26:55.490479 containerd[1576]: 2025-09-05 00:26:55.405 [INFO][4419] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" HandleID="k8s-pod-network.d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" Workload="localhost-k8s-calico--apiserver--dbd9bdd89--kwd5h-eth0" Sep 5 00:26:55.491329 containerd[1576]: 2025-09-05 00:26:55.451 [INFO][4379] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-kwd5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--kwd5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dbd9bdd89--kwd5h-eth0", GenerateName:"calico-apiserver-dbd9bdd89-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc28e930-0dd7-4404-9939-e8102d8fc0f1", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dbd9bdd89", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-dbd9bdd89-kwd5h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida654dd11aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:55.491329 containerd[1576]: 2025-09-05 00:26:55.451 [INFO][4379] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-kwd5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--kwd5h-eth0" Sep 5 00:26:55.491329 containerd[1576]: 2025-09-05 00:26:55.451 [INFO][4379] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida654dd11aa ContainerID="d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-kwd5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--kwd5h-eth0" Sep 5 00:26:55.491329 containerd[1576]: 2025-09-05 00:26:55.466 [INFO][4379] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-kwd5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--kwd5h-eth0" Sep 5 00:26:55.491329 containerd[1576]: 2025-09-05 00:26:55.466 
[INFO][4379] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-kwd5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--kwd5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dbd9bdd89--kwd5h-eth0", GenerateName:"calico-apiserver-dbd9bdd89-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc28e930-0dd7-4404-9939-e8102d8fc0f1", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dbd9bdd89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb", Pod:"calico-apiserver-dbd9bdd89-kwd5h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida654dd11aa", MAC:"16:89:82:72:47:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:55.491329 containerd[1576]: 2025-09-05 00:26:55.484 [INFO][4379] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-kwd5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--kwd5h-eth0" Sep 5 00:26:55.508401 systemd[1]: Started cri-containerd-f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479.scope - libcontainer container f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479. Sep 5 00:26:55.553731 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:26:55.556403 containerd[1576]: time="2025-09-05T00:26:55.556283256Z" level=info msg="connecting to shim d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb" address="unix:///run/containerd/s/58a8f70eb6ec8919546684115030200def9de2c23f1a74eb99421f264cc3a3b7" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:26:55.570084 systemd-networkd[1473]: cali4de7132ffee: Link UP Sep 5 00:26:55.572233 systemd-networkd[1473]: cali4de7132ffee: Gained carrier Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.253 [INFO][4395] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f784bbc7f--ggrdb-eth0 calico-kube-controllers-7f784bbc7f- calico-system 8eb0645a-fa2c-4e5e-a73c-a9398ff81c61 888 0 2025-09-05 00:26:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f784bbc7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7f784bbc7f-ggrdb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4de7132ffee [] [] }} ContainerID="504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" Namespace="calico-system" 
Pod="calico-kube-controllers-7f784bbc7f-ggrdb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f784bbc7f--ggrdb-" Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.253 [INFO][4395] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" Namespace="calico-system" Pod="calico-kube-controllers-7f784bbc7f-ggrdb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f784bbc7f--ggrdb-eth0" Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.324 [INFO][4431] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" HandleID="k8s-pod-network.504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" Workload="localhost-k8s-calico--kube--controllers--7f784bbc7f--ggrdb-eth0" Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.326 [INFO][4431] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" HandleID="k8s-pod-network.504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" Workload="localhost-k8s-calico--kube--controllers--7f784bbc7f--ggrdb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001395b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f784bbc7f-ggrdb", "timestamp":"2025-09-05 00:26:55.32452527 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.326 [INFO][4431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.405 [INFO][4431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.406 [INFO][4431] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.456 [INFO][4431] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" host="localhost" Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.475 [INFO][4431] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.489 [INFO][4431] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.499 [INFO][4431] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.504 [INFO][4431] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.504 [INFO][4431] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" host="localhost" Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.506 [INFO][4431] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.512 [INFO][4431] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" host="localhost" Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.534 [INFO][4431] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" host="localhost" Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.534 [INFO][4431] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" host="localhost" Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.534 [INFO][4431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:26:55.591039 containerd[1576]: 2025-09-05 00:26:55.535 [INFO][4431] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" HandleID="k8s-pod-network.504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" Workload="localhost-k8s-calico--kube--controllers--7f784bbc7f--ggrdb-eth0" Sep 5 00:26:55.591637 containerd[1576]: 2025-09-05 00:26:55.563 [INFO][4395] cni-plugin/k8s.go 418: Populated endpoint ContainerID="504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" Namespace="calico-system" Pod="calico-kube-controllers-7f784bbc7f-ggrdb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f784bbc7f--ggrdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f784bbc7f--ggrdb-eth0", GenerateName:"calico-kube-controllers-7f784bbc7f-", Namespace:"calico-system", SelfLink:"", UID:"8eb0645a-fa2c-4e5e-a73c-a9398ff81c61", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"7f784bbc7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f784bbc7f-ggrdb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4de7132ffee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:55.591637 containerd[1576]: 2025-09-05 00:26:55.563 [INFO][4395] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" Namespace="calico-system" Pod="calico-kube-controllers-7f784bbc7f-ggrdb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f784bbc7f--ggrdb-eth0" Sep 5 00:26:55.591637 containerd[1576]: 2025-09-05 00:26:55.563 [INFO][4395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4de7132ffee ContainerID="504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" Namespace="calico-system" Pod="calico-kube-controllers-7f784bbc7f-ggrdb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f784bbc7f--ggrdb-eth0" Sep 5 00:26:55.591637 containerd[1576]: 2025-09-05 00:26:55.573 [INFO][4395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" Namespace="calico-system" Pod="calico-kube-controllers-7f784bbc7f-ggrdb" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f784bbc7f--ggrdb-eth0" Sep 5 00:26:55.591637 containerd[1576]: 2025-09-05 00:26:55.574 [INFO][4395] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" Namespace="calico-system" Pod="calico-kube-controllers-7f784bbc7f-ggrdb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f784bbc7f--ggrdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f784bbc7f--ggrdb-eth0", GenerateName:"calico-kube-controllers-7f784bbc7f-", Namespace:"calico-system", SelfLink:"", UID:"8eb0645a-fa2c-4e5e-a73c-a9398ff81c61", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f784bbc7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc", Pod:"calico-kube-controllers-7f784bbc7f-ggrdb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4de7132ffee", MAC:"da:5e:d7:3c:e8:12", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:55.591637 containerd[1576]: 2025-09-05 00:26:55.587 [INFO][4395] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" Namespace="calico-system" Pod="calico-kube-controllers-7f784bbc7f-ggrdb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f784bbc7f--ggrdb-eth0" Sep 5 00:26:55.605193 systemd[1]: Started cri-containerd-d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb.scope - libcontainer container d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb. Sep 5 00:26:55.611157 containerd[1576]: time="2025-09-05T00:26:55.611094277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-f2lcw,Uid:b78cd636-fa7a-4c71-9603-229c7b087321,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479\"" Sep 5 00:26:55.625718 systemd-networkd[1473]: calif2d2f12f153: Link UP Sep 5 00:26:55.625953 systemd-networkd[1473]: calif2d2f12f153: Gained carrier Sep 5 00:26:55.629738 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:26:55.641979 containerd[1576]: time="2025-09-05T00:26:55.641898324Z" level=info msg="connecting to shim 504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc" address="unix:///run/containerd/s/cd8ff7af8c85e2ff8fcf73d6ccec9bbc8f32d77f701bdee692912f2d601580d3" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.492 [INFO][4485] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--dbd9bdd89--lcv9t-eth0 calico-apiserver-dbd9bdd89- calico-apiserver bd06cdff-0c5c-4409-a405-e23b4ac2ed93 883 0 2025-09-05 00:26:21 +0000 
UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dbd9bdd89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-dbd9bdd89-lcv9t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif2d2f12f153 [] [] }} ContainerID="3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-lcv9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--lcv9t-" Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.493 [INFO][4485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-lcv9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--lcv9t-eth0" Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.550 [INFO][4560] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" HandleID="k8s-pod-network.3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" Workload="localhost-k8s-calico--apiserver--dbd9bdd89--lcv9t-eth0" Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.550 [INFO][4560] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" HandleID="k8s-pod-network.3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" Workload="localhost-k8s-calico--apiserver--dbd9bdd89--lcv9t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-dbd9bdd89-lcv9t", "timestamp":"2025-09-05 00:26:55.550398032 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.550 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.550 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.550 [INFO][4560] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.564 [INFO][4560] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" host="localhost" Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.572 [INFO][4560] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.588 [INFO][4560] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.593 [INFO][4560] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.596 [INFO][4560] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.596 [INFO][4560] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" host="localhost" Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.600 [INFO][4560] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4 
Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.607 [INFO][4560] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" host="localhost" Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.616 [INFO][4560] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" host="localhost" Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.617 [INFO][4560] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" host="localhost" Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.617 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:26:55.644868 containerd[1576]: 2025-09-05 00:26:55.617 [INFO][4560] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" HandleID="k8s-pod-network.3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" Workload="localhost-k8s-calico--apiserver--dbd9bdd89--lcv9t-eth0" Sep 5 00:26:55.645672 containerd[1576]: 2025-09-05 00:26:55.620 [INFO][4485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-lcv9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--lcv9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dbd9bdd89--lcv9t-eth0", GenerateName:"calico-apiserver-dbd9bdd89-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd06cdff-0c5c-4409-a405-e23b4ac2ed93", 
ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dbd9bdd89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-dbd9bdd89-lcv9t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif2d2f12f153", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:55.645672 containerd[1576]: 2025-09-05 00:26:55.620 [INFO][4485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-lcv9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--lcv9t-eth0" Sep 5 00:26:55.645672 containerd[1576]: 2025-09-05 00:26:55.620 [INFO][4485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2d2f12f153 ContainerID="3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-lcv9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--lcv9t-eth0" Sep 5 00:26:55.645672 containerd[1576]: 2025-09-05 00:26:55.625 [INFO][4485] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-lcv9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--lcv9t-eth0" Sep 5 00:26:55.645672 containerd[1576]: 2025-09-05 00:26:55.625 [INFO][4485] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-lcv9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--lcv9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dbd9bdd89--lcv9t-eth0", GenerateName:"calico-apiserver-dbd9bdd89-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd06cdff-0c5c-4409-a405-e23b4ac2ed93", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dbd9bdd89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4", Pod:"calico-apiserver-dbd9bdd89-lcv9t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif2d2f12f153", MAC:"8a:bf:34:30:b5:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:55.645672 containerd[1576]: 2025-09-05 00:26:55.635 [INFO][4485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" Namespace="calico-apiserver" Pod="calico-apiserver-dbd9bdd89-lcv9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbd9bdd89--lcv9t-eth0" Sep 5 00:26:55.670222 systemd[1]: Started cri-containerd-504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc.scope - libcontainer container 504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc. Sep 5 00:26:55.676213 kubelet[2702]: E0905 00:26:55.675939 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:55.685082 kubelet[2702]: E0905 00:26:55.685058 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:55.700701 containerd[1576]: time="2025-09-05T00:26:55.699856401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dbd9bdd89-kwd5h,Uid:fc28e930-0dd7-4404-9939-e8102d8fc0f1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb\"" Sep 5 00:26:55.706445 containerd[1576]: time="2025-09-05T00:26:55.706316404Z" level=info msg="connecting to shim 3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4" address="unix:///run/containerd/s/df4de850a4086aaa315407aa8b02e9e3bfd226a44a2c3f9ac428cc65d6f9e778" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:26:55.712198 
systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:26:55.718709 kubelet[2702]: I0905 00:26:55.717577 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fk4jw" podStartSLOduration=49.717555135 podStartE2EDuration="49.717555135s" podCreationTimestamp="2025-09-05 00:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:26:55.696929027 +0000 UTC m=+53.452963794" watchObservedRunningTime="2025-09-05 00:26:55.717555135 +0000 UTC m=+53.473589922" Sep 5 00:26:55.757411 systemd-networkd[1473]: cali7f5de8cbf81: Link UP Sep 5 00:26:55.758906 systemd-networkd[1473]: cali7f5de8cbf81: Gained carrier Sep 5 00:26:55.769290 containerd[1576]: time="2025-09-05T00:26:55.768741152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f784bbc7f-ggrdb,Uid:8eb0645a-fa2c-4e5e-a73c-a9398ff81c61,Namespace:calico-system,Attempt:0,} returns sandbox id \"504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc\"" Sep 5 00:26:55.779224 systemd[1]: Started cri-containerd-3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4.scope - libcontainer container 3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4. 
Sep 5 00:26:55.793770 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.479 [INFO][4476] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qk2wl-eth0 csi-node-driver- calico-system 4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb 763 0 2025-09-05 00:26:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qk2wl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7f5de8cbf81 [] [] }} ContainerID="86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" Namespace="calico-system" Pod="csi-node-driver-qk2wl" WorkloadEndpoint="localhost-k8s-csi--node--driver--qk2wl-" Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.479 [INFO][4476] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" Namespace="calico-system" Pod="csi-node-driver-qk2wl" WorkloadEndpoint="localhost-k8s-csi--node--driver--qk2wl-eth0" Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.559 [INFO][4558] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" HandleID="k8s-pod-network.86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" Workload="localhost-k8s-csi--node--driver--qk2wl-eth0" Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.559 [INFO][4558] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" HandleID="k8s-pod-network.86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" Workload="localhost-k8s-csi--node--driver--qk2wl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e810), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qk2wl", "timestamp":"2025-09-05 00:26:55.559133254 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.559 [INFO][4558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.617 [INFO][4558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.617 [INFO][4558] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.664 [INFO][4558] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" host="localhost" Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.677 [INFO][4558] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.688 [INFO][4558] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.693 [INFO][4558] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.703 [INFO][4558] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" 
Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.703 [INFO][4558] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" host="localhost" Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.709 [INFO][4558] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.722 [INFO][4558] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" host="localhost" Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.741 [INFO][4558] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" host="localhost" Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.741 [INFO][4558] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" host="localhost" Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.741 [INFO][4558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:26:55.828209 containerd[1576]: 2025-09-05 00:26:55.741 [INFO][4558] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" HandleID="k8s-pod-network.86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" Workload="localhost-k8s-csi--node--driver--qk2wl-eth0" Sep 5 00:26:55.828831 containerd[1576]: 2025-09-05 00:26:55.747 [INFO][4476] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" Namespace="calico-system" Pod="csi-node-driver-qk2wl" WorkloadEndpoint="localhost-k8s-csi--node--driver--qk2wl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qk2wl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qk2wl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7f5de8cbf81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:55.828831 containerd[1576]: 2025-09-05 00:26:55.747 [INFO][4476] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" Namespace="calico-system" Pod="csi-node-driver-qk2wl" WorkloadEndpoint="localhost-k8s-csi--node--driver--qk2wl-eth0" Sep 5 00:26:55.828831 containerd[1576]: 2025-09-05 00:26:55.747 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f5de8cbf81 ContainerID="86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" Namespace="calico-system" Pod="csi-node-driver-qk2wl" WorkloadEndpoint="localhost-k8s-csi--node--driver--qk2wl-eth0" Sep 5 00:26:55.828831 containerd[1576]: 2025-09-05 00:26:55.764 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" Namespace="calico-system" Pod="csi-node-driver-qk2wl" WorkloadEndpoint="localhost-k8s-csi--node--driver--qk2wl-eth0" Sep 5 00:26:55.828831 containerd[1576]: 2025-09-05 00:26:55.764 [INFO][4476] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" Namespace="calico-system" Pod="csi-node-driver-qk2wl" WorkloadEndpoint="localhost-k8s-csi--node--driver--qk2wl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qk2wl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 26, 24, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c", Pod:"csi-node-driver-qk2wl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7f5de8cbf81", MAC:"d2:69:c0:5a:95:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:26:55.828831 containerd[1576]: 2025-09-05 00:26:55.824 [INFO][4476] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" Namespace="calico-system" Pod="csi-node-driver-qk2wl" WorkloadEndpoint="localhost-k8s-csi--node--driver--qk2wl-eth0" Sep 5 00:26:55.833475 containerd[1576]: time="2025-09-05T00:26:55.833439468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dbd9bdd89-lcv9t,Uid:bd06cdff-0c5c-4409-a405-e23b4ac2ed93,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4\"" Sep 5 00:26:56.127116 containerd[1576]: time="2025-09-05T00:26:56.126891520Z" level=info msg="connecting to shim 
86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c" address="unix:///run/containerd/s/d71c1401ee2850b3398ee8fe62e37f9062dc14c817c85b4fd31b7c8df0ae6108" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:26:56.164286 systemd[1]: Started cri-containerd-86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c.scope - libcontainer container 86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c. Sep 5 00:26:56.182017 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:26:56.201576 containerd[1576]: time="2025-09-05T00:26:56.201431830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qk2wl,Uid:4cfb4c65-a79b-4cf5-96ea-45ce0feb9ceb,Namespace:calico-system,Attempt:0,} returns sandbox id \"86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c\"" Sep 5 00:26:56.229807 containerd[1576]: time="2025-09-05T00:26:56.229747055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:56.231074 containerd[1576]: time="2025-09-05T00:26:56.231044219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 5 00:26:56.232515 containerd[1576]: time="2025-09-05T00:26:56.232465065Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:56.236039 containerd[1576]: time="2025-09-05T00:26:56.235562087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:26:56.236230 containerd[1576]: time="2025-09-05T00:26:56.236069158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with 
image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.661354727s" Sep 5 00:26:56.236230 containerd[1576]: time="2025-09-05T00:26:56.236104595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 5 00:26:56.239488 containerd[1576]: time="2025-09-05T00:26:56.239404357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 00:26:56.240118 containerd[1576]: time="2025-09-05T00:26:56.239993302Z" level=info msg="CreateContainer within sandbox \"7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 00:26:56.252282 containerd[1576]: time="2025-09-05T00:26:56.252222341Z" level=info msg="Container 4392da5aecf9daa08a0c6d3f5abfc51db38ce49af6cf26a0db0c8fe4280bc34b: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:26:56.266150 containerd[1576]: time="2025-09-05T00:26:56.265974658Z" level=info msg="CreateContainer within sandbox \"7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4392da5aecf9daa08a0c6d3f5abfc51db38ce49af6cf26a0db0c8fe4280bc34b\"" Sep 5 00:26:56.267530 containerd[1576]: time="2025-09-05T00:26:56.267478159Z" level=info msg="StartContainer for \"4392da5aecf9daa08a0c6d3f5abfc51db38ce49af6cf26a0db0c8fe4280bc34b\"" Sep 5 00:26:56.269317 containerd[1576]: time="2025-09-05T00:26:56.269275201Z" level=info msg="connecting to shim 4392da5aecf9daa08a0c6d3f5abfc51db38ce49af6cf26a0db0c8fe4280bc34b" address="unix:///run/containerd/s/1cf433ff1d3a0c6a671719bbe3757bb35b94587b292627db73762daefb4af8b3" protocol=ttrpc version=3 Sep 5 00:26:56.315276 
systemd[1]: Started cri-containerd-4392da5aecf9daa08a0c6d3f5abfc51db38ce49af6cf26a0db0c8fe4280bc34b.scope - libcontainer container 4392da5aecf9daa08a0c6d3f5abfc51db38ce49af6cf26a0db0c8fe4280bc34b. Sep 5 00:26:56.374548 containerd[1576]: time="2025-09-05T00:26:56.374495203Z" level=info msg="StartContainer for \"4392da5aecf9daa08a0c6d3f5abfc51db38ce49af6cf26a0db0c8fe4280bc34b\" returns successfully" Sep 5 00:26:56.691828 kubelet[2702]: E0905 00:26:56.691780 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:56.692352 kubelet[2702]: E0905 00:26:56.692048 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:26:56.832262 systemd-networkd[1473]: calib11df125276: Gained IPv6LL Sep 5 00:26:56.960223 systemd-networkd[1473]: calida654dd11aa: Gained IPv6LL Sep 5 00:26:57.024259 systemd-networkd[1473]: cali7f5de8cbf81: Gained IPv6LL Sep 5 00:26:57.128476 systemd[1]: Started sshd@8-10.0.0.14:22-10.0.0.1:48508.service - OpenSSH per-connection server daemon (10.0.0.1:48508). Sep 5 00:26:57.199551 sshd[4838]: Accepted publickey for core from 10.0.0.1 port 48508 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:26:57.201567 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:26:57.207176 systemd-logind[1551]: New session 9 of user core. Sep 5 00:26:57.214168 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 5 00:26:57.217149 systemd-networkd[1473]: calif2d2f12f153: Gained IPv6LL Sep 5 00:26:57.350110 sshd[4841]: Connection closed by 10.0.0.1 port 48508 Sep 5 00:26:57.350494 sshd-session[4838]: pam_unix(sshd:session): session closed for user core Sep 5 00:26:57.355529 systemd[1]: sshd@8-10.0.0.14:22-10.0.0.1:48508.service: Deactivated successfully. Sep 5 00:26:57.357576 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:26:57.358530 systemd-logind[1551]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:26:57.359786 systemd-logind[1551]: Removed session 9. Sep 5 00:26:57.536304 systemd-networkd[1473]: cali4de7132ffee: Gained IPv6LL Sep 5 00:26:59.562609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2941317244.mount: Deactivated successfully. Sep 5 00:27:00.068133 containerd[1576]: time="2025-09-05T00:27:00.067906850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:00.069278 containerd[1576]: time="2025-09-05T00:27:00.069216086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 5 00:27:00.070852 containerd[1576]: time="2025-09-05T00:27:00.070811711Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:00.073433 containerd[1576]: time="2025-09-05T00:27:00.073368738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:00.073891 containerd[1576]: time="2025-09-05T00:27:00.073864217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag 
\"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.834415637s" Sep 5 00:27:00.073960 containerd[1576]: time="2025-09-05T00:27:00.073892741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 5 00:27:00.075072 containerd[1576]: time="2025-09-05T00:27:00.074992624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:27:00.075988 containerd[1576]: time="2025-09-05T00:27:00.075949309Z" level=info msg="CreateContainer within sandbox \"f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 00:27:00.087448 containerd[1576]: time="2025-09-05T00:27:00.087383724Z" level=info msg="Container 8333b788e09a57288ba4fb08fd658f786e3324c2e9df48cf36d5826e0e745f82: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:27:00.096755 containerd[1576]: time="2025-09-05T00:27:00.096692871Z" level=info msg="CreateContainer within sandbox \"f2d21e560bbf0800ae3a9d9700d1a0dd4834466828da4a6f732b4d54b23f2479\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8333b788e09a57288ba4fb08fd658f786e3324c2e9df48cf36d5826e0e745f82\"" Sep 5 00:27:00.097301 containerd[1576]: time="2025-09-05T00:27:00.097253973Z" level=info msg="StartContainer for \"8333b788e09a57288ba4fb08fd658f786e3324c2e9df48cf36d5826e0e745f82\"" Sep 5 00:27:00.098771 containerd[1576]: time="2025-09-05T00:27:00.098733219Z" level=info msg="connecting to shim 8333b788e09a57288ba4fb08fd658f786e3324c2e9df48cf36d5826e0e745f82" address="unix:///run/containerd/s/7fd5739644ed31be803116d50f942e8c422b6dc2bb3cd5d6781f22a155e2684c" protocol=ttrpc version=3 Sep 5 00:27:00.126392 systemd[1]: Started 
cri-containerd-8333b788e09a57288ba4fb08fd658f786e3324c2e9df48cf36d5826e0e745f82.scope - libcontainer container 8333b788e09a57288ba4fb08fd658f786e3324c2e9df48cf36d5826e0e745f82. Sep 5 00:27:00.208731 containerd[1576]: time="2025-09-05T00:27:00.208665653Z" level=info msg="StartContainer for \"8333b788e09a57288ba4fb08fd658f786e3324c2e9df48cf36d5826e0e745f82\" returns successfully" Sep 5 00:27:00.724691 kubelet[2702]: I0905 00:27:00.724201 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-f2lcw" podStartSLOduration=33.262586043 podStartE2EDuration="37.724173785s" podCreationTimestamp="2025-09-05 00:26:23 +0000 UTC" firstStartedPulling="2025-09-05 00:26:55.613249 +0000 UTC m=+53.369283777" lastFinishedPulling="2025-09-05 00:27:00.074836742 +0000 UTC m=+57.830871519" observedRunningTime="2025-09-05 00:27:00.72262067 +0000 UTC m=+58.478655447" watchObservedRunningTime="2025-09-05 00:27:00.724173785 +0000 UTC m=+58.480208562" Sep 5 00:27:00.806300 containerd[1576]: time="2025-09-05T00:27:00.806236143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8333b788e09a57288ba4fb08fd658f786e3324c2e9df48cf36d5826e0e745f82\" id:\"1b154342480b433189207a8f65a83a29595934abd7534a9b9d418d163338c585\" pid:4918 exited_at:{seconds:1757032020 nanos:805587682}" Sep 5 00:27:02.364868 systemd[1]: Started sshd@9-10.0.0.14:22-10.0.0.1:46294.service - OpenSSH per-connection server daemon (10.0.0.1:46294). Sep 5 00:27:02.444954 sshd[4940]: Accepted publickey for core from 10.0.0.1 port 46294 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:02.447396 sshd-session[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:02.452859 systemd-logind[1551]: New session 10 of user core. Sep 5 00:27:02.467183 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 5 00:27:02.633159 sshd[4943]: Connection closed by 10.0.0.1 port 46294 Sep 5 00:27:02.634205 sshd-session[4940]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:02.640854 systemd[1]: sshd@9-10.0.0.14:22-10.0.0.1:46294.service: Deactivated successfully. Sep 5 00:27:02.643402 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:27:02.644423 systemd-logind[1551]: Session 10 logged out. Waiting for processes to exit. Sep 5 00:27:02.646642 systemd-logind[1551]: Removed session 10. Sep 5 00:27:03.862166 containerd[1576]: time="2025-09-05T00:27:03.862082165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:03.863571 containerd[1576]: time="2025-09-05T00:27:03.863499142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 5 00:27:03.865040 containerd[1576]: time="2025-09-05T00:27:03.864953210Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:03.868220 containerd[1576]: time="2025-09-05T00:27:03.868150527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:03.868964 containerd[1576]: time="2025-09-05T00:27:03.868908998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.793833246s" Sep 5 00:27:03.868964 containerd[1576]: 
time="2025-09-05T00:27:03.868948424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 5 00:27:03.871788 containerd[1576]: time="2025-09-05T00:27:03.871754593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 00:27:03.877783 containerd[1576]: time="2025-09-05T00:27:03.877737299Z" level=info msg="CreateContainer within sandbox \"d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:27:03.888979 containerd[1576]: time="2025-09-05T00:27:03.888920124Z" level=info msg="Container 7b30f84259c8e7431d0fe2f900881726a7e3f7ee52574bb81182018572dc5e16: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:27:03.900602 containerd[1576]: time="2025-09-05T00:27:03.900541058Z" level=info msg="CreateContainer within sandbox \"d4d4544064e0ac098eb7dd4e256e34fded3ae5b0066b8f6a52229179c45e24eb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7b30f84259c8e7431d0fe2f900881726a7e3f7ee52574bb81182018572dc5e16\"" Sep 5 00:27:03.901123 containerd[1576]: time="2025-09-05T00:27:03.901092047Z" level=info msg="StartContainer for \"7b30f84259c8e7431d0fe2f900881726a7e3f7ee52574bb81182018572dc5e16\"" Sep 5 00:27:03.902214 containerd[1576]: time="2025-09-05T00:27:03.902191808Z" level=info msg="connecting to shim 7b30f84259c8e7431d0fe2f900881726a7e3f7ee52574bb81182018572dc5e16" address="unix:///run/containerd/s/58a8f70eb6ec8919546684115030200def9de2c23f1a74eb99421f264cc3a3b7" protocol=ttrpc version=3 Sep 5 00:27:03.961165 systemd[1]: Started cri-containerd-7b30f84259c8e7431d0fe2f900881726a7e3f7ee52574bb81182018572dc5e16.scope - libcontainer container 7b30f84259c8e7431d0fe2f900881726a7e3f7ee52574bb81182018572dc5e16. 
Sep 5 00:27:04.123768 containerd[1576]: time="2025-09-05T00:27:04.123569983Z" level=info msg="StartContainer for \"7b30f84259c8e7431d0fe2f900881726a7e3f7ee52574bb81182018572dc5e16\" returns successfully" Sep 5 00:27:04.887097 kubelet[2702]: I0905 00:27:04.887029 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dbd9bdd89-kwd5h" podStartSLOduration=35.724639789 podStartE2EDuration="43.886995666s" podCreationTimestamp="2025-09-05 00:26:21 +0000 UTC" firstStartedPulling="2025-09-05 00:26:55.709109926 +0000 UTC m=+53.465144693" lastFinishedPulling="2025-09-05 00:27:03.871465793 +0000 UTC m=+61.627500570" observedRunningTime="2025-09-05 00:27:04.886389512 +0000 UTC m=+62.642424309" watchObservedRunningTime="2025-09-05 00:27:04.886995666 +0000 UTC m=+62.643030443" Sep 5 00:27:07.654889 systemd[1]: Started sshd@10-10.0.0.14:22-10.0.0.1:46308.service - OpenSSH per-connection server daemon (10.0.0.1:46308). Sep 5 00:27:07.866460 sshd[5012]: Accepted publickey for core from 10.0.0.1 port 46308 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:07.874354 sshd-session[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:07.881097 systemd-logind[1551]: New session 11 of user core. Sep 5 00:27:07.889186 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 00:27:08.064306 sshd[5015]: Connection closed by 10.0.0.1 port 46308 Sep 5 00:27:08.064977 sshd-session[5012]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:08.080603 systemd[1]: sshd@10-10.0.0.14:22-10.0.0.1:46308.service: Deactivated successfully. Sep 5 00:27:08.084680 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 00:27:08.086837 systemd-logind[1551]: Session 11 logged out. Waiting for processes to exit. Sep 5 00:27:08.091610 systemd[1]: Started sshd@11-10.0.0.14:22-10.0.0.1:46318.service - OpenSSH per-connection server daemon (10.0.0.1:46318). 
Sep 5 00:27:08.093186 systemd-logind[1551]: Removed session 11. Sep 5 00:27:08.161797 sshd[5032]: Accepted publickey for core from 10.0.0.1 port 46318 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:08.163777 sshd-session[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:08.169421 systemd-logind[1551]: New session 12 of user core. Sep 5 00:27:08.180271 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 00:27:09.354859 kubelet[2702]: E0905 00:27:09.354786 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:27:09.798221 sshd[5036]: Connection closed by 10.0.0.1 port 46318 Sep 5 00:27:09.815538 systemd[1]: Started sshd@12-10.0.0.14:22-10.0.0.1:46328.service - OpenSSH per-connection server daemon (10.0.0.1:46328). Sep 5 00:27:09.825070 sshd-session[5032]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:09.831180 systemd[1]: sshd@11-10.0.0.14:22-10.0.0.1:46318.service: Deactivated successfully. Sep 5 00:27:09.834632 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 00:27:09.838220 systemd-logind[1551]: Session 12 logged out. Waiting for processes to exit. Sep 5 00:27:09.840604 systemd-logind[1551]: Removed session 12. Sep 5 00:27:09.877621 sshd[5047]: Accepted publickey for core from 10.0.0.1 port 46328 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:09.879818 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:09.886745 systemd-logind[1551]: New session 13 of user core. Sep 5 00:27:09.894178 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 5 00:27:10.362180 sshd[5057]: Connection closed by 10.0.0.1 port 46328 Sep 5 00:27:10.363097 sshd-session[5047]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:10.367092 systemd[1]: sshd@12-10.0.0.14:22-10.0.0.1:46328.service: Deactivated successfully. Sep 5 00:27:10.370177 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 00:27:10.372171 systemd-logind[1551]: Session 13 logged out. Waiting for processes to exit. Sep 5 00:27:10.373935 systemd-logind[1551]: Removed session 13. Sep 5 00:27:12.722843 containerd[1576]: time="2025-09-05T00:27:12.722749975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:12.808886 containerd[1576]: time="2025-09-05T00:27:12.808720122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 5 00:27:12.839988 containerd[1576]: time="2025-09-05T00:27:12.839913142Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:12.910489 containerd[1576]: time="2025-09-05T00:27:12.910398949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:12.911226 containerd[1576]: time="2025-09-05T00:27:12.911179872Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 9.039274476s" Sep 5 00:27:12.911226 
containerd[1576]: time="2025-09-05T00:27:12.911222134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 5 00:27:12.912826 containerd[1576]: time="2025-09-05T00:27:12.912768570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:27:12.929162 containerd[1576]: time="2025-09-05T00:27:12.929115969Z" level=info msg="CreateContainer within sandbox \"504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 00:27:13.182716 containerd[1576]: time="2025-09-05T00:27:13.182574176Z" level=info msg="Container a1a6df97a3ab342d187af65f37b402e1596eb3af1f40bed2b3ca1a5f85e2faaf: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:27:13.386481 containerd[1576]: time="2025-09-05T00:27:13.386410035Z" level=info msg="CreateContainer within sandbox \"504b0b2747c79e68468ea8190c58573b7e4c78dd22dc9a136c0e2d10491badfc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a1a6df97a3ab342d187af65f37b402e1596eb3af1f40bed2b3ca1a5f85e2faaf\"" Sep 5 00:27:13.387572 containerd[1576]: time="2025-09-05T00:27:13.387498167Z" level=info msg="StartContainer for \"a1a6df97a3ab342d187af65f37b402e1596eb3af1f40bed2b3ca1a5f85e2faaf\"" Sep 5 00:27:13.389133 containerd[1576]: time="2025-09-05T00:27:13.389102843Z" level=info msg="connecting to shim a1a6df97a3ab342d187af65f37b402e1596eb3af1f40bed2b3ca1a5f85e2faaf" address="unix:///run/containerd/s/cd8ff7af8c85e2ff8fcf73d6ccec9bbc8f32d77f701bdee692912f2d601580d3" protocol=ttrpc version=3 Sep 5 00:27:13.422329 systemd[1]: Started cri-containerd-a1a6df97a3ab342d187af65f37b402e1596eb3af1f40bed2b3ca1a5f85e2faaf.scope - libcontainer container a1a6df97a3ab342d187af65f37b402e1596eb3af1f40bed2b3ca1a5f85e2faaf. 
Sep 5 00:27:13.742847 containerd[1576]: time="2025-09-05T00:27:13.742781655Z" level=info msg="StartContainer for \"a1a6df97a3ab342d187af65f37b402e1596eb3af1f40bed2b3ca1a5f85e2faaf\" returns successfully" Sep 5 00:27:13.876480 kubelet[2702]: I0905 00:27:13.876380 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f784bbc7f-ggrdb" podStartSLOduration=32.738527023 podStartE2EDuration="49.876354667s" podCreationTimestamp="2025-09-05 00:26:24 +0000 UTC" firstStartedPulling="2025-09-05 00:26:55.774538862 +0000 UTC m=+53.530573639" lastFinishedPulling="2025-09-05 00:27:12.912366506 +0000 UTC m=+70.668401283" observedRunningTime="2025-09-05 00:27:13.87311183 +0000 UTC m=+71.629146628" watchObservedRunningTime="2025-09-05 00:27:13.876354667 +0000 UTC m=+71.632389444" Sep 5 00:27:13.886330 containerd[1576]: time="2025-09-05T00:27:13.886237824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a1a6df97a3ab342d187af65f37b402e1596eb3af1f40bed2b3ca1a5f85e2faaf\" id:\"71b3bf99955aef52a9153266d4900dfbf91836ac793acbf74e31b67337e24776\" pid:5129 exited_at:{seconds:1757032033 nanos:885756679}" Sep 5 00:27:13.892502 containerd[1576]: time="2025-09-05T00:27:13.892436966Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:13.894893 containerd[1576]: time="2025-09-05T00:27:13.894222149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 00:27:13.896553 containerd[1576]: time="2025-09-05T00:27:13.896520930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 983.721171ms" Sep 5 00:27:13.896553 containerd[1576]: time="2025-09-05T00:27:13.896553984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 5 00:27:13.898740 containerd[1576]: time="2025-09-05T00:27:13.898679902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 00:27:13.900031 containerd[1576]: time="2025-09-05T00:27:13.899971425Z" level=info msg="CreateContainer within sandbox \"3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:27:13.915263 containerd[1576]: time="2025-09-05T00:27:13.915135830Z" level=info msg="Container e561c566730a1a06178c800951bb9d171c0e71d94b33e0acbd088f5f6c673788: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:27:13.928322 containerd[1576]: time="2025-09-05T00:27:13.928271622Z" level=info msg="CreateContainer within sandbox \"3682aa3669056cf656d0ea5da78c9d7760b601a4d579b5d9725eac4f5a75bbf4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e561c566730a1a06178c800951bb9d171c0e71d94b33e0acbd088f5f6c673788\"" Sep 5 00:27:13.929229 containerd[1576]: time="2025-09-05T00:27:13.929200478Z" level=info msg="StartContainer for \"e561c566730a1a06178c800951bb9d171c0e71d94b33e0acbd088f5f6c673788\"" Sep 5 00:27:13.930504 containerd[1576]: time="2025-09-05T00:27:13.930474899Z" level=info msg="connecting to shim e561c566730a1a06178c800951bb9d171c0e71d94b33e0acbd088f5f6c673788" address="unix:///run/containerd/s/df4de850a4086aaa315407aa8b02e9e3bfd226a44a2c3f9ac428cc65d6f9e778" protocol=ttrpc version=3 Sep 5 00:27:13.966265 systemd[1]: Started cri-containerd-e561c566730a1a06178c800951bb9d171c0e71d94b33e0acbd088f5f6c673788.scope - libcontainer container 
e561c566730a1a06178c800951bb9d171c0e71d94b33e0acbd088f5f6c673788. Sep 5 00:27:14.170831 containerd[1576]: time="2025-09-05T00:27:14.170317611Z" level=info msg="StartContainer for \"e561c566730a1a06178c800951bb9d171c0e71d94b33e0acbd088f5f6c673788\" returns successfully" Sep 5 00:27:14.886558 kubelet[2702]: I0905 00:27:14.886432 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dbd9bdd89-lcv9t" podStartSLOduration=35.82347177 podStartE2EDuration="53.886413654s" podCreationTimestamp="2025-09-05 00:26:21 +0000 UTC" firstStartedPulling="2025-09-05 00:26:55.834726382 +0000 UTC m=+53.590761159" lastFinishedPulling="2025-09-05 00:27:13.897668266 +0000 UTC m=+71.653703043" observedRunningTime="2025-09-05 00:27:14.886246643 +0000 UTC m=+72.642281420" watchObservedRunningTime="2025-09-05 00:27:14.886413654 +0000 UTC m=+72.642448431" Sep 5 00:27:15.167526 systemd[1]: Started sshd@13-10.0.0.14:22-10.0.0.1:42842.service - OpenSSH per-connection server daemon (10.0.0.1:42842). Sep 5 00:27:15.259935 sshd[5185]: Accepted publickey for core from 10.0.0.1 port 42842 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:15.324073 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:15.330239 systemd-logind[1551]: New session 14 of user core. Sep 5 00:27:15.336182 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 00:27:15.533560 sshd[5189]: Connection closed by 10.0.0.1 port 42842 Sep 5 00:27:15.533899 sshd-session[5185]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:15.538363 systemd[1]: sshd@13-10.0.0.14:22-10.0.0.1:42842.service: Deactivated successfully. Sep 5 00:27:15.540730 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 00:27:15.541518 systemd-logind[1551]: Session 14 logged out. Waiting for processes to exit. Sep 5 00:27:15.542704 systemd-logind[1551]: Removed session 14. 
Sep 5 00:27:17.857877 containerd[1576]: time="2025-09-05T00:27:17.857776913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:17.889554 containerd[1576]: time="2025-09-05T00:27:17.889482975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 5 00:27:17.940153 containerd[1576]: time="2025-09-05T00:27:17.940046965Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:17.960071 containerd[1576]: time="2025-09-05T00:27:17.959970615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:17.960792 containerd[1576]: time="2025-09-05T00:27:17.960734120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 4.062007327s" Sep 5 00:27:17.960792 containerd[1576]: time="2025-09-05T00:27:17.960764097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 5 00:27:17.962034 containerd[1576]: time="2025-09-05T00:27:17.961954210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 00:27:17.971760 containerd[1576]: time="2025-09-05T00:27:17.971714663Z" level=info msg="CreateContainer within sandbox \"86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 00:27:18.240359 containerd[1576]: time="2025-09-05T00:27:18.240298041Z" level=info msg="Container 01f371746042b9ac7ff9b47148a318f7ce00d8072c5efdd0b73c796593f713c2: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:27:19.037325 containerd[1576]: time="2025-09-05T00:27:19.037223840Z" level=info msg="CreateContainer within sandbox \"86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"01f371746042b9ac7ff9b47148a318f7ce00d8072c5efdd0b73c796593f713c2\"" Sep 5 00:27:19.038288 containerd[1576]: time="2025-09-05T00:27:19.038253122Z" level=info msg="StartContainer for \"01f371746042b9ac7ff9b47148a318f7ce00d8072c5efdd0b73c796593f713c2\"" Sep 5 00:27:19.040394 containerd[1576]: time="2025-09-05T00:27:19.040358316Z" level=info msg="connecting to shim 01f371746042b9ac7ff9b47148a318f7ce00d8072c5efdd0b73c796593f713c2" address="unix:///run/containerd/s/d71c1401ee2850b3398ee8fe62e37f9062dc14c817c85b4fd31b7c8df0ae6108" protocol=ttrpc version=3 Sep 5 00:27:19.063627 systemd[1]: Started cri-containerd-01f371746042b9ac7ff9b47148a318f7ce00d8072c5efdd0b73c796593f713c2.scope - libcontainer container 01f371746042b9ac7ff9b47148a318f7ce00d8072c5efdd0b73c796593f713c2. Sep 5 00:27:19.354753 kubelet[2702]: E0905 00:27:19.354614 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:27:19.504229 containerd[1576]: time="2025-09-05T00:27:19.504171993Z" level=info msg="StartContainer for \"01f371746042b9ac7ff9b47148a318f7ce00d8072c5efdd0b73c796593f713c2\" returns successfully" Sep 5 00:27:20.549714 systemd[1]: Started sshd@14-10.0.0.14:22-10.0.0.1:37044.service - OpenSSH per-connection server daemon (10.0.0.1:37044). 
Sep 5 00:27:20.650835 sshd[5242]: Accepted publickey for core from 10.0.0.1 port 37044 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:20.652524 sshd-session[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:20.657094 systemd-logind[1551]: New session 15 of user core. Sep 5 00:27:20.665195 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 00:27:21.123375 sshd[5245]: Connection closed by 10.0.0.1 port 37044 Sep 5 00:27:21.123743 sshd-session[5242]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:21.129496 systemd[1]: sshd@14-10.0.0.14:22-10.0.0.1:37044.service: Deactivated successfully. Sep 5 00:27:21.132363 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 00:27:21.133341 systemd-logind[1551]: Session 15 logged out. Waiting for processes to exit. Sep 5 00:27:21.134623 systemd-logind[1551]: Removed session 15. Sep 5 00:27:22.763962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94978960.mount: Deactivated successfully. 
Sep 5 00:27:22.788802 containerd[1576]: time="2025-09-05T00:27:22.788744575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:22.789585 containerd[1576]: time="2025-09-05T00:27:22.789534476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 5 00:27:22.791090 containerd[1576]: time="2025-09-05T00:27:22.791065796Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:22.793249 containerd[1576]: time="2025-09-05T00:27:22.793220988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:22.793801 containerd[1576]: time="2025-09-05T00:27:22.793769117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.831783917s" Sep 5 00:27:22.793857 containerd[1576]: time="2025-09-05T00:27:22.793804405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 5 00:27:22.794819 containerd[1576]: time="2025-09-05T00:27:22.794796002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 00:27:22.795804 containerd[1576]: time="2025-09-05T00:27:22.795775025Z" level=info msg="CreateContainer within sandbox 
\"7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 00:27:22.804779 containerd[1576]: time="2025-09-05T00:27:22.804733564Z" level=info msg="Container 83eab7923520afb36759cb24eda1916d1141d7b04b7ec1edf4d64171e238f428: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:27:22.813694 containerd[1576]: time="2025-09-05T00:27:22.813641055Z" level=info msg="CreateContainer within sandbox \"7503a73d33d0d11375e02d6d102e2294d493692995bf26781fc888c2feac6a17\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"83eab7923520afb36759cb24eda1916d1141d7b04b7ec1edf4d64171e238f428\"" Sep 5 00:27:22.814408 containerd[1576]: time="2025-09-05T00:27:22.814376762Z" level=info msg="StartContainer for \"83eab7923520afb36759cb24eda1916d1141d7b04b7ec1edf4d64171e238f428\"" Sep 5 00:27:22.815665 containerd[1576]: time="2025-09-05T00:27:22.815633797Z" level=info msg="connecting to shim 83eab7923520afb36759cb24eda1916d1141d7b04b7ec1edf4d64171e238f428" address="unix:///run/containerd/s/1cf433ff1d3a0c6a671719bbe3757bb35b94587b292627db73762daefb4af8b3" protocol=ttrpc version=3 Sep 5 00:27:22.837193 systemd[1]: Started cri-containerd-83eab7923520afb36759cb24eda1916d1141d7b04b7ec1edf4d64171e238f428.scope - libcontainer container 83eab7923520afb36759cb24eda1916d1141d7b04b7ec1edf4d64171e238f428. 
Sep 5 00:27:22.891937 containerd[1576]: time="2025-09-05T00:27:22.891876748Z" level=info msg="StartContainer for \"83eab7923520afb36759cb24eda1916d1141d7b04b7ec1edf4d64171e238f428\" returns successfully" Sep 5 00:27:24.110043 containerd[1576]: time="2025-09-05T00:27:24.109954802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2f3fefb34074de20091c9dc54e607ea407a38f864d711a89e78464c5a7052ac\" id:\"6e23816e844b420bf01d6f59646b6bdfb2d1bd8ecded4ff785a4a9adf1708474\" pid:5315 exit_status:1 exited_at:{seconds:1757032044 nanos:108887553}" Sep 5 00:27:24.114400 kubelet[2702]: I0905 00:27:24.113798 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-cc6d9b955-2lwjt" podStartSLOduration=2.893576059 podStartE2EDuration="32.113776196s" podCreationTimestamp="2025-09-05 00:26:52 +0000 UTC" firstStartedPulling="2025-09-05 00:26:53.574415861 +0000 UTC m=+51.330450638" lastFinishedPulling="2025-09-05 00:27:22.794615997 +0000 UTC m=+80.550650775" observedRunningTime="2025-09-05 00:27:24.111312397 +0000 UTC m=+81.867347174" watchObservedRunningTime="2025-09-05 00:27:24.113776196 +0000 UTC m=+81.869810973" Sep 5 00:27:25.858610 containerd[1576]: time="2025-09-05T00:27:25.858535391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:25.859633 containerd[1576]: time="2025-09-05T00:27:25.859598492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 5 00:27:25.862403 containerd[1576]: time="2025-09-05T00:27:25.862354187Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:25.866559 containerd[1576]: time="2025-09-05T00:27:25.866515106Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:27:25.867623 containerd[1576]: time="2025-09-05T00:27:25.867571313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.072696521s" Sep 5 00:27:25.867747 containerd[1576]: time="2025-09-05T00:27:25.867628142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 5 00:27:25.871673 containerd[1576]: time="2025-09-05T00:27:25.871595120Z" level=info msg="CreateContainer within sandbox \"86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 00:27:25.891164 containerd[1576]: time="2025-09-05T00:27:25.891111573Z" level=info msg="Container 521c396d9d8d41ce650a8727642351dbbce75d114743b0852ff971a7138473e0: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:27:25.913434 containerd[1576]: time="2025-09-05T00:27:25.913333496Z" level=info msg="CreateContainer within sandbox \"86baf2631608abbfe28832c221e12547aaeb2cfc5c2a6ea9253c235daf44999c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"521c396d9d8d41ce650a8727642351dbbce75d114743b0852ff971a7138473e0\"" Sep 5 00:27:25.914358 containerd[1576]: time="2025-09-05T00:27:25.914166587Z" level=info msg="StartContainer for \"521c396d9d8d41ce650a8727642351dbbce75d114743b0852ff971a7138473e0\"" Sep 5 00:27:25.944637 containerd[1576]: 
time="2025-09-05T00:27:25.944545251Z" level=info msg="connecting to shim 521c396d9d8d41ce650a8727642351dbbce75d114743b0852ff971a7138473e0" address="unix:///run/containerd/s/d71c1401ee2850b3398ee8fe62e37f9062dc14c817c85b4fd31b7c8df0ae6108" protocol=ttrpc version=3 Sep 5 00:27:25.985392 systemd[1]: Started cri-containerd-521c396d9d8d41ce650a8727642351dbbce75d114743b0852ff971a7138473e0.scope - libcontainer container 521c396d9d8d41ce650a8727642351dbbce75d114743b0852ff971a7138473e0. Sep 5 00:27:26.141919 systemd[1]: Started sshd@15-10.0.0.14:22-10.0.0.1:37046.service - OpenSSH per-connection server daemon (10.0.0.1:37046). Sep 5 00:27:26.279204 containerd[1576]: time="2025-09-05T00:27:26.279127382Z" level=info msg="StartContainer for \"521c396d9d8d41ce650a8727642351dbbce75d114743b0852ff971a7138473e0\" returns successfully" Sep 5 00:27:26.334813 sshd[5371]: Accepted publickey for core from 10.0.0.1 port 37046 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:26.336945 sshd-session[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:26.344824 systemd-logind[1551]: New session 16 of user core. Sep 5 00:27:26.359244 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 00:27:26.461627 kubelet[2702]: I0905 00:27:26.461581 2702 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 00:27:26.462254 kubelet[2702]: I0905 00:27:26.461659 2702 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 00:27:26.632872 sshd[5374]: Connection closed by 10.0.0.1 port 37046 Sep 5 00:27:26.633477 sshd-session[5371]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:26.642428 systemd[1]: sshd@15-10.0.0.14:22-10.0.0.1:37046.service: Deactivated successfully. 
Sep 5 00:27:26.645728 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 00:27:26.646855 systemd-logind[1551]: Session 16 logged out. Waiting for processes to exit. Sep 5 00:27:26.649991 systemd-logind[1551]: Removed session 16. Sep 5 00:27:26.889186 kubelet[2702]: I0905 00:27:26.888965 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qk2wl" podStartSLOduration=33.223934175 podStartE2EDuration="1m2.888843132s" podCreationTimestamp="2025-09-05 00:26:24 +0000 UTC" firstStartedPulling="2025-09-05 00:26:56.203948002 +0000 UTC m=+53.959982779" lastFinishedPulling="2025-09-05 00:27:25.868856959 +0000 UTC m=+83.624891736" observedRunningTime="2025-09-05 00:27:26.888136713 +0000 UTC m=+84.644171490" watchObservedRunningTime="2025-09-05 00:27:26.888843132 +0000 UTC m=+84.644877909" Sep 5 00:27:30.911162 containerd[1576]: time="2025-09-05T00:27:30.911108340Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8333b788e09a57288ba4fb08fd658f786e3324c2e9df48cf36d5826e0e745f82\" id:\"6ff7886e49f84320a51ffd54271d10b61ca9eac8f7fd8b0f1ca5ede4b6f1eb78\" pid:5426 exited_at:{seconds:1757032050 nanos:893223135}" Sep 5 00:27:30.911162 containerd[1576]: time="2025-09-05T00:27:30.911171049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8333b788e09a57288ba4fb08fd658f786e3324c2e9df48cf36d5826e0e745f82\" id:\"585278bf8f316289abe3ea8f5b2eb03132a237d442314468eba585d56d326161\" pid:5401 exited_at:{seconds:1757032050 nanos:893642124}" Sep 5 00:27:31.648067 systemd[1]: Started sshd@16-10.0.0.14:22-10.0.0.1:34344.service - OpenSSH per-connection server daemon (10.0.0.1:34344). 
Sep 5 00:27:31.735610 sshd[5439]: Accepted publickey for core from 10.0.0.1 port 34344 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:31.737456 sshd-session[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:31.742159 systemd-logind[1551]: New session 17 of user core. Sep 5 00:27:31.753313 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 00:27:31.910986 sshd[5442]: Connection closed by 10.0.0.1 port 34344 Sep 5 00:27:31.911494 sshd-session[5439]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:31.920968 systemd[1]: sshd@16-10.0.0.14:22-10.0.0.1:34344.service: Deactivated successfully. Sep 5 00:27:31.923074 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 00:27:31.923944 systemd-logind[1551]: Session 17 logged out. Waiting for processes to exit. Sep 5 00:27:31.927312 systemd[1]: Started sshd@17-10.0.0.14:22-10.0.0.1:34346.service - OpenSSH per-connection server daemon (10.0.0.1:34346). Sep 5 00:27:31.928106 systemd-logind[1551]: Removed session 17. Sep 5 00:27:31.982141 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 34346 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:31.983904 sshd-session[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:31.988508 systemd-logind[1551]: New session 18 of user core. Sep 5 00:27:31.998176 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 00:27:33.043806 sshd[5458]: Connection closed by 10.0.0.1 port 34346 Sep 5 00:27:33.044142 sshd-session[5455]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:33.053981 systemd[1]: sshd@17-10.0.0.14:22-10.0.0.1:34346.service: Deactivated successfully. Sep 5 00:27:33.056189 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 00:27:33.057074 systemd-logind[1551]: Session 18 logged out. Waiting for processes to exit. 
Sep 5 00:27:33.060963 systemd[1]: Started sshd@18-10.0.0.14:22-10.0.0.1:34362.service - OpenSSH per-connection server daemon (10.0.0.1:34362). Sep 5 00:27:33.061695 systemd-logind[1551]: Removed session 18. Sep 5 00:27:33.139511 sshd[5470]: Accepted publickey for core from 10.0.0.1 port 34362 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:33.141853 sshd-session[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:33.147498 systemd-logind[1551]: New session 19 of user core. Sep 5 00:27:33.156193 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 00:27:33.355280 kubelet[2702]: E0905 00:27:33.355126 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:27:33.724191 sshd[5473]: Connection closed by 10.0.0.1 port 34362 Sep 5 00:27:33.725088 sshd-session[5470]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:33.739859 systemd[1]: Started sshd@19-10.0.0.14:22-10.0.0.1:34370.service - OpenSSH per-connection server daemon (10.0.0.1:34370). Sep 5 00:27:33.740463 systemd[1]: sshd@18-10.0.0.14:22-10.0.0.1:34362.service: Deactivated successfully. Sep 5 00:27:33.743985 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 00:27:33.746400 systemd-logind[1551]: Session 19 logged out. Waiting for processes to exit. Sep 5 00:27:33.751064 systemd-logind[1551]: Removed session 19. Sep 5 00:27:33.811228 sshd[5496]: Accepted publickey for core from 10.0.0.1 port 34370 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:33.813410 sshd-session[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:33.818819 systemd-logind[1551]: New session 20 of user core. Sep 5 00:27:33.825282 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 5 00:27:34.133902 sshd[5502]: Connection closed by 10.0.0.1 port 34370 Sep 5 00:27:34.134314 sshd-session[5496]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:34.146569 systemd[1]: sshd@19-10.0.0.14:22-10.0.0.1:34370.service: Deactivated successfully. Sep 5 00:27:34.149562 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 00:27:34.151049 systemd-logind[1551]: Session 20 logged out. Waiting for processes to exit. Sep 5 00:27:34.157308 systemd[1]: Started sshd@20-10.0.0.14:22-10.0.0.1:34380.service - OpenSSH per-connection server daemon (10.0.0.1:34380). Sep 5 00:27:34.159157 systemd-logind[1551]: Removed session 20. Sep 5 00:27:34.226889 sshd[5514]: Accepted publickey for core from 10.0.0.1 port 34380 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:34.228995 sshd-session[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:34.233756 systemd-logind[1551]: New session 21 of user core. Sep 5 00:27:34.248314 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 00:27:34.354830 kubelet[2702]: E0905 00:27:34.354769 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:27:34.376612 sshd[5517]: Connection closed by 10.0.0.1 port 34380 Sep 5 00:27:34.377069 sshd-session[5514]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:34.383103 systemd[1]: sshd@20-10.0.0.14:22-10.0.0.1:34380.service: Deactivated successfully. Sep 5 00:27:34.385848 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 00:27:34.387185 systemd-logind[1551]: Session 21 logged out. Waiting for processes to exit. Sep 5 00:27:34.388749 systemd-logind[1551]: Removed session 21. 
Sep 5 00:27:36.627286 containerd[1576]: time="2025-09-05T00:27:36.627228837Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a1a6df97a3ab342d187af65f37b402e1596eb3af1f40bed2b3ca1a5f85e2faaf\" id:\"0b1acf6799670a47be3d73fcd7ccf4d8fded1a76e948377bf9143510398bca32\" pid:5550 exited_at:{seconds:1757032056 nanos:626818507}" Sep 5 00:27:39.389271 systemd[1]: Started sshd@21-10.0.0.14:22-10.0.0.1:34384.service - OpenSSH per-connection server daemon (10.0.0.1:34384). Sep 5 00:27:39.449330 sshd[5561]: Accepted publickey for core from 10.0.0.1 port 34384 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:39.450901 sshd-session[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:39.455644 systemd-logind[1551]: New session 22 of user core. Sep 5 00:27:39.461153 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 5 00:27:39.578322 sshd[5564]: Connection closed by 10.0.0.1 port 34384 Sep 5 00:27:39.578705 sshd-session[5561]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:39.583150 systemd[1]: sshd@21-10.0.0.14:22-10.0.0.1:34384.service: Deactivated successfully. Sep 5 00:27:39.585333 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 00:27:39.586182 systemd-logind[1551]: Session 22 logged out. Waiting for processes to exit. Sep 5 00:27:39.587799 systemd-logind[1551]: Removed session 22. Sep 5 00:27:43.892217 containerd[1576]: time="2025-09-05T00:27:43.892091238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a1a6df97a3ab342d187af65f37b402e1596eb3af1f40bed2b3ca1a5f85e2faaf\" id:\"ff22fc8807c1c45c29eac32a97d00cc106e87c04dcd1eb4130e170eaa83f7ae4\" pid:5592 exited_at:{seconds:1757032063 nanos:890651976}" Sep 5 00:27:44.597612 systemd[1]: Started sshd@22-10.0.0.14:22-10.0.0.1:40240.service - OpenSSH per-connection server daemon (10.0.0.1:40240). 
Sep 5 00:27:44.667899 sshd[5604]: Accepted publickey for core from 10.0.0.1 port 40240 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:44.669804 sshd-session[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:44.674607 systemd-logind[1551]: New session 23 of user core. Sep 5 00:27:44.681159 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 00:27:44.801452 sshd[5607]: Connection closed by 10.0.0.1 port 40240 Sep 5 00:27:44.801874 sshd-session[5604]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:44.807085 systemd[1]: sshd@22-10.0.0.14:22-10.0.0.1:40240.service: Deactivated successfully. Sep 5 00:27:44.809539 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 00:27:44.810568 systemd-logind[1551]: Session 23 logged out. Waiting for processes to exit. Sep 5 00:27:44.812163 systemd-logind[1551]: Removed session 23. Sep 5 00:27:49.818658 systemd[1]: Started sshd@23-10.0.0.14:22-10.0.0.1:40246.service - OpenSSH per-connection server daemon (10.0.0.1:40246). Sep 5 00:27:49.884644 sshd[5620]: Accepted publickey for core from 10.0.0.1 port 40246 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:49.887144 sshd-session[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:49.892808 systemd-logind[1551]: New session 24 of user core. Sep 5 00:27:49.900209 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 5 00:27:50.021264 sshd[5623]: Connection closed by 10.0.0.1 port 40246 Sep 5 00:27:50.021636 sshd-session[5620]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:50.026212 systemd[1]: sshd@23-10.0.0.14:22-10.0.0.1:40246.service: Deactivated successfully. Sep 5 00:27:50.028473 systemd[1]: session-24.scope: Deactivated successfully. Sep 5 00:27:50.029428 systemd-logind[1551]: Session 24 logged out. Waiting for processes to exit. 
Sep 5 00:27:50.030794 systemd-logind[1551]: Removed session 24. Sep 5 00:27:53.759687 containerd[1576]: time="2025-09-05T00:27:53.759627029Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2f3fefb34074de20091c9dc54e607ea407a38f864d711a89e78464c5a7052ac\" id:\"cb9c6f3cf4d1890499c25be51eb835e0fe31021484df9dbff7278f8392eff2f7\" pid:5647 exited_at:{seconds:1757032073 nanos:759189150}" Sep 5 00:27:55.038372 systemd[1]: Started sshd@24-10.0.0.14:22-10.0.0.1:41504.service - OpenSSH per-connection server daemon (10.0.0.1:41504). Sep 5 00:27:55.172525 sshd[5661]: Accepted publickey for core from 10.0.0.1 port 41504 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:27:55.174730 sshd-session[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:55.180980 systemd-logind[1551]: New session 25 of user core. Sep 5 00:27:55.192435 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 5 00:27:55.336014 sshd[5664]: Connection closed by 10.0.0.1 port 41504 Sep 5 00:27:55.336831 sshd-session[5661]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:55.340637 systemd[1]: sshd@24-10.0.0.14:22-10.0.0.1:41504.service: Deactivated successfully. Sep 5 00:27:55.342953 systemd[1]: session-25.scope: Deactivated successfully. Sep 5 00:27:55.346630 systemd-logind[1551]: Session 25 logged out. Waiting for processes to exit. Sep 5 00:27:55.347633 systemd-logind[1551]: Removed session 25.