Sep 11 00:25:08.952891 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 22:15:45 -00 2025
Sep 11 00:25:08.952940 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=20820f07706ad5590d38fe5324b9055d59a89dc1109fdc449cad1a53209b9dbd
Sep 11 00:25:08.952956 kernel: BIOS-provided physical RAM map:
Sep 11 00:25:08.952965 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 11 00:25:08.952973 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 11 00:25:08.952982 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 11 00:25:08.952992 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 11 00:25:08.953001 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 11 00:25:08.953010 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 11 00:25:08.953021 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 11 00:25:08.953030 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 11 00:25:08.953039 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 11 00:25:08.953048 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 11 00:25:08.953057 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 11 00:25:08.953068 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 11 00:25:08.953093 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 11 00:25:08.953113 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 11 00:25:08.953123 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 11 00:25:08.953132 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 11 00:25:08.953142 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 11 00:25:08.953151 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 11 00:25:08.953160 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 11 00:25:08.953169 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 11 00:25:08.953184 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 11 00:25:08.953194 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 11 00:25:08.953208 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 11 00:25:08.953217 kernel: NX (Execute Disable) protection: active
Sep 11 00:25:08.953226 kernel: APIC: Static calls initialized
Sep 11 00:25:08.953236 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 11 00:25:08.953245 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 11 00:25:08.953254 kernel: extended physical RAM map:
Sep 11 00:25:08.953264 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 11 00:25:08.953273 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 11 00:25:08.953283 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 11 00:25:08.953293 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 11 00:25:08.953302 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 11 00:25:08.953315 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 11 00:25:08.953324 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 11 00:25:08.953334 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 11 00:25:08.953343 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 11 00:25:08.953358 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 11 00:25:08.953367 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 11 00:25:08.953380 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 11 00:25:08.953390 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 11 00:25:08.953400 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 11 00:25:08.953410 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 11 00:25:08.953420 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 11 00:25:08.953430 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 11 00:25:08.953440 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 11 00:25:08.953450 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 11 00:25:08.953460 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 11 00:25:08.953470 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 11 00:25:08.953483 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 11 00:25:08.953493 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 11 00:25:08.953520 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 11 00:25:08.953530 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 11 00:25:08.953540 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 11 00:25:08.953550 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 11 00:25:08.953560 kernel: efi: EFI v2.7 by EDK II
Sep 11 00:25:08.953569 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 11 00:25:08.953579 kernel: random: crng init done
Sep 11 00:25:08.953589 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 11 00:25:08.953599 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 11 00:25:08.953613 kernel: secureboot: Secure boot disabled
Sep 11 00:25:08.953622 kernel: SMBIOS 2.8 present.
Sep 11 00:25:08.953632 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 11 00:25:08.953642 kernel: DMI: Memory slots populated: 1/1
Sep 11 00:25:08.953651 kernel: Hypervisor detected: KVM
Sep 11 00:25:08.953661 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 11 00:25:08.953671 kernel: kvm-clock: using sched offset of 4891882722 cycles
Sep 11 00:25:08.953682 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 11 00:25:08.953692 kernel: tsc: Detected 2794.748 MHz processor
Sep 11 00:25:08.953702 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 11 00:25:08.953713 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 11 00:25:08.953726 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 11 00:25:08.953736 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 11 00:25:08.953747 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 11 00:25:08.953757 kernel: Using GB pages for direct mapping
Sep 11 00:25:08.953767 kernel: ACPI: Early table checksum verification disabled
Sep 11 00:25:08.953777 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 11 00:25:08.953787 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 11 00:25:08.953797 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:25:08.953808 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:25:08.953821 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 11 00:25:08.953830 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:25:08.953839 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:25:08.953847 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:25:08.953856 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:25:08.953865 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 11 00:25:08.953874 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 11 00:25:08.953883 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 11 00:25:08.953895 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 11 00:25:08.953904 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 11 00:25:08.953923 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 11 00:25:08.953932 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 11 00:25:08.953941 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 11 00:25:08.953950 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 11 00:25:08.953959 kernel: No NUMA configuration found
Sep 11 00:25:08.953968 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 11 00:25:08.953977 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 11 00:25:08.953991 kernel: Zone ranges:
Sep 11 00:25:08.954001 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 11 00:25:08.954011 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 11 00:25:08.954021 kernel: Normal empty
Sep 11 00:25:08.954031 kernel: Device empty
Sep 11 00:25:08.954041 kernel: Movable zone start for each node
Sep 11 00:25:08.954051 kernel: Early memory node ranges
Sep 11 00:25:08.954061 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 11 00:25:08.954071 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 11 00:25:08.954081 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 11 00:25:08.954094 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 11 00:25:08.954104 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 11 00:25:08.954114 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 11 00:25:08.954124 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 11 00:25:08.954134 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 11 00:25:08.954144 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 11 00:25:08.954154 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 11 00:25:08.954164 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 11 00:25:08.954187 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 11 00:25:08.954198 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 11 00:25:08.954208 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 11 00:25:08.954219 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 11 00:25:08.954232 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 11 00:25:08.954242 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 11 00:25:08.954253 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 11 00:25:08.954271 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 11 00:25:08.954280 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 11 00:25:08.954293 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 11 00:25:08.954302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 11 00:25:08.954311 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 11 00:25:08.954321 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 11 00:25:08.954330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 11 00:25:08.954339 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 11 00:25:08.954348 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 11 00:25:08.954358 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 11 00:25:08.954368 kernel: TSC deadline timer available
Sep 11 00:25:08.954379 kernel: CPU topo: Max. logical packages: 1
Sep 11 00:25:08.954389 kernel: CPU topo: Max. logical dies: 1
Sep 11 00:25:08.954398 kernel: CPU topo: Max. dies per package: 1
Sep 11 00:25:08.954408 kernel: CPU topo: Max. threads per core: 1
Sep 11 00:25:08.954418 kernel: CPU topo: Num. cores per package: 4
Sep 11 00:25:08.954429 kernel: CPU topo: Num. threads per package: 4
Sep 11 00:25:08.954438 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 11 00:25:08.954448 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 11 00:25:08.954457 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 11 00:25:08.954469 kernel: kvm-guest: setup PV sched yield
Sep 11 00:25:08.954480 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 11 00:25:08.954490 kernel: Booting paravirtualized kernel on KVM
Sep 11 00:25:08.954528 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 11 00:25:08.954539 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 11 00:25:08.954550 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 11 00:25:08.954560 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 11 00:25:08.954571 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 11 00:25:08.954581 kernel: kvm-guest: PV spinlocks enabled
Sep 11 00:25:08.954595 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 11 00:25:08.954607 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=20820f07706ad5590d38fe5324b9055d59a89dc1109fdc449cad1a53209b9dbd
Sep 11 00:25:08.954618 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 00:25:08.954628 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 11 00:25:08.954639 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 11 00:25:08.954649 kernel: Fallback order for Node 0: 0
Sep 11 00:25:08.954660 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 11 00:25:08.954670 kernel: Policy zone: DMA32
Sep 11 00:25:08.954684 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 00:25:08.954694 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 11 00:25:08.954705 kernel: ftrace: allocating 40106 entries in 157 pages
Sep 11 00:25:08.954715 kernel: ftrace: allocated 157 pages with 5 groups
Sep 11 00:25:08.954726 kernel: Dynamic Preempt: voluntary
Sep 11 00:25:08.954737 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 00:25:08.954748 kernel: rcu: RCU event tracing is enabled.
Sep 11 00:25:08.954759 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 11 00:25:08.954770 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 00:25:08.954783 kernel: Rude variant of Tasks RCU enabled.
Sep 11 00:25:08.954794 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 00:25:08.954804 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 00:25:08.954815 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 11 00:25:08.954825 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:25:08.954836 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:25:08.954847 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:25:08.954857 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 11 00:25:08.954868 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 11 00:25:08.954878 kernel: Console: colour dummy device 80x25
Sep 11 00:25:08.954892 kernel: printk: legacy console [ttyS0] enabled
Sep 11 00:25:08.954903 kernel: ACPI: Core revision 20240827
Sep 11 00:25:08.954923 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 11 00:25:08.954934 kernel: APIC: Switch to symmetric I/O mode setup
Sep 11 00:25:08.954945 kernel: x2apic enabled
Sep 11 00:25:08.954956 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 11 00:25:08.954966 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 11 00:25:08.954977 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 11 00:25:08.954987 kernel: kvm-guest: setup PV IPIs
Sep 11 00:25:08.955002 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 11 00:25:08.955013 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 11 00:25:08.955023 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 11 00:25:08.955034 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 11 00:25:08.955045 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 11 00:25:08.955055 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 11 00:25:08.955066 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 11 00:25:08.955076 kernel: Spectre V2 : Mitigation: Retpolines
Sep 11 00:25:08.955090 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 11 00:25:08.955101 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 11 00:25:08.955111 kernel: active return thunk: retbleed_return_thunk
Sep 11 00:25:08.955122 kernel: RETBleed: Mitigation: untrained return thunk
Sep 11 00:25:08.955132 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 11 00:25:08.955143 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 11 00:25:08.955154 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 11 00:25:08.955165 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 11 00:25:08.955176 kernel: active return thunk: srso_return_thunk
Sep 11 00:25:08.955190 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 11 00:25:08.955201 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 11 00:25:08.955211 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 11 00:25:08.955222 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 11 00:25:08.955233 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 11 00:25:08.955243 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 11 00:25:08.955254 kernel: Freeing SMP alternatives memory: 32K
Sep 11 00:25:08.955264 kernel: pid_max: default: 32768 minimum: 301
Sep 11 00:25:08.955275 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 11 00:25:08.955288 kernel: landlock: Up and running.
Sep 11 00:25:08.955299 kernel: SELinux: Initializing.
Sep 11 00:25:08.955309 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 00:25:08.955320 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 00:25:08.955331 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 11 00:25:08.955342 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 11 00:25:08.955352 kernel: ... version: 0
Sep 11 00:25:08.955363 kernel: ... bit width: 48
Sep 11 00:25:08.955373 kernel: ... generic registers: 6
Sep 11 00:25:08.955387 kernel: ... value mask: 0000ffffffffffff
Sep 11 00:25:08.955397 kernel: ... max period: 00007fffffffffff
Sep 11 00:25:08.955408 kernel: ... fixed-purpose events: 0
Sep 11 00:25:08.955419 kernel: ... event mask: 000000000000003f
Sep 11 00:25:08.955429 kernel: signal: max sigframe size: 1776
Sep 11 00:25:08.955439 kernel: rcu: Hierarchical SRCU implementation.
Sep 11 00:25:08.955450 kernel: rcu: Max phase no-delay instances is 400.
Sep 11 00:25:08.955461 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 11 00:25:08.955472 kernel: smp: Bringing up secondary CPUs ...
Sep 11 00:25:08.955486 kernel: smpboot: x86: Booting SMP configuration:
Sep 11 00:25:08.955496 kernel: .... node #0, CPUs: #1 #2 #3
Sep 11 00:25:08.955523 kernel: smp: Brought up 1 node, 4 CPUs
Sep 11 00:25:08.955535 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 11 00:25:08.955550 kernel: Memory: 2422676K/2565800K available (14336K kernel code, 2429K rwdata, 9960K rodata, 54036K init, 2932K bss, 137196K reserved, 0K cma-reserved)
Sep 11 00:25:08.955564 kernel: devtmpfs: initialized
Sep 11 00:25:08.955578 kernel: x86/mm: Memory block size: 128MB
Sep 11 00:25:08.955592 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 11 00:25:08.955607 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 11 00:25:08.955628 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 11 00:25:08.955638 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 11 00:25:08.955649 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 11 00:25:08.955659 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 11 00:25:08.955670 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 11 00:25:08.955680 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 11 00:25:08.955691 kernel: pinctrl core: initialized pinctrl subsystem
Sep 11 00:25:08.955701 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 11 00:25:08.955715 kernel: audit: initializing netlink subsys (disabled)
Sep 11 00:25:08.955726 kernel: audit: type=2000 audit(1757550305.461:1): state=initialized audit_enabled=0 res=1
Sep 11 00:25:08.955736 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 11 00:25:08.955747 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 11 00:25:08.955757 kernel: cpuidle: using governor menu
Sep 11 00:25:08.955768 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 11 00:25:08.955777 kernel: dca service started, version 1.12.1
Sep 11 00:25:08.955788 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 11 00:25:08.955798 kernel: PCI: Using configuration type 1 for base access
Sep 11 00:25:08.955808 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 11 00:25:08.955821 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 11 00:25:08.955831 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 11 00:25:08.955841 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 00:25:08.955851 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 00:25:08.955861 kernel: ACPI: Added _OSI(Module Device)
Sep 11 00:25:08.955871 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 00:25:08.955881 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 00:25:08.955891 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 00:25:08.955901 kernel: ACPI: Interpreter enabled
Sep 11 00:25:08.955924 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 11 00:25:08.955936 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 11 00:25:08.955947 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 11 00:25:08.955957 kernel: PCI: Using E820 reservations for host bridge windows
Sep 11 00:25:08.955967 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 11 00:25:08.955978 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 11 00:25:08.956304 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 11 00:25:08.956451 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 11 00:25:08.956609 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 11 00:25:08.956623 kernel: PCI host bridge to bus 0000:00
Sep 11 00:25:08.956770 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 11 00:25:08.956895 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 11 00:25:08.957030 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 11 00:25:08.957153 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 11 00:25:08.957281 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 11 00:25:08.957405 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 11 00:25:08.957583 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 11 00:25:08.957754 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 11 00:25:08.957905 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 11 00:25:08.958056 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 11 00:25:08.958191 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 11 00:25:08.958332 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 11 00:25:08.958467 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 11 00:25:08.958638 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 11 00:25:08.958782 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 11 00:25:08.958928 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 11 00:25:08.959067 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 11 00:25:08.959219 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 11 00:25:08.959362 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 11 00:25:08.959498 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 11 00:25:08.959654 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 11 00:25:08.959892 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 11 00:25:08.960043 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 11 00:25:08.960179 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 11 00:25:08.960319 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 11 00:25:08.960453 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 11 00:25:08.960620 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 11 00:25:08.960756 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 11 00:25:08.960905 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 11 00:25:08.961054 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 11 00:25:08.961189 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 11 00:25:08.961340 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 11 00:25:08.961486 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 11 00:25:08.961500 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 11 00:25:08.961529 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 11 00:25:08.961539 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 11 00:25:08.961549 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 11 00:25:08.961559 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 11 00:25:08.961569 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 11 00:25:08.961583 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 11 00:25:08.961593 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 11 00:25:08.961604 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 11 00:25:08.961614 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 11 00:25:08.961624 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 11 00:25:08.961634 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 11 00:25:08.961644 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 11 00:25:08.961654 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 11 00:25:08.961664 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 11 00:25:08.961677 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 11 00:25:08.961687 kernel: iommu: Default domain type: Translated
Sep 11 00:25:08.961698 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 11 00:25:08.961708 kernel: efivars: Registered efivars operations
Sep 11 00:25:08.961718 kernel: PCI: Using ACPI for IRQ routing
Sep 11 00:25:08.961728 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 11 00:25:08.961738 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 11 00:25:08.961748 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 11 00:25:08.961758 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 11 00:25:08.961770 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 11 00:25:08.961780 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 11 00:25:08.961790 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 11 00:25:08.961800 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 11 00:25:08.961810 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 11 00:25:08.962004 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 11 00:25:08.962150 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 11 00:25:08.962288 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 11 00:25:08.962305 kernel: vgaarb: loaded
Sep 11 00:25:08.962316 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 11 00:25:08.962326 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 11 00:25:08.962336 kernel: clocksource: Switched to clocksource kvm-clock
Sep 11 00:25:08.962346 kernel: VFS: Disk quotas dquot_6.6.0
Sep 11 00:25:08.962356 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 11 00:25:08.962367 kernel: pnp: PnP ACPI init
Sep 11 00:25:08.962553 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 11 00:25:08.962575 kernel: pnp: PnP ACPI: found 6 devices
Sep 11 00:25:08.962585 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 11 00:25:08.962596 kernel: NET: Registered PF_INET protocol family
Sep 11 00:25:08.962606 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 11 00:25:08.962617 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 11 00:25:08.962628 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 11 00:25:08.962642 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 11 00:25:08.962653 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 11 00:25:08.962664 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 11 00:25:08.962677 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 00:25:08.962688 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 00:25:08.962699 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 11 00:25:08.962710 kernel: NET: Registered PF_XDP protocol family
Sep 11 00:25:08.962863 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 11 00:25:08.963018 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 11 00:25:08.963143 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 11 00:25:08.963260 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 11 00:25:08.963380 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 11 00:25:08.963497 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 11 00:25:08.963639 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 11 00:25:08.963755 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 11 00:25:08.963768 kernel: PCI: CLS 0 bytes, default 64
Sep 11 00:25:08.963778 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 11 00:25:08.963788 kernel: Initialise system trusted keyrings
Sep 11 00:25:08.963802 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 11 00:25:08.963811 kernel: Key type asymmetric registered
Sep 11 00:25:08.963821 kernel: Asymmetric key parser 'x509' registered
Sep 11 00:25:08.963831 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 11 00:25:08.963841 kernel: io scheduler mq-deadline registered
Sep 11 00:25:08.963850 kernel: io scheduler kyber registered
Sep 11 00:25:08.963860 kernel: io scheduler bfq registered
Sep 11 00:25:08.963872 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 11 00:25:08.963883 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 11 00:25:08.963893 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 11 00:25:08.963903 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 11 00:25:08.963922 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 00:25:08.963932 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 11 00:25:08.963942 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 11 00:25:08.963953 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 11 00:25:08.963963 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 11 00:25:08.964108 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 11 00:25:08.964123 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 11 00:25:08.964243 kernel: rtc_cmos 00:04: registered as rtc0
Sep 11 00:25:08.964363 kernel: rtc_cmos 00:04: setting system clock to 2025-09-11T00:25:08 UTC (1757550308)
Sep 11 00:25:08.964483 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 11 00:25:08.964497 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 11 00:25:08.964544 kernel: efifb: probing for efifb
Sep 11 00:25:08.964554 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 11 00:25:08.964568 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 11 00:25:08.964578 kernel: efifb: scrolling: redraw
Sep 11 00:25:08.964588 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 11 00:25:08.964598 kernel: Console: switching to colour frame buffer device 160x50
Sep 11 00:25:08.964608 kernel: fb0: EFI VGA frame buffer device
Sep 11 00:25:08.964619 kernel: pstore: Using crash dump compression: deflate
Sep 11 00:25:08.964630 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 11 00:25:08.964642 kernel: NET: Registered PF_INET6 protocol family
Sep 11 00:25:08.964653 kernel: Segment Routing with IPv6
Sep 11 00:25:08.964667 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 00:25:08.964677 kernel: NET: Registered PF_PACKET protocol family
Sep 11 00:25:08.964688 kernel: Key type dns_resolver registered
Sep 11 00:25:08.964699 kernel: IPI shorthand broadcast: enabled
Sep 11 00:25:08.964711 kernel: sched_clock: Marking stable (3966003998, 203004829)->(4229800808, -60791981)
Sep 11 00:25:08.964722 kernel: registered taskstats version 1
Sep 11 00:25:08.964733 kernel: Loading compiled-in X.509 certificates
Sep 11 00:25:08.964744 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 941433bdd955e1c3aa4064827516bddd510466ee'
Sep 11 00:25:08.964755 kernel: Demotion targets for Node 0: null
Sep 11 00:25:08.964768 kernel: Key type .fscrypt registered
Sep 11
00:25:08.964778 kernel: Key type fscrypt-provisioning registered Sep 11 00:25:08.964790 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 11 00:25:08.964801 kernel: ima: Allocated hash algorithm: sha1 Sep 11 00:25:08.964812 kernel: ima: No architecture policies found Sep 11 00:25:08.964823 kernel: clk: Disabling unused clocks Sep 11 00:25:08.964834 kernel: Warning: unable to open an initial console. Sep 11 00:25:08.964846 kernel: Freeing unused kernel image (initmem) memory: 54036K Sep 11 00:25:08.964857 kernel: Write protecting the kernel read-only data: 24576k Sep 11 00:25:08.964869 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 11 00:25:08.964879 kernel: Run /init as init process Sep 11 00:25:08.964888 kernel: with arguments: Sep 11 00:25:08.964898 kernel: /init Sep 11 00:25:08.964908 kernel: with environment: Sep 11 00:25:08.964927 kernel: HOME=/ Sep 11 00:25:08.964937 kernel: TERM=linux Sep 11 00:25:08.964946 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 11 00:25:08.964957 systemd[1]: Successfully made /usr/ read-only. Sep 11 00:25:08.964974 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 00:25:08.964985 systemd[1]: Detected virtualization kvm. Sep 11 00:25:08.964995 systemd[1]: Detected architecture x86-64. Sep 11 00:25:08.965005 systemd[1]: Running in initrd. Sep 11 00:25:08.965015 systemd[1]: No hostname configured, using default hostname. Sep 11 00:25:08.965026 systemd[1]: Hostname set to . Sep 11 00:25:08.965036 systemd[1]: Initializing machine ID from VM UUID. Sep 11 00:25:08.965050 systemd[1]: Queued start job for default target initrd.target. 
Sep 11 00:25:08.965060 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:25:08.965071 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:25:08.965082 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 11 00:25:08.965092 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:25:08.965103 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 11 00:25:08.965114 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 11 00:25:08.965128 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 11 00:25:08.965138 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 11 00:25:08.965149 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:25:08.965159 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:25:08.965170 systemd[1]: Reached target paths.target - Path Units.
Sep 11 00:25:08.965180 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:25:08.965191 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:25:08.965201 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 00:25:08.965214 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:25:08.965224 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:25:08.965235 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 11 00:25:08.965245 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 11 00:25:08.965256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:25:08.965266 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:25:08.965276 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:25:08.965287 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 00:25:08.965297 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 11 00:25:08.965310 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:25:08.965320 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 11 00:25:08.965331 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 11 00:25:08.965341 systemd[1]: Starting systemd-fsck-usr.service...
Sep 11 00:25:08.965352 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:25:08.965362 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:25:08.965373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:25:08.965383 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 11 00:25:08.965396 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:25:08.965407 systemd[1]: Finished systemd-fsck-usr.service.
Sep 11 00:25:08.965419 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 00:25:08.965470 systemd-journald[220]: Collecting audit messages is disabled.
Sep 11 00:25:08.965498 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:25:08.965523 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 11 00:25:08.965534 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 00:25:08.965545 systemd-journald[220]: Journal started
Sep 11 00:25:08.965574 systemd-journald[220]: Runtime Journal (/run/log/journal/6d04a309854c4a4aa19c144e5228ace0) is 6M, max 48.4M, 42.4M free.
Sep 11 00:25:08.950660 systemd-modules-load[221]: Inserted module 'overlay'
Sep 11 00:25:08.971519 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:25:08.976208 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:25:08.977199 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 00:25:08.985557 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 11 00:25:08.988778 systemd-modules-load[221]: Inserted module 'br_netfilter'
Sep 11 00:25:08.989526 kernel: Bridge firewalling registered
Sep 11 00:25:08.990030 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:25:08.991826 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:25:08.992050 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 11 00:25:08.998903 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:25:09.006061 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:25:09.006851 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:25:09.008919 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:25:09.014390 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 11 00:25:09.016651 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 00:25:09.047824 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=20820f07706ad5590d38fe5324b9055d59a89dc1109fdc449cad1a53209b9dbd
Sep 11 00:25:09.067618 systemd-resolved[261]: Positive Trust Anchors:
Sep 11 00:25:09.067641 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 00:25:09.067671 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 00:25:09.071062 systemd-resolved[261]: Defaulting to hostname 'linux'.
Sep 11 00:25:09.073044 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 00:25:09.077999 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:25:09.166572 kernel: SCSI subsystem initialized
Sep 11 00:25:09.175534 kernel: Loading iSCSI transport class v2.0-870.
Sep 11 00:25:09.186535 kernel: iscsi: registered transport (tcp)
Sep 11 00:25:09.216558 kernel: iscsi: registered transport (qla4xxx)
Sep 11 00:25:09.216637 kernel: QLogic iSCSI HBA Driver
Sep 11 00:25:09.240126 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:25:09.273205 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:25:09.277672 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:25:09.373803 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:25:09.375583 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 11 00:25:09.437568 kernel: raid6: avx2x4 gen() 24949 MB/s
Sep 11 00:25:09.454562 kernel: raid6: avx2x2 gen() 23789 MB/s
Sep 11 00:25:09.471737 kernel: raid6: avx2x1 gen() 21719 MB/s
Sep 11 00:25:09.471841 kernel: raid6: using algorithm avx2x4 gen() 24949 MB/s
Sep 11 00:25:09.489691 kernel: raid6: .... xor() 7190 MB/s, rmw enabled
Sep 11 00:25:09.489773 kernel: raid6: using avx2x2 recovery algorithm
Sep 11 00:25:09.510549 kernel: xor: automatically using best checksumming function avx
Sep 11 00:25:09.694551 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 11 00:25:09.704907 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:25:09.708476 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:25:09.736045 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Sep 11 00:25:09.742456 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:25:09.746289 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 11 00:25:09.778971 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Sep 11 00:25:09.815184 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:25:09.817526 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:25:09.920606 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:25:09.923094 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 11 00:25:09.968767 kernel: cryptd: max_cpu_qlen set to 1000
Sep 11 00:25:10.047549 kernel: libata version 3.00 loaded.
Sep 11 00:25:10.050833 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 11 00:25:10.051207 kernel: AES CTR mode by8 optimization enabled
Sep 11 00:25:10.052447 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 11 00:25:10.056543 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 11 00:25:10.070159 kernel: ahci 0000:00:1f.2: version 3.0
Sep 11 00:25:10.070474 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 11 00:25:10.071141 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 11 00:25:10.072272 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 11 00:25:10.072470 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 11 00:25:10.075362 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 11 00:25:10.075413 kernel: GPT:9289727 != 19775487
Sep 11 00:25:10.075424 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 11 00:25:10.075435 kernel: GPT:9289727 != 19775487
Sep 11 00:25:10.076976 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 11 00:25:10.076998 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:25:10.077869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:25:10.078063 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:25:10.082194 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:25:10.083623 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:25:10.124374 kernel: scsi host0: ahci
Sep 11 00:25:10.124648 kernel: scsi host1: ahci
Sep 11 00:25:10.125017 kernel: scsi host2: ahci
Sep 11 00:25:10.125223 kernel: scsi host3: ahci
Sep 11 00:25:10.125575 kernel: scsi host4: ahci
Sep 11 00:25:10.125740 kernel: scsi host5: ahci
Sep 11 00:25:10.125913 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 lpm-pol 1
Sep 11 00:25:10.125932 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 lpm-pol 1
Sep 11 00:25:10.125944 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 lpm-pol 1
Sep 11 00:25:10.125956 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 lpm-pol 1
Sep 11 00:25:10.125968 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 lpm-pol 1
Sep 11 00:25:10.125980 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 lpm-pol 1
Sep 11 00:25:10.103139 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:25:10.134226 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:25:10.134365 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:25:10.143022 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:25:10.156553 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 11 00:25:10.187430 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 11 00:25:10.196239 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 00:25:10.216191 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:25:10.226777 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 11 00:25:10.230209 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 11 00:25:10.232595 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 11 00:25:10.333979 disk-uuid[623]: Primary Header is updated.
Sep 11 00:25:10.333979 disk-uuid[623]: Secondary Entries is updated.
Sep 11 00:25:10.333979 disk-uuid[623]: Secondary Header is updated.
Sep 11 00:25:10.337696 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:25:10.420561 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 11 00:25:10.435561 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 11 00:25:10.435644 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 11 00:25:10.438202 kernel: ata3.00: LPM support broken, forcing max_power
Sep 11 00:25:10.438283 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 11 00:25:10.438296 kernel: ata3.00: applying bridge limits
Sep 11 00:25:10.438307 kernel: ata3.00: LPM support broken, forcing max_power
Sep 11 00:25:10.439532 kernel: ata3.00: configured for UDMA/100
Sep 11 00:25:10.441544 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 11 00:25:10.444532 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 11 00:25:10.444565 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 11 00:25:10.445556 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 11 00:25:10.504564 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 11 00:25:10.504921 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 11 00:25:10.530565 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 11 00:25:10.830904 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:25:10.833029 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:25:10.835189 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:25:10.837929 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:25:10.841421 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 11 00:25:10.878581 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:25:11.347536 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:25:11.347888 disk-uuid[624]: The operation has completed successfully.
Sep 11 00:25:11.375714 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 11 00:25:11.375863 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 11 00:25:11.416621 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 11 00:25:11.471671 sh[662]: Success
Sep 11 00:25:11.508565 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 11 00:25:11.508651 kernel: device-mapper: uevent: version 1.0.3
Sep 11 00:25:11.508668 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 11 00:25:11.518553 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 11 00:25:11.550363 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 11 00:25:11.552889 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 11 00:25:11.576034 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 11 00:25:11.583331 kernel: BTRFS: device fsid 1d23f222-37c7-4ff5-813e-235ce83bed46 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (674)
Sep 11 00:25:11.583365 kernel: BTRFS info (device dm-0): first mount of filesystem 1d23f222-37c7-4ff5-813e-235ce83bed46
Sep 11 00:25:11.583377 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:25:11.589239 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 11 00:25:11.589270 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 11 00:25:11.590763 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 11 00:25:11.622347 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:25:11.623718 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 11 00:25:11.625089 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 11 00:25:11.629137 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 11 00:25:11.657550 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709)
Sep 11 00:25:11.660157 kernel: BTRFS info (device vda6): first mount of filesystem dfd585e5-5346-4151-8d09-25f0fad7f81c
Sep 11 00:25:11.660191 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:25:11.663546 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 00:25:11.663584 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 00:25:11.668527 kernel: BTRFS info (device vda6): last unmount of filesystem dfd585e5-5346-4151-8d09-25f0fad7f81c
Sep 11 00:25:11.669493 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 11 00:25:11.672066 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 11 00:25:11.797307 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:25:11.803677 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 00:25:11.823739 ignition[754]: Ignition 2.21.0
Sep 11 00:25:11.823751 ignition[754]: Stage: fetch-offline
Sep 11 00:25:11.823783 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:25:11.823792 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:25:11.824574 ignition[754]: parsed url from cmdline: ""
Sep 11 00:25:11.824580 ignition[754]: no config URL provided
Sep 11 00:25:11.824589 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Sep 11 00:25:11.824602 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Sep 11 00:25:11.824639 ignition[754]: op(1): [started] loading QEMU firmware config module
Sep 11 00:25:11.824645 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 11 00:25:11.834955 ignition[754]: op(1): [finished] loading QEMU firmware config module
Sep 11 00:25:11.865010 systemd-networkd[849]: lo: Link UP
Sep 11 00:25:11.865020 systemd-networkd[849]: lo: Gained carrier
Sep 11 00:25:11.866943 systemd-networkd[849]: Enumeration completed
Sep 11 00:25:11.867313 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:25:11.867317 systemd-networkd[849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 00:25:11.867751 systemd-networkd[849]: eth0: Link UP
Sep 11 00:25:11.868543 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 00:25:11.868778 systemd-networkd[849]: eth0: Gained carrier
Sep 11 00:25:11.868798 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:25:11.870496 systemd[1]: Reached target network.target - Network.
Sep 11 00:25:11.882591 systemd-networkd[849]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 11 00:25:11.887557 ignition[754]: parsing config with SHA512: b51f2b1a505656ed489693345b0f5777d721a8b393f7e1052af6062e915eddfcb5617df52f8a9d1a911de7b2ed42bfeec8d566d7ca07a022bc58c86ac3962288
Sep 11 00:25:11.891540 unknown[754]: fetched base config from "system"
Sep 11 00:25:11.891552 unknown[754]: fetched user config from "qemu"
Sep 11 00:25:11.894565 ignition[754]: fetch-offline: fetch-offline passed
Sep 11 00:25:11.894639 ignition[754]: Ignition finished successfully
Sep 11 00:25:11.897686 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:25:11.899077 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 11 00:25:11.900011 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 11 00:25:11.946237 ignition[857]: Ignition 2.21.0
Sep 11 00:25:11.946249 ignition[857]: Stage: kargs
Sep 11 00:25:11.946375 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:25:11.946385 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:25:11.947823 ignition[857]: kargs: kargs passed
Sep 11 00:25:11.947895 ignition[857]: Ignition finished successfully
Sep 11 00:25:11.953243 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 11 00:25:11.955549 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 11 00:25:11.987240 ignition[865]: Ignition 2.21.0
Sep 11 00:25:11.987255 ignition[865]: Stage: disks
Sep 11 00:25:11.987408 ignition[865]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:25:11.987420 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:25:11.989197 ignition[865]: disks: disks passed
Sep 11 00:25:11.989308 ignition[865]: Ignition finished successfully
Sep 11 00:25:11.992927 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 11 00:25:11.995452 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 11 00:25:11.997576 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 11 00:25:11.997669 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 00:25:12.000012 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 00:25:12.001914 systemd[1]: Reached target basic.target - Basic System.
Sep 11 00:25:12.006537 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 11 00:25:12.039645 systemd-resolved[261]: Detected conflict on linux IN A 10.0.0.132
Sep 11 00:25:12.039662 systemd-resolved[261]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Sep 11 00:25:12.043199 systemd-fsck[875]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 11 00:25:12.793543 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 11 00:25:12.802799 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 11 00:25:13.108558 kernel: EXT4-fs (vda9): mounted filesystem 8ebc908f-0860-41e2-beed-287b778bd592 r/w with ordered data mode. Quota mode: none.
Sep 11 00:25:13.109587 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 11 00:25:13.110332 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:25:13.112302 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 00:25:13.114338 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 11 00:25:13.115484 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 11 00:25:13.115545 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 11 00:25:13.115570 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:25:13.134083 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 11 00:25:13.136577 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 11 00:25:13.142527 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (883)
Sep 11 00:25:13.142554 kernel: BTRFS info (device vda6): first mount of filesystem dfd585e5-5346-4151-8d09-25f0fad7f81c
Sep 11 00:25:13.142565 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:25:13.147538 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 00:25:13.147575 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 00:25:13.149560 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 00:25:13.179787 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory
Sep 11 00:25:13.186923 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory
Sep 11 00:25:13.191772 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory
Sep 11 00:25:13.198863 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 11 00:25:13.295239 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 11 00:25:13.296870 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 11 00:25:13.299061 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 11 00:25:13.325663 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 11 00:25:13.331104 kernel: BTRFS info (device vda6): last unmount of filesystem dfd585e5-5346-4151-8d09-25f0fad7f81c
Sep 11 00:25:13.349727 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 11 00:25:13.376568 ignition[998]: INFO : Ignition 2.21.0
Sep 11 00:25:13.376568 ignition[998]: INFO : Stage: mount
Sep 11 00:25:13.376568 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:25:13.376568 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:25:13.381960 ignition[998]: INFO : mount: mount passed
Sep 11 00:25:13.382818 ignition[998]: INFO : Ignition finished successfully
Sep 11 00:25:13.386304 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 11 00:25:13.387444 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 11 00:25:13.714762 systemd-networkd[849]: eth0: Gained IPv6LL
Sep 11 00:25:14.111607 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 00:25:14.148321 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1009)
Sep 11 00:25:14.148374 kernel: BTRFS info (device vda6): first mount of filesystem dfd585e5-5346-4151-8d09-25f0fad7f81c
Sep 11 00:25:14.148390 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:25:14.152541 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 00:25:14.152569 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 00:25:14.154090 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 00:25:14.192331 ignition[1026]: INFO : Ignition 2.21.0
Sep 11 00:25:14.192331 ignition[1026]: INFO : Stage: files
Sep 11 00:25:14.194436 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:25:14.194436 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:25:14.194436 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping
Sep 11 00:25:14.194436 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 11 00:25:14.194436 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 11 00:25:14.201439 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 11 00:25:14.201439 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 11 00:25:14.201439 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 11 00:25:14.201439 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 11 00:25:14.201439 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 11 00:25:14.197428 unknown[1026]: wrote ssh authorized keys file for user: core
Sep 11 00:25:14.244834 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 11 00:25:14.625755 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 11 00:25:14.628218 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 11 00:25:14.628218 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 11 00:25:14.628218 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:25:14.628218 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:25:14.628218 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:25:14.628218 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:25:14.628218 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:25:14.628218 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:25:14.681094 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:25:14.717768 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:25:14.717768 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 11 00:25:14.846531 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 11 00:25:14.846531 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 11 00:25:14.879391 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 11 00:25:15.426777 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 11 00:25:16.259244 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 11 00:25:16.259244 ignition[1026]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 11 00:25:16.264163 ignition[1026]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:25:16.383987 ignition[1026]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:25:16.383987 ignition[1026]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 11 00:25:16.389864 ignition[1026]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 11 00:25:16.389864 ignition[1026]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:25:16.389864 ignition[1026]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:25:16.389864 ignition[1026]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 11 00:25:16.389864 ignition[1026]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:25:16.481100 ignition[1026]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:25:16.492610 ignition[1026]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:25:16.494814 ignition[1026]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:25:16.494814 ignition[1026]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 11 00:25:16.494814 ignition[1026]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 11 00:25:16.494814 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:25:16.494814 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:25:16.494814 ignition[1026]: INFO : files: files passed
Sep 11 00:25:16.494814 ignition[1026]: INFO : Ignition finished successfully
Sep 11 00:25:16.509972 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 11 00:25:16.521325 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 11 00:25:16.524722 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 11 00:25:16.559543 initrd-setup-root-after-ignition[1054]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 11 00:25:16.559938 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 11 00:25:16.560095 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 11 00:25:16.571091 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:25:16.571091 initrd-setup-root-after-ignition[1057]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:25:16.575635 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:25:16.577410 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:25:16.587454 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 11 00:25:16.591369 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 11 00:25:16.677061 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 11 00:25:16.677245 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 11 00:25:16.692419 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 11 00:25:16.694061 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 11 00:25:16.694449 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 11 00:25:16.695791 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 11 00:25:16.730945 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:25:16.733929 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 11 00:25:16.772345 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:25:16.773805 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:25:16.776300 systemd[1]: Stopped target timers.target - Timer Units.
Sep 11 00:25:16.777696 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 11 00:25:16.777912 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:25:16.782051 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 11 00:25:16.784399 systemd[1]: Stopped target basic.target - Basic System.
Sep 11 00:25:16.786412 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 11 00:25:16.788617 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:25:16.789848 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 11 00:25:16.794045 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:25:16.797143 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 11 00:25:16.800803 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:25:16.802743 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 11 00:25:16.805336 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 11 00:25:16.807847 systemd[1]: Stopped target swap.target - Swaps.
Sep 11 00:25:16.809235 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 11 00:25:16.809400 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:25:16.813695 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:25:16.814973 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:25:16.816001 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 11 00:25:16.816162 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:25:16.818682 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 11 00:25:16.818899 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:25:16.822048 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 11 00:25:16.822229 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:25:16.824662 systemd[1]: Stopped target paths.target - Path Units.
Sep 11 00:25:16.825014 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 11 00:25:16.828410 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:25:16.829332 systemd[1]: Stopped target slices.target - Slice Units.
Sep 11 00:25:16.829798 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 11 00:25:16.830133 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 11 00:25:16.830229 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:25:16.835922 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 11 00:25:16.836046 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:25:16.839609 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 11 00:25:16.839816 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:25:16.840565 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 11 00:25:16.840719 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 11 00:25:16.843768 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 11 00:25:16.845463 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 11 00:25:16.847794 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 11 00:25:16.847963 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:25:16.850284 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 11 00:25:16.850401 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:25:16.856961 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 11 00:25:16.857098 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 11 00:25:16.880598 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 11 00:25:16.881688 ignition[1081]: INFO : Ignition 2.21.0
Sep 11 00:25:16.881688 ignition[1081]: INFO : Stage: umount
Sep 11 00:25:16.885189 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:25:16.885189 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:25:16.887294 ignition[1081]: INFO : umount: umount passed
Sep 11 00:25:16.887294 ignition[1081]: INFO : Ignition finished successfully
Sep 11 00:25:16.890433 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 11 00:25:16.891496 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 11 00:25:16.893702 systemd[1]: Stopped target network.target - Network.
Sep 11 00:25:16.893785 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 11 00:25:16.893855 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 11 00:25:16.895460 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 11 00:25:16.895550 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 11 00:25:16.897315 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 11 00:25:16.897382 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 11 00:25:16.899350 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 11 00:25:16.899410 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 11 00:25:16.900226 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 11 00:25:16.904180 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 11 00:25:16.935672 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 11 00:25:16.935840 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 11 00:25:16.940429 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 11 00:25:16.940788 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 11 00:25:16.940922 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 11 00:25:16.944468 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 11 00:25:16.945369 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 11 00:25:16.947113 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 11 00:25:16.947158 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:25:16.950045 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 11 00:25:16.951213 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 11 00:25:16.951266 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:25:16.954029 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 11 00:25:16.954079 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:25:16.956308 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 11 00:25:16.956362 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:25:16.958274 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 11 00:25:16.958330 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:25:16.961469 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:25:16.965637 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 11 00:25:16.965736 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:25:16.977167 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 11 00:25:16.978786 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:25:16.981436 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 11 00:25:16.981523 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:25:16.984392 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 11 00:25:16.984433 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:25:16.986276 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 11 00:25:16.986326 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:25:16.989929 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 11 00:25:16.989985 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:25:16.992554 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 11 00:25:16.992610 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:25:16.996376 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 11 00:25:16.996433 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 11 00:25:16.996487 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:25:17.000745 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 11 00:25:17.000823 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:25:17.004254 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 11 00:25:17.004303 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 00:25:17.007678 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 11 00:25:17.007751 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:25:17.009167 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:25:17.009216 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:25:17.014524 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 11 00:25:17.014583 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 11 00:25:17.014630 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 11 00:25:17.014679 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:25:17.015013 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 11 00:25:17.015121 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 11 00:25:17.024380 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 11 00:25:17.024545 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 11 00:25:17.509261 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 11 00:25:17.509414 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 11 00:25:17.512294 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 11 00:25:17.514300 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 11 00:25:17.514355 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 11 00:25:17.518050 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 11 00:25:17.548721 systemd[1]: Switching root.
Sep 11 00:25:17.582314 systemd-journald[220]: Journal stopped
Sep 11 00:25:19.734569 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 11 00:25:19.734670 kernel: SELinux: policy capability network_peer_controls=1
Sep 11 00:25:19.734693 kernel: SELinux: policy capability open_perms=1
Sep 11 00:25:19.734708 kernel: SELinux: policy capability extended_socket_class=1
Sep 11 00:25:19.734734 kernel: SELinux: policy capability always_check_network=0
Sep 11 00:25:19.734754 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 11 00:25:19.734770 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 11 00:25:19.734784 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 11 00:25:19.734802 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 11 00:25:19.734831 kernel: SELinux: policy capability userspace_initial_context=0
Sep 11 00:25:19.734854 kernel: audit: type=1403 audit(1757550318.748:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 11 00:25:19.734871 systemd[1]: Successfully loaded SELinux policy in 131.898ms.
Sep 11 00:25:19.734890 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.763ms.
Sep 11 00:25:19.734908 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:25:19.734924 systemd[1]: Detected virtualization kvm.
Sep 11 00:25:19.734939 systemd[1]: Detected architecture x86-64.
Sep 11 00:25:19.734955 systemd[1]: Detected first boot.
Sep 11 00:25:19.734978 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 00:25:19.734994 zram_generator::config[1126]: No configuration found.
Sep 11 00:25:19.735017 kernel: Guest personality initialized and is inactive
Sep 11 00:25:19.735032 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 11 00:25:19.735047 kernel: Initialized host personality
Sep 11 00:25:19.735062 kernel: NET: Registered PF_VSOCK protocol family
Sep 11 00:25:19.735077 systemd[1]: Populated /etc with preset unit settings.
Sep 11 00:25:19.735094 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 11 00:25:19.735110 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 11 00:25:19.735129 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 11 00:25:19.735145 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 11 00:25:19.735161 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 11 00:25:19.735176 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 11 00:25:19.735205 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 11 00:25:19.735221 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 11 00:25:19.735237 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 11 00:25:19.735253 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 11 00:25:19.735273 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 11 00:25:19.735289 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 11 00:25:19.735304 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:25:19.735320 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:25:19.735336 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 11 00:25:19.735352 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 11 00:25:19.735368 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 11 00:25:19.735384 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:25:19.735396 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 11 00:25:19.735419 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:25:19.735436 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:25:19.735452 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 11 00:25:19.735466 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 11 00:25:19.735482 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:25:19.735499 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 11 00:25:19.735543 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:25:19.735559 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:25:19.735592 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:25:19.735605 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:25:19.735626 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 11 00:25:19.735639 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 11 00:25:19.735651 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 11 00:25:19.735663 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:25:19.735675 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:25:19.735688 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:25:19.735700 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 11 00:25:19.735715 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 11 00:25:19.735727 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 11 00:25:19.735739 systemd[1]: Mounting media.mount - External Media Directory...
Sep 11 00:25:19.735757 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:25:19.735769 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 11 00:25:19.735781 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 11 00:25:19.735793 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 11 00:25:19.735806 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 11 00:25:19.735820 systemd[1]: Reached target machines.target - Containers.
Sep 11 00:25:19.735833 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 11 00:25:19.735845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:25:19.735857 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:25:19.735878 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 11 00:25:19.735901 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:25:19.735921 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:25:19.735937 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:25:19.735952 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 11 00:25:19.735973 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:25:19.735989 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 11 00:25:19.736005 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 11 00:25:19.736020 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 11 00:25:19.736036 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 11 00:25:19.736052 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 11 00:25:19.736068 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:25:19.736084 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:25:19.736104 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:25:19.736120 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:25:19.736136 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 11 00:25:19.736152 kernel: loop: module loaded
Sep 11 00:25:19.736167 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 11 00:25:19.736187 kernel: fuse: init (API version 7.41)
Sep 11 00:25:19.736205 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:25:19.736221 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 11 00:25:19.736251 systemd[1]: Stopped verity-setup.service.
Sep 11 00:25:19.736268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:25:19.736316 systemd-journald[1191]: Collecting audit messages is disabled.
Sep 11 00:25:19.736351 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 11 00:25:19.736369 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 11 00:25:19.736385 systemd-journald[1191]: Journal started
Sep 11 00:25:19.736421 systemd-journald[1191]: Runtime Journal (/run/log/journal/6d04a309854c4a4aa19c144e5228ace0) is 6M, max 48.4M, 42.4M free.
Sep 11 00:25:19.461477 systemd[1]: Queued start job for default target multi-user.target.
Sep 11 00:25:19.485729 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 11 00:25:19.486232 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 11 00:25:19.737583 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:25:19.741248 systemd[1]: Mounted media.mount - External Media Directory.
Sep 11 00:25:19.745555 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 11 00:25:19.746826 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 11 00:25:19.748079 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 11 00:25:19.749412 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:25:19.751012 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 11 00:25:19.766089 kernel: ACPI: bus type drm_connector registered
Sep 11 00:25:19.778947 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 11 00:25:19.781136 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:25:19.781460 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:25:19.783121 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 00:25:19.783405 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 00:25:19.785032 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:25:19.785319 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:25:19.786956 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 11 00:25:19.787174 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 11 00:25:19.788874 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:25:19.789167 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:25:19.790818 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:25:19.803601 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:25:19.805314 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 11 00:25:19.807077 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 11 00:25:19.820781 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:25:19.823555 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 11 00:25:19.826739 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 11 00:25:19.828048 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 11 00:25:19.828087 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 00:25:19.830989 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 11 00:25:19.839840 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 11 00:25:19.841164 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:25:19.867913 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 11 00:25:19.872951 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 11 00:25:19.874520 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 00:25:19.876314 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 11 00:25:19.877796 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 00:25:19.883142 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:25:19.887683 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 11 00:25:19.890334 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 00:25:19.893558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:25:19.895063 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 11 00:25:19.897213 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 11 00:25:20.026615 systemd-journald[1191]: Time spent on flushing to /var/log/journal/6d04a309854c4a4aa19c144e5228ace0 is 19.486ms for 1077 entries.
Sep 11 00:25:20.026615 systemd-journald[1191]: System Journal (/var/log/journal/6d04a309854c4a4aa19c144e5228ace0) is 8M, max 195.6M, 187.6M free.
Sep 11 00:25:20.288354 systemd-journald[1191]: Received client request to flush runtime journal.
Sep 11 00:25:20.288445 kernel: loop0: detected capacity change from 0 to 128016
Sep 11 00:25:20.288486 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 11 00:25:20.288529 kernel: loop1: detected capacity change from 0 to 111000
Sep 11 00:25:20.288557 kernel: loop2: detected capacity change from 0 to 221472
Sep 11 00:25:20.042934 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 11 00:25:20.076914 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Sep 11 00:25:20.076930 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Sep 11 00:25:20.081129 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:25:20.095421 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 00:25:20.100247 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 11 00:25:20.165616 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 11 00:25:20.167367 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 11 00:25:20.171673 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 11 00:25:20.285986 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 11 00:25:20.290271 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:25:20.292562 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 11 00:25:20.455808 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Sep 11 00:25:20.455827 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Sep 11 00:25:20.460488 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:25:20.494537 kernel: loop3: detected capacity change from 0 to 128016
Sep 11 00:25:20.509565 kernel: loop4: detected capacity change from 0 to 111000
Sep 11 00:25:20.550541 kernel: loop5: detected capacity change from 0 to 221472
Sep 11 00:25:20.561870 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 11 00:25:20.562553 (sd-merge)[1272]: Merged extensions into '/usr'.
Sep 11 00:25:20.587824 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 11 00:25:20.588618 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 11 00:25:20.593169 systemd[1]: Reload requested from client PID 1245 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 11 00:25:20.593184 systemd[1]: Reloading...
Sep 11 00:25:20.733538 zram_generator::config[1302]: No configuration found.
Sep 11 00:25:20.761149 ldconfig[1230]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 11 00:25:20.959741 systemd[1]: Reloading finished in 366 ms.
Sep 11 00:25:20.980929 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 11 00:25:20.982721 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 11 00:25:21.011655 systemd[1]: Starting ensure-sysext.service...
Sep 11 00:25:21.014306 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 00:25:21.026634 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)...
Sep 11 00:25:21.026656 systemd[1]: Reloading...
Sep 11 00:25:21.043309 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 11 00:25:21.043796 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 11 00:25:21.044166 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 11 00:25:21.044547 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 11 00:25:21.045531 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 11 00:25:21.045921 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Sep 11 00:25:21.045998 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Sep 11 00:25:21.081543 zram_generator::config[1365]: No configuration found.
Sep 11 00:25:21.156521 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 00:25:21.156560 systemd-tmpfiles[1338]: Skipping /boot
Sep 11 00:25:21.171751 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 00:25:21.171775 systemd-tmpfiles[1338]: Skipping /boot
Sep 11 00:25:21.318052 systemd[1]: Reloading finished in 290 ms.
Sep 11 00:25:21.379932 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:25:21.380123 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:25:21.381483 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:25:21.383672 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:25:21.386294 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:25:21.387478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:25:21.387734 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:25:21.387946 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:25:21.390650 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:25:21.390898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:25:21.391119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:25:21.391269 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:25:21.391425 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:25:21.394551 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:25:21.394789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:25:21.397480 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:25:21.399380 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:25:21.399557 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:25:21.399765 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:25:21.401196 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:25:21.401501 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:25:21.403791 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:25:21.404007 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:25:21.405825 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:25:21.406042 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:25:21.407725 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 00:25:21.407940 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 00:25:21.428104 systemd[1]: Finished ensure-sysext.service.
Sep 11 00:25:21.431971 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 00:25:21.432032 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 00:25:21.526471 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:25:21.556951 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 00:25:21.578266 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 11 00:25:21.587540 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 11 00:25:21.603682 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 00:25:21.608719 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 11 00:25:21.615769 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 11 00:25:21.631308 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 11 00:25:21.636935 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:25:21.649639 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 11 00:25:21.660148 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 11 00:25:21.669924 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 11 00:25:21.677187 augenrules[1442]: No rules
Sep 11 00:25:21.678069 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 11 00:25:21.681623 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 00:25:21.682038 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 00:25:21.691634 systemd-udevd[1432]: Using default interface naming scheme 'v255'.
Sep 11 00:25:21.709371 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 11 00:25:21.719034 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 11 00:25:21.721631 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 11 00:25:21.725619 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:25:21.728141 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 11 00:25:21.733218 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 00:25:21.846182 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 00:25:21.854399 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 11 00:25:21.907583 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 11 00:25:21.932074 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 11 00:25:21.947975 systemd-networkd[1464]: lo: Link UP
Sep 11 00:25:21.947988 systemd-networkd[1464]: lo: Gained carrier
Sep 11 00:25:21.950331 systemd-networkd[1464]: Enumeration completed
Sep 11 00:25:21.950470 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 00:25:21.954785 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 11 00:25:21.957533 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:25:21.957568 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 00:25:21.958674 systemd-networkd[1464]: eth0: Link UP
Sep 11 00:25:21.958813 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 11 00:25:21.960081 systemd-networkd[1464]: eth0: Gained carrier
Sep 11 00:25:21.960105 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:25:22.011545 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 11 00:25:22.013549 kernel: mousedev: PS/2 mouse device common for all mice
Sep 11 00:25:22.012593 systemd-networkd[1464]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 11 00:25:22.033489 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 11 00:25:22.035536 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 11 00:25:22.037348 systemd[1]: Reached target time-set.target - System Time Set.
Sep 11 00:25:23.076913 systemd-timesyncd[1424]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 11 00:25:23.077305 systemd-timesyncd[1424]: Initial clock synchronization to Thu 2025-09-11 00:25:23.076737 UTC.
Sep 11 00:25:23.078101 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 11 00:25:23.078374 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 11 00:25:23.078581 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 11 00:25:23.081512 kernel: ACPI: button: Power Button [PWRF]
Sep 11 00:25:23.097048 systemd-resolved[1422]: Positive Trust Anchors:
Sep 11 00:25:23.097068 systemd-resolved[1422]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 00:25:23.097114 systemd-resolved[1422]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 00:25:23.103260 systemd-resolved[1422]: Defaulting to hostname 'linux'.
Sep 11 00:25:23.105382 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 00:25:23.107405 systemd[1]: Reached target network.target - Network.
Sep 11 00:25:23.108400 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:25:23.109801 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 00:25:23.111803 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 11 00:25:23.113301 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 11 00:25:23.114783 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 11 00:25:23.116782 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 11 00:25:23.118150 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 11 00:25:23.120571 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 11 00:25:23.122421 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 11 00:25:23.122495 systemd[1]: Reached target paths.target - Path Units.
Sep 11 00:25:23.124541 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 00:25:23.126771 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 11 00:25:23.131924 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 11 00:25:23.139801 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 11 00:25:23.142056 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 11 00:25:23.143866 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 11 00:25:23.156550 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 11 00:25:23.158356 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 11 00:25:23.160452 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 11 00:25:23.164091 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 00:25:23.166532 systemd[1]: Reached target basic.target - Basic System.
Sep 11 00:25:23.167615 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 11 00:25:23.167652 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 11 00:25:23.169691 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 11 00:25:23.173665 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 11 00:25:23.176672 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 11 00:25:23.179202 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 11 00:25:23.183131 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 11 00:25:23.186846 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 11 00:25:23.194613 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 11 00:25:23.197656 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 11 00:25:23.201647 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 11 00:25:23.206557 jq[1524]: false
Sep 11 00:25:23.209154 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 11 00:25:23.215026 extend-filesystems[1525]: Found /dev/vda6
Sep 11 00:25:23.215304 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 11 00:25:23.224418 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Refreshing passwd entry cache
Sep 11 00:25:23.216400 oslogin_cache_refresh[1526]: Refreshing passwd entry cache
Sep 11 00:25:23.222681 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 11 00:25:23.226631 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 11 00:25:23.228851 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 11 00:25:23.234447 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Failure getting users, quitting
Sep 11 00:25:23.234447 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 11 00:25:23.234447 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Refreshing group entry cache
Sep 11 00:25:23.230817 systemd[1]: Starting update-engine.service - Update Engine...
Sep 11 00:25:23.230709 oslogin_cache_refresh[1526]: Failure getting users, quitting
Sep 11 00:25:23.230743 oslogin_cache_refresh[1526]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 11 00:25:23.230807 oslogin_cache_refresh[1526]: Refreshing group entry cache
Sep 11 00:25:23.238335 extend-filesystems[1525]: Found /dev/vda9
Sep 11 00:25:23.240119 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Failure getting groups, quitting
Sep 11 00:25:23.240119 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 11 00:25:23.239713 oslogin_cache_refresh[1526]: Failure getting groups, quitting
Sep 11 00:25:23.239727 oslogin_cache_refresh[1526]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 11 00:25:23.242918 extend-filesystems[1525]: Checking size of /dev/vda9
Sep 11 00:25:23.264895 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 11 00:25:23.270005 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 11 00:25:23.271981 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 11 00:25:23.272248 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 11 00:25:23.272662 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 11 00:25:23.272955 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 11 00:25:23.274873 systemd[1]: motdgen.service: Deactivated successfully.
Sep 11 00:25:23.275130 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 11 00:25:23.278138 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 11 00:25:23.278404 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 11 00:25:23.286886 update_engine[1536]: I20250911 00:25:23.286788 1536 main.cc:92] Flatcar Update Engine starting
Sep 11 00:25:23.305231 jq[1537]: true
Sep 11 00:25:23.316053 tar[1548]: linux-amd64/helm
Sep 11 00:25:23.320345 (ntainerd)[1558]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 11 00:25:23.320465 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:25:23.323262 jq[1561]: true
Sep 11 00:25:23.374540 extend-filesystems[1525]: Resized partition /dev/vda9
Sep 11 00:25:23.423814 extend-filesystems[1582]: resize2fs 1.47.2 (1-Jan-2025)
Sep 11 00:25:23.428515 dbus-daemon[1520]: [system] SELinux support is enabled
Sep 11 00:25:23.429519 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 11 00:25:23.434046 update_engine[1536]: I20250911 00:25:23.433629 1536 update_check_scheduler.cc:74] Next update check in 5m40s
Sep 11 00:25:23.437659 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 11 00:25:23.437690 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 11 00:25:23.439111 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 11 00:25:23.439134 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 11 00:25:23.440391 systemd[1]: Started update-engine.service - Update Engine.
Sep 11 00:25:23.445734 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 11 00:25:23.471482 kernel: kvm_amd: TSC scaling supported
Sep 11 00:25:23.471554 kernel: kvm_amd: Nested Virtualization enabled
Sep 11 00:25:23.471568 kernel: kvm_amd: Nested Paging enabled
Sep 11 00:25:23.471580 kernel: kvm_amd: LBR virtualization supported
Sep 11 00:25:23.472610 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 11 00:25:23.472640 kernel: kvm_amd: Virtual GIF supported
Sep 11 00:25:23.552475 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 11 00:25:23.555112 systemd-logind[1535]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 11 00:25:23.555141 systemd-logind[1535]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 11 00:25:23.555407 systemd-logind[1535]: New seat seat0.
Sep 11 00:25:23.558371 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 11 00:25:23.730993 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 11 00:25:23.950684 sshd_keygen[1557]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 11 00:25:24.012491 kernel: EDAC MC: Ver: 3.0.0
Sep 11 00:25:24.042085 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 11 00:25:24.046357 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 11 00:25:24.048899 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:53276.service - OpenSSH per-connection server daemon (10.0.0.1:53276).
Sep 11 00:25:24.054317 tar[1548]: linux-amd64/LICENSE
Sep 11 00:25:24.054470 tar[1548]: linux-amd64/README.md
Sep 11 00:25:24.073157 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 11 00:25:24.099009 systemd[1]: issuegen.service: Deactivated successfully.
Sep 11 00:25:24.099340 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 11 00:25:24.101712 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 11 00:25:24.134470 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 11 00:25:24.137541 locksmithd[1586]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 11 00:25:24.139635 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 11 00:25:24.286509 containerd[1558]: time="2025-09-11T00:25:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 11 00:25:24.287030 sshd[1605]: Connection closed by authenticating user core 10.0.0.1 port 53276 [preauth]
Sep 11 00:25:24.142039 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 11 00:25:24.145716 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 11 00:25:24.145895 systemd[1]: Reached target getty.target - Login Prompts.
Sep 11 00:25:24.168345 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:25:24.287417 containerd[1558]: time="2025-09-11T00:25:24.287288371Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 11 00:25:24.207813 systemd[1]: sshd@0-10.0.0.132:22-10.0.0.1:53276.service: Deactivated successfully.
Sep 11 00:25:24.287931 extend-filesystems[1582]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 11 00:25:24.287931 extend-filesystems[1582]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 11 00:25:24.287931 extend-filesystems[1582]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 11 00:25:24.292688 extend-filesystems[1525]: Resized filesystem in /dev/vda9
Sep 11 00:25:24.291108 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 11 00:25:24.291419 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 11 00:25:24.299867 bash[1583]: Updated "/home/core/.ssh/authorized_keys" Sep 11 00:25:24.302075 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 11 00:25:24.304486 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 11 00:25:24.305833 containerd[1558]: time="2025-09-11T00:25:24.305759608Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.547µs" Sep 11 00:25:24.305833 containerd[1558]: time="2025-09-11T00:25:24.305816534Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 11 00:25:24.305961 containerd[1558]: time="2025-09-11T00:25:24.305846320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 11 00:25:24.306169 containerd[1558]: time="2025-09-11T00:25:24.306127658Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 11 00:25:24.306169 containerd[1558]: time="2025-09-11T00:25:24.306156512Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 11 00:25:24.306232 containerd[1558]: time="2025-09-11T00:25:24.306196888Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:25:24.306329 containerd[1558]: time="2025-09-11T00:25:24.306291165Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:25:24.306329 containerd[1558]: time="2025-09-11T00:25:24.306313016Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:25:24.306791 containerd[1558]: time="2025-09-11T00:25:24.306738874Z" level=info msg="skip loading plugin" error="path 
/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:25:24.306791 containerd[1558]: time="2025-09-11T00:25:24.306768269Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:25:24.306791 containerd[1558]: time="2025-09-11T00:25:24.306783408Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:25:24.306791 containerd[1558]: time="2025-09-11T00:25:24.306794288Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 11 00:25:24.306948 containerd[1558]: time="2025-09-11T00:25:24.306923330Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 11 00:25:24.307281 containerd[1558]: time="2025-09-11T00:25:24.307235255Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:25:24.307331 containerd[1558]: time="2025-09-11T00:25:24.307283195Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:25:24.307331 containerd[1558]: time="2025-09-11T00:25:24.307299155Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 11 00:25:24.307393 containerd[1558]: time="2025-09-11T00:25:24.307354108Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 11 00:25:24.308808 containerd[1558]: time="2025-09-11T00:25:24.307937913Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt 
type=io.containerd.metadata.v1 Sep 11 00:25:24.308808 containerd[1558]: time="2025-09-11T00:25:24.308265287Z" level=info msg="metadata content store policy set" policy=shared Sep 11 00:25:24.315745 containerd[1558]: time="2025-09-11T00:25:24.315652253Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 11 00:25:24.315940 containerd[1558]: time="2025-09-11T00:25:24.315768712Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 11 00:25:24.315940 containerd[1558]: time="2025-09-11T00:25:24.315799039Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 11 00:25:24.315940 containerd[1558]: time="2025-09-11T00:25:24.315816842Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 11 00:25:24.315940 containerd[1558]: time="2025-09-11T00:25:24.315833233Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 11 00:25:24.315940 containerd[1558]: time="2025-09-11T00:25:24.315848552Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 11 00:25:24.315940 containerd[1558]: time="2025-09-11T00:25:24.315865784Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 11 00:25:24.315940 containerd[1558]: time="2025-09-11T00:25:24.315884840Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 11 00:25:24.315940 containerd[1558]: time="2025-09-11T00:25:24.315899748Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 11 00:25:24.315940 containerd[1558]: time="2025-09-11T00:25:24.315912682Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 
Sep 11 00:25:24.315940 containerd[1558]: time="2025-09-11T00:25:24.315924314Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 11 00:25:24.315940 containerd[1558]: time="2025-09-11T00:25:24.315938009Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 11 00:25:24.316215 containerd[1558]: time="2025-09-11T00:25:24.316191114Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 00:25:24.316245 containerd[1558]: time="2025-09-11T00:25:24.316229256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 00:25:24.316278 containerd[1558]: time="2025-09-11T00:25:24.316250105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 00:25:24.316278 containerd[1558]: time="2025-09-11T00:25:24.316264181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 00:25:24.316278 containerd[1558]: time="2025-09-11T00:25:24.316277426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 00:25:24.316353 containerd[1558]: time="2025-09-11T00:25:24.316291913Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 00:25:24.316353 containerd[1558]: time="2025-09-11T00:25:24.316306310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 11 00:25:24.316353 containerd[1558]: time="2025-09-11T00:25:24.316317872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 00:25:24.316353 containerd[1558]: time="2025-09-11T00:25:24.316328782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 00:25:24.316353 containerd[1558]: time="2025-09-11T00:25:24.316350182Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 00:25:24.316536 containerd[1558]: time="2025-09-11T00:25:24.316367064Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 00:25:24.316536 containerd[1558]: time="2025-09-11T00:25:24.316494673Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 00:25:24.316536 containerd[1558]: time="2025-09-11T00:25:24.316516945Z" level=info msg="Start snapshots syncer" Sep 11 00:25:24.316621 containerd[1558]: time="2025-09-11T00:25:24.316559064Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 00:25:24.316925 containerd[1558]: time="2025-09-11T00:25:24.316851332Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"dis
ableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 11 00:25:24.317072 containerd[1558]: time="2025-09-11T00:25:24.316934599Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 00:25:24.317072 containerd[1558]: time="2025-09-11T00:25:24.317055485Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 00:25:24.317232 containerd[1558]: time="2025-09-11T00:25:24.317193334Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 00:25:24.317273 containerd[1558]: time="2025-09-11T00:25:24.317232347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 00:25:24.317273 containerd[1558]: time="2025-09-11T00:25:24.317249379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 00:25:24.317273 containerd[1558]: time="2025-09-11T00:25:24.317264357Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 00:25:24.317350 containerd[1558]: time="2025-09-11T00:25:24.317311736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 00:25:24.317350 containerd[1558]: time="2025-09-11T00:25:24.317330882Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 00:25:24.317401 containerd[1558]: time="2025-09-11T00:25:24.317352032Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 00:25:24.317401 containerd[1558]: time="2025-09-11T00:25:24.317384653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 00:25:24.317493 containerd[1558]: time="2025-09-11T00:25:24.317400372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 00:25:24.317493 containerd[1558]: time="2025-09-11T00:25:24.317416843Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 00:25:24.317543 containerd[1558]: time="2025-09-11T00:25:24.317505770Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:25:24.317543 containerd[1558]: time="2025-09-11T00:25:24.317527481Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:25:24.317543 containerd[1558]: time="2025-09-11T00:25:24.317540175Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:25:24.317615 containerd[1558]: time="2025-09-11T00:25:24.317553009Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:25:24.317615 containerd[1558]: time="2025-09-11T00:25:24.317564901Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 00:25:24.317615 containerd[1558]: time="2025-09-11T00:25:24.317577545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 
00:25:24.317615 containerd[1558]: time="2025-09-11T00:25:24.317595188Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 00:25:24.317713 containerd[1558]: time="2025-09-11T00:25:24.317633099Z" level=info msg="runtime interface created" Sep 11 00:25:24.317713 containerd[1558]: time="2025-09-11T00:25:24.317643138Z" level=info msg="created NRI interface" Sep 11 00:25:24.317713 containerd[1558]: time="2025-09-11T00:25:24.317666371Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 00:25:24.317713 containerd[1558]: time="2025-09-11T00:25:24.317685287Z" level=info msg="Connect containerd service" Sep 11 00:25:24.317810 containerd[1558]: time="2025-09-11T00:25:24.317715022Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 00:25:24.318881 containerd[1558]: time="2025-09-11T00:25:24.318832428Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:25:24.416232 containerd[1558]: time="2025-09-11T00:25:24.416096107Z" level=info msg="Start subscribing containerd event" Sep 11 00:25:24.416232 containerd[1558]: time="2025-09-11T00:25:24.416171479Z" level=info msg="Start recovering state" Sep 11 00:25:24.416402 containerd[1558]: time="2025-09-11T00:25:24.416360724Z" level=info msg="Start event monitor" Sep 11 00:25:24.416402 containerd[1558]: time="2025-09-11T00:25:24.416375852Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 11 00:25:24.416501 containerd[1558]: time="2025-09-11T00:25:24.416381332Z" level=info msg="Start cni network conf syncer for default" Sep 11 00:25:24.416501 containerd[1558]: time="2025-09-11T00:25:24.416462875Z" level=info msg="Start streaming server" Sep 11 00:25:24.416501 containerd[1558]: time="2025-09-11T00:25:24.416476070Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 00:25:24.416501 containerd[1558]: time="2025-09-11T00:25:24.416485397Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 11 00:25:24.416595 containerd[1558]: time="2025-09-11T00:25:24.416486069Z" level=info msg="runtime interface starting up..." Sep 11 00:25:24.416595 containerd[1558]: time="2025-09-11T00:25:24.416580506Z" level=info msg="starting plugins..." Sep 11 00:25:24.416660 containerd[1558]: time="2025-09-11T00:25:24.416603249Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 00:25:24.416817 containerd[1558]: time="2025-09-11T00:25:24.416779539Z" level=info msg="containerd successfully booted in 0.203984s" Sep 11 00:25:24.416957 systemd[1]: Started containerd.service - containerd container runtime. Sep 11 00:25:24.799880 systemd-networkd[1464]: eth0: Gained IPv6LL Sep 11 00:25:24.804780 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 00:25:24.807559 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 00:25:24.811278 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 11 00:25:24.815112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:25:24.854204 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 00:25:24.911054 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 11 00:25:24.911377 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
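The "failed to load cni during init" error containerd logs above is expected on first boot: no CNI network config exists yet in /etc/cni/net.d, and the CRI plugin retries once one appears. A minimal conflist of the kind a CNI installer later drops in might look like the sketch below; the network name, bridge name, and 10.244.0.0/16 range are illustrative assumptions, not values from this log:

```shell
# Write a hypothetical minimal CNI conflist to a temp dir and validate it.
# On a real node this would live in /etc/cni/net.d/, installed by the
# cluster's CNI plugin (e.g. via a kubeadm addon), not by hand.
dir=$(mktemp -d)
cat > "$dir/10-example.conflist" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[ { "subnet": "10.244.0.0/16" } ]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
python3 -m json.tool "$dir/10-example.conflist" > /dev/null && echo "valid JSON"
```

Once a valid conflist exists in the watched directory, the "Start cni network conf syncer for default" loop shown above picks it up without a containerd restart.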
Sep 11 00:25:24.913417 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 11 00:25:24.915701 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 11 00:25:26.475971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:25:26.478842 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 11 00:25:26.480446 systemd[1]: Startup finished in 4.030s (kernel) + 10.061s (initrd) + 6.794s (userspace) = 20.886s. Sep 11 00:25:26.496249 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:25:27.166469 kubelet[1670]: E0911 00:25:27.166270 1670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:25:27.171136 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:25:27.171472 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:25:27.172044 systemd[1]: kubelet.service: Consumed 2.071s CPU time, 266.2M memory peak. Sep 11 00:25:34.216157 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:46358.service - OpenSSH per-connection server daemon (10.0.0.1:46358). Sep 11 00:25:34.258547 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 46358 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:25:34.260286 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:25:34.266881 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 11 00:25:34.268063 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Sep 11 00:25:34.274261 systemd-logind[1535]: New session 1 of user core. Sep 11 00:25:34.289308 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 00:25:34.292742 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 11 00:25:34.311069 (systemd)[1689]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 00:25:34.313675 systemd-logind[1535]: New session c1 of user core. Sep 11 00:25:34.459967 systemd[1689]: Queued start job for default target default.target. Sep 11 00:25:34.468899 systemd[1689]: Created slice app.slice - User Application Slice. Sep 11 00:25:34.468930 systemd[1689]: Reached target paths.target - Paths. Sep 11 00:25:34.468977 systemd[1689]: Reached target timers.target - Timers. Sep 11 00:25:34.470625 systemd[1689]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 00:25:34.482530 systemd[1689]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 00:25:34.482691 systemd[1689]: Reached target sockets.target - Sockets. Sep 11 00:25:34.482745 systemd[1689]: Reached target basic.target - Basic System. Sep 11 00:25:34.482806 systemd[1689]: Reached target default.target - Main User Target. Sep 11 00:25:34.482848 systemd[1689]: Startup finished in 162ms. Sep 11 00:25:34.483258 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 00:25:34.485332 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 11 00:25:34.550103 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:46366.service - OpenSSH per-connection server daemon (10.0.0.1:46366). Sep 11 00:25:34.598498 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 46366 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:25:34.599890 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:25:34.604923 systemd-logind[1535]: New session 2 of user core. 
Sep 11 00:25:34.617715 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 11 00:25:34.672802 sshd[1703]: Connection closed by 10.0.0.1 port 46366 Sep 11 00:25:34.673369 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Sep 11 00:25:34.688099 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:46366.service: Deactivated successfully. Sep 11 00:25:34.690574 systemd[1]: session-2.scope: Deactivated successfully. Sep 11 00:25:34.691507 systemd-logind[1535]: Session 2 logged out. Waiting for processes to exit. Sep 11 00:25:34.695367 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:46372.service - OpenSSH per-connection server daemon (10.0.0.1:46372). Sep 11 00:25:34.696135 systemd-logind[1535]: Removed session 2. Sep 11 00:25:34.755043 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 46372 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:25:34.756823 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:25:34.762284 systemd-logind[1535]: New session 3 of user core. Sep 11 00:25:34.777630 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 00:25:34.828364 sshd[1712]: Connection closed by 10.0.0.1 port 46372 Sep 11 00:25:34.828808 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Sep 11 00:25:34.837298 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:46372.service: Deactivated successfully. Sep 11 00:25:34.839208 systemd[1]: session-3.scope: Deactivated successfully. Sep 11 00:25:34.840105 systemd-logind[1535]: Session 3 logged out. Waiting for processes to exit. Sep 11 00:25:34.842677 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:46378.service - OpenSSH per-connection server daemon (10.0.0.1:46378). Sep 11 00:25:34.843456 systemd-logind[1535]: Removed session 3. 
Sep 11 00:25:34.903138 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 46378 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:25:34.904978 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:25:34.909933 systemd-logind[1535]: New session 4 of user core. Sep 11 00:25:34.923656 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 11 00:25:34.979561 sshd[1721]: Connection closed by 10.0.0.1 port 46378 Sep 11 00:25:34.980031 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Sep 11 00:25:34.992697 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:46378.service: Deactivated successfully. Sep 11 00:25:34.994558 systemd[1]: session-4.scope: Deactivated successfully. Sep 11 00:25:34.995457 systemd-logind[1535]: Session 4 logged out. Waiting for processes to exit. Sep 11 00:25:34.998440 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:46388.service - OpenSSH per-connection server daemon (10.0.0.1:46388). Sep 11 00:25:34.999343 systemd-logind[1535]: Removed session 4. Sep 11 00:25:35.057907 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 46388 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:25:35.059805 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:25:35.066132 systemd-logind[1535]: New session 5 of user core. Sep 11 00:25:35.086603 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 11 00:25:35.149271 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 11 00:25:35.149738 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:25:35.170332 sudo[1731]: pam_unix(sudo:session): session closed for user root Sep 11 00:25:35.172269 sshd[1730]: Connection closed by 10.0.0.1 port 46388 Sep 11 00:25:35.172750 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Sep 11 00:25:35.182867 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:46388.service: Deactivated successfully. Sep 11 00:25:35.184903 systemd[1]: session-5.scope: Deactivated successfully. Sep 11 00:25:35.185766 systemd-logind[1535]: Session 5 logged out. Waiting for processes to exit. Sep 11 00:25:35.189496 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:46394.service - OpenSSH per-connection server daemon (10.0.0.1:46394). Sep 11 00:25:35.190304 systemd-logind[1535]: Removed session 5. Sep 11 00:25:35.250915 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 46394 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:25:35.252896 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:25:35.257800 systemd-logind[1535]: New session 6 of user core. Sep 11 00:25:35.278621 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 11 00:25:35.334037 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 11 00:25:35.334521 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:25:35.341788 sudo[1742]: pam_unix(sudo:session): session closed for user root Sep 11 00:25:35.350021 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 11 00:25:35.350464 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:25:35.361411 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:25:35.407437 augenrules[1764]: No rules Sep 11 00:25:35.409106 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:25:35.409383 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:25:35.410758 sudo[1741]: pam_unix(sudo:session): session closed for user root Sep 11 00:25:35.412331 sshd[1740]: Connection closed by 10.0.0.1 port 46394 Sep 11 00:25:35.412681 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Sep 11 00:25:35.423950 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:46394.service: Deactivated successfully. Sep 11 00:25:35.425585 systemd[1]: session-6.scope: Deactivated successfully. Sep 11 00:25:35.426344 systemd-logind[1535]: Session 6 logged out. Waiting for processes to exit. Sep 11 00:25:35.428670 systemd[1]: Started sshd@7-10.0.0.132:22-10.0.0.1:46404.service - OpenSSH per-connection server daemon (10.0.0.1:46404). Sep 11 00:25:35.429203 systemd-logind[1535]: Removed session 6. Sep 11 00:25:35.485289 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 46404 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:25:35.487225 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:25:35.491842 systemd-logind[1535]: New session 7 of user core. 
Sep 11 00:25:35.501685 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 11 00:25:35.555631 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 11 00:25:35.555977 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:25:35.862638 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 11 00:25:35.884927 (dockerd)[1798]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 11 00:25:36.229250 dockerd[1798]: time="2025-09-11T00:25:36.229201797Z" level=info msg="Starting up" Sep 11 00:25:36.230031 dockerd[1798]: time="2025-09-11T00:25:36.230008430Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 11 00:25:36.256372 dockerd[1798]: time="2025-09-11T00:25:36.256301228Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 11 00:25:36.316810 dockerd[1798]: time="2025-09-11T00:25:36.316733531Z" level=info msg="Loading containers: start." Sep 11 00:25:36.327637 kernel: Initializing XFRM netlink socket Sep 11 00:25:36.731716 systemd-networkd[1464]: docker0: Link UP Sep 11 00:25:36.737961 dockerd[1798]: time="2025-09-11T00:25:36.737914457Z" level=info msg="Loading containers: done." 
Sep 11 00:25:36.757248 dockerd[1798]: time="2025-09-11T00:25:36.757203197Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 11 00:25:36.757419 dockerd[1798]: time="2025-09-11T00:25:36.757284469Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 11 00:25:36.757486 dockerd[1798]: time="2025-09-11T00:25:36.757442936Z" level=info msg="Initializing buildkit" Sep 11 00:25:36.792886 dockerd[1798]: time="2025-09-11T00:25:36.792839430Z" level=info msg="Completed buildkit initialization" Sep 11 00:25:36.797108 dockerd[1798]: time="2025-09-11T00:25:36.797055037Z" level=info msg="Daemon has completed initialization" Sep 11 00:25:36.797183 dockerd[1798]: time="2025-09-11T00:25:36.797131621Z" level=info msg="API listen on /run/docker.sock" Sep 11 00:25:36.797360 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 11 00:25:37.229760 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 11 00:25:37.231826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:25:37.494928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:25:37.505761 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:25:37.549742 kubelet[2024]: E0911 00:25:37.549608 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:25:37.556780 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:25:37.557007 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:25:37.557438 systemd[1]: kubelet.service: Consumed 286ms CPU time, 111.1M memory peak. Sep 11 00:25:37.748939 containerd[1558]: time="2025-09-11T00:25:37.748795691Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 11 00:25:38.742452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2066985808.mount: Deactivated successfully. 
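The kubelet failure above (exit status 1, "open /var/lib/kubelet/config.yaml: no such file or directory", followed by a scheduled restart) is the normal pre-bootstrap state: the config file is only written by kubeadm during init/join, so systemd keeps restarting the unit until then. A quick hedged check of that precondition, assuming the standard kubeadm file layout:

```shell
# Check whether the kubelet config that kubeadm writes exists yet.
# Until it does, kubelet is expected to exit with exactly the run.go:72
# error seen in the log, and systemd will keep restarting it.
cfg=/var/lib/kubelet/config.yaml
if [ -f "$cfg" ]; then
  state="present"   # kubelet should start and stay up
else
  state="missing"   # pre-bootstrap: run kubeadm init/join to create it
fi
echo "kubelet config is $state"
```

The restart loop is harmless noise before bootstrap; it only indicates a real problem if it continues after kubeadm has run.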
Sep 11 00:25:39.641612 containerd[1558]: time="2025-09-11T00:25:39.641538355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:39.642542 containerd[1558]: time="2025-09-11T00:25:39.642477236Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 11 00:25:39.645440 containerd[1558]: time="2025-09-11T00:25:39.643748851Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:39.650318 containerd[1558]: time="2025-09-11T00:25:39.650274742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:39.651183 containerd[1558]: time="2025-09-11T00:25:39.651130297Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.902269203s" Sep 11 00:25:39.651183 containerd[1558]: time="2025-09-11T00:25:39.651181102Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 11 00:25:39.652008 containerd[1558]: time="2025-09-11T00:25:39.651911382Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 11 00:25:40.752417 containerd[1558]: time="2025-09-11T00:25:40.752365209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:40.753148 containerd[1558]: time="2025-09-11T00:25:40.753119353Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 11 00:25:40.754200 containerd[1558]: time="2025-09-11T00:25:40.754154675Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:40.756732 containerd[1558]: time="2025-09-11T00:25:40.756697815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:40.757468 containerd[1558]: time="2025-09-11T00:25:40.757439757Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.105429891s" Sep 11 00:25:40.757468 containerd[1558]: time="2025-09-11T00:25:40.757466607Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 11 00:25:40.758042 containerd[1558]: time="2025-09-11T00:25:40.757976664Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 11 00:25:42.728113 containerd[1558]: time="2025-09-11T00:25:42.727756588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:42.740823 containerd[1558]: 
time="2025-09-11T00:25:42.740747896Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 11 00:25:42.742232 containerd[1558]: time="2025-09-11T00:25:42.742183810Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:42.745651 containerd[1558]: time="2025-09-11T00:25:42.745601039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:42.747671 containerd[1558]: time="2025-09-11T00:25:42.746869549Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.988824606s" Sep 11 00:25:42.747671 containerd[1558]: time="2025-09-11T00:25:42.746904524Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 11 00:25:42.749453 containerd[1558]: time="2025-09-11T00:25:42.749391098Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 11 00:25:43.739772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount695448952.mount: Deactivated successfully. 
Sep 11 00:25:44.736684 containerd[1558]: time="2025-09-11T00:25:44.736606717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:44.738457 containerd[1558]: time="2025-09-11T00:25:44.738379763Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 11 00:25:44.740238 containerd[1558]: time="2025-09-11T00:25:44.740186021Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:44.744229 containerd[1558]: time="2025-09-11T00:25:44.744154775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:44.744597 containerd[1558]: time="2025-09-11T00:25:44.744532063Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.995066965s" Sep 11 00:25:44.744597 containerd[1558]: time="2025-09-11T00:25:44.744587787Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 11 00:25:44.745373 containerd[1558]: time="2025-09-11T00:25:44.745131557Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 11 00:25:45.314725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683370029.mount: Deactivated successfully. 
Sep 11 00:25:46.016241 containerd[1558]: time="2025-09-11T00:25:46.016166876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:46.017103 containerd[1558]: time="2025-09-11T00:25:46.017040204Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 11 00:25:46.018344 containerd[1558]: time="2025-09-11T00:25:46.018305007Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:46.020928 containerd[1558]: time="2025-09-11T00:25:46.020864317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:46.021876 containerd[1558]: time="2025-09-11T00:25:46.021831351Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.276669577s" Sep 11 00:25:46.021876 containerd[1558]: time="2025-09-11T00:25:46.021874762Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 11 00:25:46.022655 containerd[1558]: time="2025-09-11T00:25:46.022606084Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 11 00:25:46.472044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount390982076.mount: Deactivated successfully. 
Sep 11 00:25:46.478588 containerd[1558]: time="2025-09-11T00:25:46.478534435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:25:46.479293 containerd[1558]: time="2025-09-11T00:25:46.479237705Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 11 00:25:46.480685 containerd[1558]: time="2025-09-11T00:25:46.480632571Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:25:46.483145 containerd[1558]: time="2025-09-11T00:25:46.483090541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:25:46.483920 containerd[1558]: time="2025-09-11T00:25:46.483867529Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 461.215629ms" Sep 11 00:25:46.483920 containerd[1558]: time="2025-09-11T00:25:46.483909678Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 11 00:25:46.484965 containerd[1558]: time="2025-09-11T00:25:46.484611384Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 11 00:25:47.693304 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 11 00:25:47.694747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:25:47.705392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2126271345.mount: Deactivated successfully. Sep 11 00:25:47.978875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:25:48.005291 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:25:48.639628 kubelet[2177]: E0911 00:25:48.639488 2177 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:25:48.644241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:25:48.644487 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:25:48.645236 systemd[1]: kubelet.service: Consumed 866ms CPU time, 110.6M memory peak. 
Sep 11 00:25:51.486063 containerd[1558]: time="2025-09-11T00:25:51.485974331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:51.486854 containerd[1558]: time="2025-09-11T00:25:51.486774212Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 11 00:25:51.488095 containerd[1558]: time="2025-09-11T00:25:51.488050065Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:51.491456 containerd[1558]: time="2025-09-11T00:25:51.491389569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:25:51.492656 containerd[1558]: time="2025-09-11T00:25:51.492607413Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.007963048s" Sep 11 00:25:51.492656 containerd[1558]: time="2025-09-11T00:25:51.492653389Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 11 00:25:53.781266 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:25:53.781507 systemd[1]: kubelet.service: Consumed 866ms CPU time, 110.6M memory peak. Sep 11 00:25:53.784102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:25:53.814384 systemd[1]: Reload requested from client PID 2266 ('systemctl') (unit session-7.scope)... 
Sep 11 00:25:53.814411 systemd[1]: Reloading... Sep 11 00:25:53.902476 zram_generator::config[2309]: No configuration found. Sep 11 00:25:54.262477 systemd[1]: Reloading finished in 447 ms. Sep 11 00:25:54.348554 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 11 00:25:54.348701 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 11 00:25:54.349092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:25:54.349152 systemd[1]: kubelet.service: Consumed 194ms CPU time, 98.2M memory peak. Sep 11 00:25:54.351272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:25:54.566686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:25:54.586984 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:25:54.630394 kubelet[2357]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:25:54.630394 kubelet[2357]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 11 00:25:54.630394 kubelet[2357]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 11 00:25:54.630864 kubelet[2357]: I0911 00:25:54.630497 2357 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:25:54.823010 kubelet[2357]: I0911 00:25:54.822854 2357 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 11 00:25:54.823010 kubelet[2357]: I0911 00:25:54.822897 2357 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:25:54.824885 kubelet[2357]: I0911 00:25:54.823646 2357 server.go:934] "Client rotation is on, will bootstrap in background" Sep 11 00:25:54.845725 kubelet[2357]: E0911 00:25:54.845643 2357 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:25:54.848221 kubelet[2357]: I0911 00:25:54.848163 2357 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:25:54.855135 kubelet[2357]: I0911 00:25:54.855106 2357 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:25:54.861540 kubelet[2357]: I0911 00:25:54.861510 2357 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:25:54.862114 kubelet[2357]: I0911 00:25:54.862084 2357 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 11 00:25:54.862265 kubelet[2357]: I0911 00:25:54.862225 2357 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:25:54.862482 kubelet[2357]: I0911 00:25:54.862254 2357 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 11 00:25:54.862665 kubelet[2357]: I0911 00:25:54.862493 2357 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:25:54.862665 kubelet[2357]: I0911 00:25:54.862503 2357 container_manager_linux.go:300] "Creating device plugin manager" Sep 11 00:25:54.862665 kubelet[2357]: I0911 00:25:54.862637 2357 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:25:54.864511 kubelet[2357]: I0911 00:25:54.864490 2357 kubelet.go:408] "Attempting to sync node with API server" Sep 11 00:25:54.864566 kubelet[2357]: I0911 00:25:54.864516 2357 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:25:54.864566 kubelet[2357]: I0911 00:25:54.864562 2357 kubelet.go:314] "Adding apiserver pod source" Sep 11 00:25:54.864627 kubelet[2357]: I0911 00:25:54.864593 2357 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:25:54.869236 kubelet[2357]: I0911 00:25:54.868748 2357 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 11 00:25:54.869236 kubelet[2357]: W0911 00:25:54.868879 2357 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 11 00:25:54.869236 kubelet[2357]: E0911 00:25:54.868952 2357 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:25:54.869236 kubelet[2357]: W0911 00:25:54.869126 2357 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 11 00:25:54.869236 kubelet[2357]: E0911 00:25:54.869173 2357 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:25:54.869519 kubelet[2357]: I0911 00:25:54.869491 2357 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:25:54.870240 kubelet[2357]: W0911 00:25:54.870209 2357 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 11 00:25:54.872994 kubelet[2357]: I0911 00:25:54.872967 2357 server.go:1274] "Started kubelet" Sep 11 00:25:54.873379 kubelet[2357]: I0911 00:25:54.873331 2357 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:25:54.873842 kubelet[2357]: I0911 00:25:54.873825 2357 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:25:54.874009 kubelet[2357]: I0911 00:25:54.873972 2357 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:25:54.874845 kubelet[2357]: I0911 00:25:54.874822 2357 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:25:54.875187 kubelet[2357]: I0911 00:25:54.875170 2357 server.go:449] "Adding debug handlers to kubelet server" Sep 11 00:25:54.878711 kubelet[2357]: I0911 00:25:54.878236 2357 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:25:54.880990 kubelet[2357]: 
I0911 00:25:54.880946 2357 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 11 00:25:54.881370 kubelet[2357]: I0911 00:25:54.881348 2357 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 11 00:25:54.881479 kubelet[2357]: I0911 00:25:54.881413 2357 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:25:54.881621 kubelet[2357]: E0911 00:25:54.881575 2357 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:25:54.881874 kubelet[2357]: I0911 00:25:54.881829 2357 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:25:54.881923 kubelet[2357]: W0911 00:25:54.881868 2357 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 11 00:25:54.881962 kubelet[2357]: E0911 00:25:54.881922 2357 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:25:54.881962 kubelet[2357]: I0911 00:25:54.881929 2357 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:25:54.882213 kubelet[2357]: E0911 00:25:54.882166 2357 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:25:54.882412 kubelet[2357]: E0911 00:25:54.882358 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms" Sep 11 00:25:54.883447 kubelet[2357]: E0911 00:25:54.881883 2357 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186412bc6110180f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:25:54.872932367 +0000 UTC m=+0.281455732,LastTimestamp:2025-09-11 00:25:54.872932367 +0000 UTC m=+0.281455732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 00:25:54.883447 kubelet[2357]: I0911 00:25:54.883212 2357 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:25:54.899335 kubelet[2357]: I0911 00:25:54.899294 2357 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:25:54.901401 kubelet[2357]: I0911 00:25:54.901317 2357 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 11 00:25:54.901492 kubelet[2357]: I0911 00:25:54.901403 2357 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 11 00:25:54.901492 kubelet[2357]: I0911 00:25:54.901379 2357 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 11 00:25:54.901492 kubelet[2357]: I0911 00:25:54.901474 2357 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 11 00:25:54.901492 kubelet[2357]: I0911 00:25:54.901476 2357 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:25:54.901633 kubelet[2357]: I0911 00:25:54.901504 2357 kubelet.go:2321] "Starting kubelet main sync loop" Sep 11 00:25:54.901633 kubelet[2357]: E0911 00:25:54.901545 2357 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:25:54.903027 kubelet[2357]: W0911 00:25:54.902965 2357 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 11 00:25:54.903101 kubelet[2357]: E0911 00:25:54.903032 2357 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:25:54.982779 kubelet[2357]: E0911 00:25:54.982694 2357 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:25:55.001944 kubelet[2357]: E0911 00:25:55.001880 2357 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 00:25:55.083408 kubelet[2357]: E0911 00:25:55.083248 2357 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:25:55.083757 kubelet[2357]: E0911 00:25:55.083711 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="400ms" Sep 11 00:25:55.183723 kubelet[2357]: E0911 00:25:55.183646 2357 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:25:55.202835 kubelet[2357]: E0911 00:25:55.202797 2357 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 00:25:55.284448 kubelet[2357]: E0911 00:25:55.284340 2357 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:25:55.385686 kubelet[2357]: E0911 00:25:55.385502 2357 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:25:55.484554 kubelet[2357]: E0911 00:25:55.484496 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms" Sep 11 00:25:55.486765 kubelet[2357]: E0911 00:25:55.486465 2357 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:25:55.587234 kubelet[2357]: E0911 00:25:55.587161 2357 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:25:55.603416 kubelet[2357]: E0911 00:25:55.603355 2357 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 00:25:55.688155 kubelet[2357]: E0911 00:25:55.688011 2357 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:25:55.782284 kubelet[2357]: I0911 00:25:55.782209 2357 policy_none.go:49] "None policy: 
Start" Sep 11 00:25:55.783014 kubelet[2357]: I0911 00:25:55.782988 2357 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 11 00:25:55.783071 kubelet[2357]: I0911 00:25:55.783018 2357 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:25:55.788409 kubelet[2357]: E0911 00:25:55.788383 2357 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:25:55.817441 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 11 00:25:55.834755 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 00:25:55.838020 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 11 00:25:55.858552 kubelet[2357]: I0911 00:25:55.858497 2357 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:25:55.858740 kubelet[2357]: I0911 00:25:55.858724 2357 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:25:55.858917 kubelet[2357]: I0911 00:25:55.858742 2357 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:25:55.859008 kubelet[2357]: I0911 00:25:55.858921 2357 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:25:55.860571 kubelet[2357]: E0911 00:25:55.860528 2357 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 00:25:55.927583 kubelet[2357]: W0911 00:25:55.927517 2357 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 11 00:25:55.927583 kubelet[2357]: E0911 00:25:55.927568 2357 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:25:55.959782 kubelet[2357]: I0911 00:25:55.959674 2357 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:25:55.959917 kubelet[2357]: E0911 00:25:55.959885 2357 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Sep 11 00:25:56.014103 kubelet[2357]: W0911 00:25:56.014052 2357 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 11 00:25:56.014154 kubelet[2357]: E0911 00:25:56.014104 2357 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:25:56.161313 kubelet[2357]: W0911 00:25:56.161224 2357 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 11 00:25:56.161537 kubelet[2357]: E0911 00:25:56.161317 2357 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:25:56.161807 kubelet[2357]: I0911 00:25:56.161788 2357 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:25:56.162041 kubelet[2357]: E0911 00:25:56.162005 2357 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Sep 11 00:25:56.285354 kubelet[2357]: E0911 00:25:56.285288 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="1.6s" Sep 11 00:25:56.412113 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. 
Sep 11 00:25:56.423049 kubelet[2357]: W0911 00:25:56.423001 2357 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 11 00:25:56.423111 kubelet[2357]: E0911 00:25:56.423061 2357 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:25:56.428107 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 11 00:25:56.446072 systemd[1]: Created slice kubepods-burstable-podd55e9986a1548a6172b862efc4ee977f.slice - libcontainer container kubepods-burstable-podd55e9986a1548a6172b862efc4ee977f.slice. 
Sep 11 00:25:56.491049 kubelet[2357]: I0911 00:25:56.490997 2357 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:25:56.491049 kubelet[2357]: I0911 00:25:56.491042 2357 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:25:56.491204 kubelet[2357]: I0911 00:25:56.491063 2357 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:25:56.491204 kubelet[2357]: I0911 00:25:56.491083 2357 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:25:56.491204 kubelet[2357]: I0911 00:25:56.491105 2357 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d55e9986a1548a6172b862efc4ee977f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d55e9986a1548a6172b862efc4ee977f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:25:56.491204 
kubelet[2357]: I0911 00:25:56.491150 2357 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d55e9986a1548a6172b862efc4ee977f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d55e9986a1548a6172b862efc4ee977f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:25:56.491204 kubelet[2357]: I0911 00:25:56.491174 2357 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d55e9986a1548a6172b862efc4ee977f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d55e9986a1548a6172b862efc4ee977f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:25:56.491324 kubelet[2357]: I0911 00:25:56.491195 2357 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:25:56.491324 kubelet[2357]: I0911 00:25:56.491213 2357 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:25:56.563564 kubelet[2357]: I0911 00:25:56.563446 2357 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:25:56.563866 kubelet[2357]: E0911 00:25:56.563838 2357 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Sep 
11 00:25:56.727058 kubelet[2357]: E0911 00:25:56.726995 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:25:56.727748 containerd[1558]: time="2025-09-11T00:25:56.727716640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 11 00:25:56.744104 kubelet[2357]: E0911 00:25:56.744051 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:25:56.745724 containerd[1558]: time="2025-09-11T00:25:56.745634255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 11 00:25:56.748701 containerd[1558]: time="2025-09-11T00:25:56.748666200Z" level=info msg="connecting to shim e1c5e87aef2190cfb1082b9ef7e9cf2142ad39dc4c76843eeac670854845ba1e" address="unix:///run/containerd/s/26903d333516ecf83d5111b3df2fb140abf71dae3802f5db80ebb44f69f7c2aa" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:25:56.749175 kubelet[2357]: E0911 00:25:56.749129 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:25:56.749836 containerd[1558]: time="2025-09-11T00:25:56.749729816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d55e9986a1548a6172b862efc4ee977f,Namespace:kube-system,Attempt:0,}" Sep 11 00:25:56.780244 containerd[1558]: time="2025-09-11T00:25:56.780193432Z" level=info msg="connecting to shim 91bcd77c84709bc8d3a108789753d41f1e7c62d8f7b2a67d8ad78971c4061510" 
address="unix:///run/containerd/s/019f913dc6a0bd0cf93a4bac449af921dc03b5fe475a63c69ad69889709c044b" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:25:56.782676 systemd[1]: Started cri-containerd-e1c5e87aef2190cfb1082b9ef7e9cf2142ad39dc4c76843eeac670854845ba1e.scope - libcontainer container e1c5e87aef2190cfb1082b9ef7e9cf2142ad39dc4c76843eeac670854845ba1e. Sep 11 00:25:56.786952 containerd[1558]: time="2025-09-11T00:25:56.786732013Z" level=info msg="connecting to shim 5bd90aa840af87a826748c8e0000eb028c02295089bdec682374671b790950f3" address="unix:///run/containerd/s/62f7eaf4622ba4a44a256307cd8e1a853a9191646169afeeb1325f2ff3a96447" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:25:56.810609 systemd[1]: Started cri-containerd-91bcd77c84709bc8d3a108789753d41f1e7c62d8f7b2a67d8ad78971c4061510.scope - libcontainer container 91bcd77c84709bc8d3a108789753d41f1e7c62d8f7b2a67d8ad78971c4061510. Sep 11 00:25:56.816699 systemd[1]: Started cri-containerd-5bd90aa840af87a826748c8e0000eb028c02295089bdec682374671b790950f3.scope - libcontainer container 5bd90aa840af87a826748c8e0000eb028c02295089bdec682374671b790950f3. 
Sep 11 00:25:56.876094 kubelet[2357]: E0911 00:25:56.876040 2357 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:25:56.948175 containerd[1558]: time="2025-09-11T00:25:56.948122862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"91bcd77c84709bc8d3a108789753d41f1e7c62d8f7b2a67d8ad78971c4061510\"" Sep 11 00:25:56.949086 kubelet[2357]: E0911 00:25:56.949048 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:25:56.950785 containerd[1558]: time="2025-09-11T00:25:56.950740679Z" level=info msg="CreateContainer within sandbox \"91bcd77c84709bc8d3a108789753d41f1e7c62d8f7b2a67d8ad78971c4061510\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 00:25:56.952556 containerd[1558]: time="2025-09-11T00:25:56.952522798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1c5e87aef2190cfb1082b9ef7e9cf2142ad39dc4c76843eeac670854845ba1e\"" Sep 11 00:25:56.953004 kubelet[2357]: E0911 00:25:56.952981 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:25:56.953746 containerd[1558]: time="2025-09-11T00:25:56.953702057Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d55e9986a1548a6172b862efc4ee977f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bd90aa840af87a826748c8e0000eb028c02295089bdec682374671b790950f3\"" Sep 11 00:25:56.954729 containerd[1558]: time="2025-09-11T00:25:56.954355755Z" level=info msg="CreateContainer within sandbox \"e1c5e87aef2190cfb1082b9ef7e9cf2142ad39dc4c76843eeac670854845ba1e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 00:25:56.954776 kubelet[2357]: E0911 00:25:56.954484 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:25:56.956265 containerd[1558]: time="2025-09-11T00:25:56.956235151Z" level=info msg="CreateContainer within sandbox \"5bd90aa840af87a826748c8e0000eb028c02295089bdec682374671b790950f3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 00:25:56.963819 containerd[1558]: time="2025-09-11T00:25:56.963780099Z" level=info msg="Container 2c3745a37e859ac983a2699cf8b7389732a263e02b2424a065561cbd7833575b: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:25:56.969198 containerd[1558]: time="2025-09-11T00:25:56.969141615Z" level=info msg="Container ef72aa9552be3971324148d19ea0428e8c10330eb268c9cf81d8a1711d93738c: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:25:56.973789 containerd[1558]: time="2025-09-11T00:25:56.973747818Z" level=info msg="CreateContainer within sandbox \"91bcd77c84709bc8d3a108789753d41f1e7c62d8f7b2a67d8ad78971c4061510\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2c3745a37e859ac983a2699cf8b7389732a263e02b2424a065561cbd7833575b\"" Sep 11 00:25:56.975795 containerd[1558]: time="2025-09-11T00:25:56.975315044Z" level=info msg="StartContainer for \"2c3745a37e859ac983a2699cf8b7389732a263e02b2424a065561cbd7833575b\"" Sep 11 00:25:56.976315 containerd[1558]: 
time="2025-09-11T00:25:56.976164508Z" level=info msg="Container 48773dd90838c9e4b021b65653c9e828790f47fb3f070d423eb2ba1b621029d9: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:25:56.977447 containerd[1558]: time="2025-09-11T00:25:56.977397120Z" level=info msg="connecting to shim 2c3745a37e859ac983a2699cf8b7389732a263e02b2424a065561cbd7833575b" address="unix:///run/containerd/s/019f913dc6a0bd0cf93a4bac449af921dc03b5fe475a63c69ad69889709c044b" protocol=ttrpc version=3 Sep 11 00:25:56.986109 containerd[1558]: time="2025-09-11T00:25:56.986042505Z" level=info msg="CreateContainer within sandbox \"e1c5e87aef2190cfb1082b9ef7e9cf2142ad39dc4c76843eeac670854845ba1e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ef72aa9552be3971324148d19ea0428e8c10330eb268c9cf81d8a1711d93738c\"" Sep 11 00:25:56.986749 containerd[1558]: time="2025-09-11T00:25:56.986707173Z" level=info msg="StartContainer for \"ef72aa9552be3971324148d19ea0428e8c10330eb268c9cf81d8a1711d93738c\"" Sep 11 00:25:56.987780 containerd[1558]: time="2025-09-11T00:25:56.987755170Z" level=info msg="connecting to shim ef72aa9552be3971324148d19ea0428e8c10330eb268c9cf81d8a1711d93738c" address="unix:///run/containerd/s/26903d333516ecf83d5111b3df2fb140abf71dae3802f5db80ebb44f69f7c2aa" protocol=ttrpc version=3 Sep 11 00:25:56.988558 containerd[1558]: time="2025-09-11T00:25:56.988525653Z" level=info msg="CreateContainer within sandbox \"5bd90aa840af87a826748c8e0000eb028c02295089bdec682374671b790950f3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"48773dd90838c9e4b021b65653c9e828790f47fb3f070d423eb2ba1b621029d9\"" Sep 11 00:25:56.990202 containerd[1558]: time="2025-09-11T00:25:56.989287389Z" level=info msg="StartContainer for \"48773dd90838c9e4b021b65653c9e828790f47fb3f070d423eb2ba1b621029d9\"" Sep 11 00:25:56.990202 containerd[1558]: time="2025-09-11T00:25:56.990146651Z" level=info msg="connecting to shim 
48773dd90838c9e4b021b65653c9e828790f47fb3f070d423eb2ba1b621029d9" address="unix:///run/containerd/s/62f7eaf4622ba4a44a256307cd8e1a853a9191646169afeeb1325f2ff3a96447" protocol=ttrpc version=3 Sep 11 00:25:57.001626 systemd[1]: Started cri-containerd-2c3745a37e859ac983a2699cf8b7389732a263e02b2424a065561cbd7833575b.scope - libcontainer container 2c3745a37e859ac983a2699cf8b7389732a263e02b2424a065561cbd7833575b. Sep 11 00:25:57.018667 systemd[1]: Started cri-containerd-ef72aa9552be3971324148d19ea0428e8c10330eb268c9cf81d8a1711d93738c.scope - libcontainer container ef72aa9552be3971324148d19ea0428e8c10330eb268c9cf81d8a1711d93738c. Sep 11 00:25:57.026567 systemd[1]: Started cri-containerd-48773dd90838c9e4b021b65653c9e828790f47fb3f070d423eb2ba1b621029d9.scope - libcontainer container 48773dd90838c9e4b021b65653c9e828790f47fb3f070d423eb2ba1b621029d9. Sep 11 00:25:57.084017 containerd[1558]: time="2025-09-11T00:25:57.083735111Z" level=info msg="StartContainer for \"2c3745a37e859ac983a2699cf8b7389732a263e02b2424a065561cbd7833575b\" returns successfully" Sep 11 00:25:57.084779 containerd[1558]: time="2025-09-11T00:25:57.084309054Z" level=info msg="StartContainer for \"ef72aa9552be3971324148d19ea0428e8c10330eb268c9cf81d8a1711d93738c\" returns successfully" Sep 11 00:25:57.092457 kubelet[2357]: E0911 00:25:57.092250 2357 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186412bc6110180f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:25:54.872932367 +0000 UTC m=+0.281455732,LastTimestamp:2025-09-11 00:25:54.872932367 +0000 UTC 
m=+0.281455732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 00:25:57.097249 containerd[1558]: time="2025-09-11T00:25:57.097218364Z" level=info msg="StartContainer for \"48773dd90838c9e4b021b65653c9e828790f47fb3f070d423eb2ba1b621029d9\" returns successfully" Sep 11 00:25:57.368273 kubelet[2357]: I0911 00:25:57.368153 2357 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:25:57.913965 kubelet[2357]: E0911 00:25:57.913844 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:25:57.917886 kubelet[2357]: E0911 00:25:57.917832 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:25:57.920915 kubelet[2357]: E0911 00:25:57.920887 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:25:58.038336 kubelet[2357]: E0911 00:25:58.038275 2357 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 11 00:25:58.121562 kubelet[2357]: I0911 00:25:58.121515 2357 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 11 00:25:58.867142 kubelet[2357]: I0911 00:25:58.867097 2357 apiserver.go:52] "Watching apiserver" Sep 11 00:25:58.881812 kubelet[2357]: I0911 00:25:58.881758 2357 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 11 00:25:58.926200 kubelet[2357]: E0911 00:25:58.926155 2357 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 11 00:25:58.926675 kubelet[2357]: E0911 00:25:58.926336 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:01.228412 systemd[1]: Reload requested from client PID 2638 ('systemctl') (unit session-7.scope)... Sep 11 00:26:01.228823 systemd[1]: Reloading... Sep 11 00:26:01.314475 zram_generator::config[2684]: No configuration found. Sep 11 00:26:01.539887 systemd[1]: Reloading finished in 310 ms. Sep 11 00:26:01.574546 kubelet[2357]: I0911 00:26:01.574481 2357 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:26:01.574558 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:26:01.597915 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 00:26:01.598216 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:26:01.598274 systemd[1]: kubelet.service: Consumed 792ms CPU time, 130.8M memory peak. Sep 11 00:26:01.600465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:26:01.802983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:26:01.807133 (kubelet)[2726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:26:01.851852 kubelet[2726]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:26:01.851852 kubelet[2726]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Sep 11 00:26:01.851852 kubelet[2726]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:26:01.852370 kubelet[2726]: I0911 00:26:01.851893 2726 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:26:01.860853 kubelet[2726]: I0911 00:26:01.860721 2726 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 11 00:26:01.860853 kubelet[2726]: I0911 00:26:01.860754 2726 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:26:01.861054 kubelet[2726]: I0911 00:26:01.861032 2726 server.go:934] "Client rotation is on, will bootstrap in background" Sep 11 00:26:01.862343 kubelet[2726]: I0911 00:26:01.862317 2726 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 11 00:26:01.865434 kubelet[2726]: I0911 00:26:01.865264 2726 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:26:01.870663 kubelet[2726]: I0911 00:26:01.870622 2726 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:26:01.877299 kubelet[2726]: I0911 00:26:01.877270 2726 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:26:01.877412 kubelet[2726]: I0911 00:26:01.877380 2726 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 11 00:26:01.877551 kubelet[2726]: I0911 00:26:01.877522 2726 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:26:01.877881 kubelet[2726]: I0911 00:26:01.877553 2726 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 11 00:26:01.877881 kubelet[2726]: I0911 00:26:01.877795 2726 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:26:01.877881 kubelet[2726]: I0911 00:26:01.877804 2726 container_manager_linux.go:300] "Creating device plugin manager" Sep 11 00:26:01.877881 kubelet[2726]: I0911 00:26:01.877829 2726 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:26:01.878037 kubelet[2726]: I0911 00:26:01.877935 2726 kubelet.go:408] "Attempting to sync node with API server" Sep 11 00:26:01.878037 kubelet[2726]: I0911 00:26:01.877947 2726 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:26:01.878037 kubelet[2726]: I0911 00:26:01.877977 2726 kubelet.go:314] "Adding apiserver pod source" Sep 11 00:26:01.878037 kubelet[2726]: I0911 00:26:01.877987 2726 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:26:01.880030 kubelet[2726]: I0911 00:26:01.879874 2726 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 11 00:26:01.880828 kubelet[2726]: I0911 00:26:01.880483 2726 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:26:01.881041 kubelet[2726]: I0911 00:26:01.880978 2726 server.go:1274] "Started kubelet" Sep 11 00:26:01.882252 kubelet[2726]: I0911 00:26:01.881696 2726 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:26:01.882252 kubelet[2726]: I0911 00:26:01.882092 2726 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:26:01.883641 kubelet[2726]: I0911 00:26:01.882960 2726 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:26:01.884870 kubelet[2726]: I0911 00:26:01.884564 2726 server.go:449] "Adding debug handlers to kubelet server" Sep 11 00:26:01.886301 
kubelet[2726]: I0911 00:26:01.885234 2726 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:26:01.886829 kubelet[2726]: I0911 00:26:01.886802 2726 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 11 00:26:01.886924 kubelet[2726]: I0911 00:26:01.886899 2726 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 11 00:26:01.887983 kubelet[2726]: I0911 00:26:01.887013 2726 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:26:01.887983 kubelet[2726]: E0911 00:26:01.887440 2726 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:26:01.887983 kubelet[2726]: I0911 00:26:01.887756 2726 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:26:01.894923 kubelet[2726]: I0911 00:26:01.893573 2726 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:26:01.894923 kubelet[2726]: I0911 00:26:01.893663 2726 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:26:01.895089 kubelet[2726]: I0911 00:26:01.895047 2726 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:26:01.899125 kubelet[2726]: E0911 00:26:01.897910 2726 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:26:01.913465 kubelet[2726]: I0911 00:26:01.913370 2726 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:26:01.914918 kubelet[2726]: I0911 00:26:01.914891 2726 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 11 00:26:01.914918 kubelet[2726]: I0911 00:26:01.914915 2726 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 11 00:26:01.915000 kubelet[2726]: I0911 00:26:01.914935 2726 kubelet.go:2321] "Starting kubelet main sync loop" Sep 11 00:26:01.915000 kubelet[2726]: E0911 00:26:01.914978 2726 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:26:01.937195 kubelet[2726]: I0911 00:26:01.937158 2726 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 11 00:26:01.937195 kubelet[2726]: I0911 00:26:01.937182 2726 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 11 00:26:01.937195 kubelet[2726]: I0911 00:26:01.937203 2726 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:26:01.937365 kubelet[2726]: I0911 00:26:01.937353 2726 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 00:26:01.937389 kubelet[2726]: I0911 00:26:01.937363 2726 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 00:26:01.937389 kubelet[2726]: I0911 00:26:01.937382 2726 policy_none.go:49] "None policy: Start" Sep 11 00:26:01.937915 kubelet[2726]: I0911 00:26:01.937898 2726 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 11 00:26:01.937915 kubelet[2726]: I0911 00:26:01.937917 2726 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:26:01.938064 kubelet[2726]: I0911 00:26:01.938052 2726 state_mem.go:75] "Updated machine memory state" Sep 11 00:26:01.943038 kubelet[2726]: I0911 00:26:01.943012 2726 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:26:01.943191 kubelet[2726]: I0911 00:26:01.943174 2726 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:26:01.943232 kubelet[2726]: I0911 00:26:01.943190 2726 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:26:01.943742 kubelet[2726]: I0911 00:26:01.943704 2726 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:26:02.048764 kubelet[2726]: I0911 00:26:02.047895 2726 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:26:02.066827 kubelet[2726]: I0911 00:26:02.066692 2726 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 11 00:26:02.066827 kubelet[2726]: I0911 00:26:02.066789 2726 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 11 00:26:02.188906 kubelet[2726]: I0911 00:26:02.188832 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d55e9986a1548a6172b862efc4ee977f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d55e9986a1548a6172b862efc4ee977f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:26:02.188906 kubelet[2726]: I0911 00:26:02.188889 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:26:02.188906 kubelet[2726]: I0911 00:26:02.188917 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:26:02.228348 kubelet[2726]: I0911 00:26:02.188940 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:26:02.228348 kubelet[2726]: I0911 00:26:02.188963 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:26:02.228348 kubelet[2726]: I0911 00:26:02.188983 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d55e9986a1548a6172b862efc4ee977f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d55e9986a1548a6172b862efc4ee977f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:26:02.228348 kubelet[2726]: I0911 00:26:02.189006 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d55e9986a1548a6172b862efc4ee977f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d55e9986a1548a6172b862efc4ee977f\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:26:02.228348 kubelet[2726]: I0911 00:26:02.189043 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:26:02.228532 kubelet[2726]: I0911 00:26:02.189065 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:26:02.321827 kubelet[2726]: E0911 00:26:02.321684 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:02.321827 kubelet[2726]: E0911 00:26:02.321689 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:02.324060 kubelet[2726]: E0911 00:26:02.324007 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:02.884167 kubelet[2726]: I0911 00:26:02.884112 2726 apiserver.go:52] "Watching apiserver" Sep 11 00:26:02.887475 kubelet[2726]: I0911 00:26:02.887408 2726 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 11 00:26:02.934357 kubelet[2726]: E0911 00:26:02.934319 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:03.324459 kubelet[2726]: E0911 00:26:03.324363 2726 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:26:03.324916 kubelet[2726]: E0911 00:26:03.324625 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:03.325485 kubelet[2726]: I0911 00:26:03.325216 2726 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.325180352 podStartE2EDuration="1.325180352s" podCreationTimestamp="2025-09-11 00:26:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:26:03.325052829 +0000 UTC m=+1.514099480" watchObservedRunningTime="2025-09-11 00:26:03.325180352 +0000 UTC m=+1.514226993" Sep 11 00:26:03.325485 kubelet[2726]: E0911 00:26:03.325233 2726 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 11 00:26:03.325485 kubelet[2726]: E0911 00:26:03.325460 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:03.658213 kubelet[2726]: I0911 00:26:03.657994 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.657822115 podStartE2EDuration="1.657822115s" podCreationTimestamp="2025-09-11 00:26:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:26:03.657698119 +0000 UTC m=+1.846744770" watchObservedRunningTime="2025-09-11 00:26:03.657822115 +0000 UTC m=+1.846868786" Sep 11 00:26:03.936282 kubelet[2726]: E0911 00:26:03.936125 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:03.936282 kubelet[2726]: E0911 00:26:03.936229 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 
00:26:04.938165 kubelet[2726]: E0911 00:26:04.938118 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:05.730650 kubelet[2726]: I0911 00:26:05.730606 2726 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 00:26:05.731135 containerd[1558]: time="2025-09-11T00:26:05.730947866Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 11 00:26:05.731875 kubelet[2726]: I0911 00:26:05.731466 2726 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 00:26:06.394932 kubelet[2726]: I0911 00:26:06.394853 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.394824905 podStartE2EDuration="4.394824905s" podCreationTimestamp="2025-09-11 00:26:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:26:03.797107047 +0000 UTC m=+1.986153728" watchObservedRunningTime="2025-09-11 00:26:06.394824905 +0000 UTC m=+4.583871556" Sep 11 00:26:06.409733 systemd[1]: Created slice kubepods-besteffort-pod22a13fe9_42b6_48b4_93fd_134a8b886e1f.slice - libcontainer container kubepods-besteffort-pod22a13fe9_42b6_48b4_93fd_134a8b886e1f.slice. 
Sep 11 00:26:06.417649 kubelet[2726]: I0911 00:26:06.417590 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/22a13fe9-42b6-48b4-93fd-134a8b886e1f-kube-proxy\") pod \"kube-proxy-9l9hv\" (UID: \"22a13fe9-42b6-48b4-93fd-134a8b886e1f\") " pod="kube-system/kube-proxy-9l9hv" Sep 11 00:26:06.417649 kubelet[2726]: I0911 00:26:06.417652 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22a13fe9-42b6-48b4-93fd-134a8b886e1f-lib-modules\") pod \"kube-proxy-9l9hv\" (UID: \"22a13fe9-42b6-48b4-93fd-134a8b886e1f\") " pod="kube-system/kube-proxy-9l9hv" Sep 11 00:26:06.417856 kubelet[2726]: I0911 00:26:06.417687 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd855\" (UniqueName: \"kubernetes.io/projected/22a13fe9-42b6-48b4-93fd-134a8b886e1f-kube-api-access-dd855\") pod \"kube-proxy-9l9hv\" (UID: \"22a13fe9-42b6-48b4-93fd-134a8b886e1f\") " pod="kube-system/kube-proxy-9l9hv" Sep 11 00:26:06.417856 kubelet[2726]: I0911 00:26:06.417710 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22a13fe9-42b6-48b4-93fd-134a8b886e1f-xtables-lock\") pod \"kube-proxy-9l9hv\" (UID: \"22a13fe9-42b6-48b4-93fd-134a8b886e1f\") " pod="kube-system/kube-proxy-9l9hv" Sep 11 00:26:06.722234 kubelet[2726]: E0911 00:26:06.722062 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:06.722932 containerd[1558]: time="2025-09-11T00:26:06.722889312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9l9hv,Uid:22a13fe9-42b6-48b4-93fd-134a8b886e1f,Namespace:kube-system,Attempt:0,}" Sep 
11 00:26:06.752835 containerd[1558]: time="2025-09-11T00:26:06.752771382Z" level=info msg="connecting to shim 924a6944f324a1302887845888364df30725f08dc4c29ae80dcdac4de41abad2" address="unix:///run/containerd/s/a8158427e8a48e359f812c17daf430681c8164da581305ce237551db55240182" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:26:06.785976 systemd[1]: Started cri-containerd-924a6944f324a1302887845888364df30725f08dc4c29ae80dcdac4de41abad2.scope - libcontainer container 924a6944f324a1302887845888364df30725f08dc4c29ae80dcdac4de41abad2. Sep 11 00:26:06.820519 kubelet[2726]: I0911 00:26:06.820090 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vhld\" (UniqueName: \"kubernetes.io/projected/39c8f699-aba9-4892-9d60-285ea5678c1e-kube-api-access-4vhld\") pod \"tigera-operator-58fc44c59b-222v9\" (UID: \"39c8f699-aba9-4892-9d60-285ea5678c1e\") " pod="tigera-operator/tigera-operator-58fc44c59b-222v9" Sep 11 00:26:06.820519 kubelet[2726]: I0911 00:26:06.820132 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/39c8f699-aba9-4892-9d60-285ea5678c1e-var-lib-calico\") pod \"tigera-operator-58fc44c59b-222v9\" (UID: \"39c8f699-aba9-4892-9d60-285ea5678c1e\") " pod="tigera-operator/tigera-operator-58fc44c59b-222v9" Sep 11 00:26:06.820863 systemd[1]: Created slice kubepods-besteffort-pod39c8f699_aba9_4892_9d60_285ea5678c1e.slice - libcontainer container kubepods-besteffort-pod39c8f699_aba9_4892_9d60_285ea5678c1e.slice. 
Sep 11 00:26:06.837806 containerd[1558]: time="2025-09-11T00:26:06.837744936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9l9hv,Uid:22a13fe9-42b6-48b4-93fd-134a8b886e1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"924a6944f324a1302887845888364df30725f08dc4c29ae80dcdac4de41abad2\"" Sep 11 00:26:06.839083 kubelet[2726]: E0911 00:26:06.839040 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:06.841680 containerd[1558]: time="2025-09-11T00:26:06.841618249Z" level=info msg="CreateContainer within sandbox \"924a6944f324a1302887845888364df30725f08dc4c29ae80dcdac4de41abad2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 11 00:26:06.856698 containerd[1558]: time="2025-09-11T00:26:06.855085732Z" level=info msg="Container bb0231752361e2cb801d1a2f27df2eea01b249035f71cba400a256358101e561: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:26:06.866134 containerd[1558]: time="2025-09-11T00:26:06.866077239Z" level=info msg="CreateContainer within sandbox \"924a6944f324a1302887845888364df30725f08dc4c29ae80dcdac4de41abad2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bb0231752361e2cb801d1a2f27df2eea01b249035f71cba400a256358101e561\"" Sep 11 00:26:06.866685 containerd[1558]: time="2025-09-11T00:26:06.866651069Z" level=info msg="StartContainer for \"bb0231752361e2cb801d1a2f27df2eea01b249035f71cba400a256358101e561\"" Sep 11 00:26:06.868184 containerd[1558]: time="2025-09-11T00:26:06.868157784Z" level=info msg="connecting to shim bb0231752361e2cb801d1a2f27df2eea01b249035f71cba400a256358101e561" address="unix:///run/containerd/s/a8158427e8a48e359f812c17daf430681c8164da581305ce237551db55240182" protocol=ttrpc version=3 Sep 11 00:26:06.889570 systemd[1]: Started cri-containerd-bb0231752361e2cb801d1a2f27df2eea01b249035f71cba400a256358101e561.scope - libcontainer 
container bb0231752361e2cb801d1a2f27df2eea01b249035f71cba400a256358101e561. Sep 11 00:26:06.932953 containerd[1558]: time="2025-09-11T00:26:06.932851834Z" level=info msg="StartContainer for \"bb0231752361e2cb801d1a2f27df2eea01b249035f71cba400a256358101e561\" returns successfully" Sep 11 00:26:06.943140 kubelet[2726]: E0911 00:26:06.942605 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:06.952722 kubelet[2726]: I0911 00:26:06.952657 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9l9hv" podStartSLOduration=0.95263836 podStartE2EDuration="952.63836ms" podCreationTimestamp="2025-09-11 00:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:26:06.952578327 +0000 UTC m=+5.141624978" watchObservedRunningTime="2025-09-11 00:26:06.95263836 +0000 UTC m=+5.141685011" Sep 11 00:26:07.097961 kubelet[2726]: E0911 00:26:07.097922 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:07.124763 containerd[1558]: time="2025-09-11T00:26:07.124708408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-222v9,Uid:39c8f699-aba9-4892-9d60-285ea5678c1e,Namespace:tigera-operator,Attempt:0,}" Sep 11 00:26:07.206261 containerd[1558]: time="2025-09-11T00:26:07.206030611Z" level=info msg="connecting to shim 897e0e7458114cf6ce1025187e085e279ebf289d56a0806879fa44ba12699742" address="unix:///run/containerd/s/db2238843d5b90d04c9a3b3d418b5b9626ab0a68eda07a6e7af1b7e032a4612d" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:26:07.230575 systemd[1]: Started 
cri-containerd-897e0e7458114cf6ce1025187e085e279ebf289d56a0806879fa44ba12699742.scope - libcontainer container 897e0e7458114cf6ce1025187e085e279ebf289d56a0806879fa44ba12699742. Sep 11 00:26:07.275963 containerd[1558]: time="2025-09-11T00:26:07.275913636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-222v9,Uid:39c8f699-aba9-4892-9d60-285ea5678c1e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"897e0e7458114cf6ce1025187e085e279ebf289d56a0806879fa44ba12699742\"" Sep 11 00:26:07.278153 containerd[1558]: time="2025-09-11T00:26:07.278111370Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 11 00:26:07.946525 kubelet[2726]: E0911 00:26:07.946483 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:08.594972 update_engine[1536]: I20250911 00:26:08.594847 1536 update_attempter.cc:509] Updating boot flags... Sep 11 00:26:09.721280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446814975.mount: Deactivated successfully. 
Sep 11 00:26:10.076796 containerd[1558]: time="2025-09-11T00:26:10.076727055Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:10.077485 containerd[1558]: time="2025-09-11T00:26:10.077420559Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 11 00:26:10.078679 containerd[1558]: time="2025-09-11T00:26:10.078634560Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:10.080885 containerd[1558]: time="2025-09-11T00:26:10.080843086Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:10.081479 containerd[1558]: time="2025-09-11T00:26:10.081408337Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.803248665s" Sep 11 00:26:10.081479 containerd[1558]: time="2025-09-11T00:26:10.081475114Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 11 00:26:10.083436 containerd[1558]: time="2025-09-11T00:26:10.083392378Z" level=info msg="CreateContainer within sandbox \"897e0e7458114cf6ce1025187e085e279ebf289d56a0806879fa44ba12699742\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 11 00:26:10.090205 containerd[1558]: time="2025-09-11T00:26:10.090155807Z" level=info msg="Container 
128f55c1de241e350666793f8e48ca96e182b662c1e232e7885d7b3c100f7b19: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:26:10.098140 containerd[1558]: time="2025-09-11T00:26:10.098083763Z" level=info msg="CreateContainer within sandbox \"897e0e7458114cf6ce1025187e085e279ebf289d56a0806879fa44ba12699742\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"128f55c1de241e350666793f8e48ca96e182b662c1e232e7885d7b3c100f7b19\"" Sep 11 00:26:10.098760 containerd[1558]: time="2025-09-11T00:26:10.098728395Z" level=info msg="StartContainer for \"128f55c1de241e350666793f8e48ca96e182b662c1e232e7885d7b3c100f7b19\"" Sep 11 00:26:10.099583 containerd[1558]: time="2025-09-11T00:26:10.099558428Z" level=info msg="connecting to shim 128f55c1de241e350666793f8e48ca96e182b662c1e232e7885d7b3c100f7b19" address="unix:///run/containerd/s/db2238843d5b90d04c9a3b3d418b5b9626ab0a68eda07a6e7af1b7e032a4612d" protocol=ttrpc version=3 Sep 11 00:26:10.155615 systemd[1]: Started cri-containerd-128f55c1de241e350666793f8e48ca96e182b662c1e232e7885d7b3c100f7b19.scope - libcontainer container 128f55c1de241e350666793f8e48ca96e182b662c1e232e7885d7b3c100f7b19. 
Sep 11 00:26:10.187394 containerd[1558]: time="2025-09-11T00:26:10.187339091Z" level=info msg="StartContainer for \"128f55c1de241e350666793f8e48ca96e182b662c1e232e7885d7b3c100f7b19\" returns successfully" Sep 11 00:26:12.462572 kubelet[2726]: E0911 00:26:12.462517 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:12.477442 kubelet[2726]: I0911 00:26:12.477325 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-222v9" podStartSLOduration=3.672384684 podStartE2EDuration="6.477296164s" podCreationTimestamp="2025-09-11 00:26:06 +0000 UTC" firstStartedPulling="2025-09-11 00:26:07.277259873 +0000 UTC m=+5.466306524" lastFinishedPulling="2025-09-11 00:26:10.082171353 +0000 UTC m=+8.271218004" observedRunningTime="2025-09-11 00:26:10.96358692 +0000 UTC m=+9.152633571" watchObservedRunningTime="2025-09-11 00:26:12.477296164 +0000 UTC m=+10.666342815" Sep 11 00:26:14.031363 kubelet[2726]: E0911 00:26:14.031305 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:15.940065 sudo[1777]: pam_unix(sudo:session): session closed for user root Sep 11 00:26:15.944907 sshd[1776]: Connection closed by 10.0.0.1 port 46404 Sep 11 00:26:15.947903 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Sep 11 00:26:15.954126 systemd[1]: sshd@7-10.0.0.132:22-10.0.0.1:46404.service: Deactivated successfully. Sep 11 00:26:15.960300 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 00:26:15.961094 systemd[1]: session-7.scope: Consumed 4.488s CPU time, 222.2M memory peak. Sep 11 00:26:15.966656 systemd-logind[1535]: Session 7 logged out. Waiting for processes to exit. 
Sep 11 00:26:15.969735 systemd-logind[1535]: Removed session 7. Sep 11 00:26:18.598744 systemd[1]: Created slice kubepods-besteffort-pod5e6d95a1_d651_4770_bc16_afb8e294c82d.slice - libcontainer container kubepods-besteffort-pod5e6d95a1_d651_4770_bc16_afb8e294c82d.slice. Sep 11 00:26:18.689685 kubelet[2726]: I0911 00:26:18.689578 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e6d95a1-d651-4770-bc16-afb8e294c82d-tigera-ca-bundle\") pod \"calico-typha-66f9ddd4c6-2cr2l\" (UID: \"5e6d95a1-d651-4770-bc16-afb8e294c82d\") " pod="calico-system/calico-typha-66f9ddd4c6-2cr2l" Sep 11 00:26:18.689685 kubelet[2726]: I0911 00:26:18.689671 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5e6d95a1-d651-4770-bc16-afb8e294c82d-typha-certs\") pod \"calico-typha-66f9ddd4c6-2cr2l\" (UID: \"5e6d95a1-d651-4770-bc16-afb8e294c82d\") " pod="calico-system/calico-typha-66f9ddd4c6-2cr2l" Sep 11 00:26:18.689685 kubelet[2726]: I0911 00:26:18.689695 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vshz7\" (UniqueName: \"kubernetes.io/projected/5e6d95a1-d651-4770-bc16-afb8e294c82d-kube-api-access-vshz7\") pod \"calico-typha-66f9ddd4c6-2cr2l\" (UID: \"5e6d95a1-d651-4770-bc16-afb8e294c82d\") " pod="calico-system/calico-typha-66f9ddd4c6-2cr2l" Sep 11 00:26:18.906722 kubelet[2726]: E0911 00:26:18.906597 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:18.907088 containerd[1558]: time="2025-09-11T00:26:18.907038258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66f9ddd4c6-2cr2l,Uid:5e6d95a1-d651-4770-bc16-afb8e294c82d,Namespace:calico-system,Attempt:0,}" Sep 
11 00:26:19.204037 systemd[1]: Created slice kubepods-besteffort-podd7168250_66d8_4de8_9e42_e712e1d18191.slice - libcontainer container kubepods-besteffort-podd7168250_66d8_4de8_9e42_e712e1d18191.slice. Sep 11 00:26:19.228980 containerd[1558]: time="2025-09-11T00:26:19.228903235Z" level=info msg="connecting to shim 383122ea5d212079b558d6b401829bc34b28cee63324725abe32383ab88179bd" address="unix:///run/containerd/s/4891c376b1fb1d418e65935a66e3c8a2a625c7ead6942e76797976eae95df14d" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:26:19.258682 systemd[1]: Started cri-containerd-383122ea5d212079b558d6b401829bc34b28cee63324725abe32383ab88179bd.scope - libcontainer container 383122ea5d212079b558d6b401829bc34b28cee63324725abe32383ab88179bd. Sep 11 00:26:19.292887 kubelet[2726]: I0911 00:26:19.292832 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d7168250-66d8-4de8-9e42-e712e1d18191-cni-net-dir\") pod \"calico-node-cdxqm\" (UID: \"d7168250-66d8-4de8-9e42-e712e1d18191\") " pod="calico-system/calico-node-cdxqm" Sep 11 00:26:19.292887 kubelet[2726]: I0911 00:26:19.292880 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d7168250-66d8-4de8-9e42-e712e1d18191-flexvol-driver-host\") pod \"calico-node-cdxqm\" (UID: \"d7168250-66d8-4de8-9e42-e712e1d18191\") " pod="calico-system/calico-node-cdxqm" Sep 11 00:26:19.292887 kubelet[2726]: I0911 00:26:19.292900 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d7168250-66d8-4de8-9e42-e712e1d18191-policysync\") pod \"calico-node-cdxqm\" (UID: \"d7168250-66d8-4de8-9e42-e712e1d18191\") " pod="calico-system/calico-node-cdxqm" Sep 11 00:26:19.293193 kubelet[2726]: I0911 00:26:19.292914 2726 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d7168250-66d8-4de8-9e42-e712e1d18191-var-lib-calico\") pod \"calico-node-cdxqm\" (UID: \"d7168250-66d8-4de8-9e42-e712e1d18191\") " pod="calico-system/calico-node-cdxqm" Sep 11 00:26:19.293193 kubelet[2726]: I0911 00:26:19.292945 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d7168250-66d8-4de8-9e42-e712e1d18191-cni-log-dir\") pod \"calico-node-cdxqm\" (UID: \"d7168250-66d8-4de8-9e42-e712e1d18191\") " pod="calico-system/calico-node-cdxqm" Sep 11 00:26:19.293193 kubelet[2726]: I0911 00:26:19.292961 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7168250-66d8-4de8-9e42-e712e1d18191-tigera-ca-bundle\") pod \"calico-node-cdxqm\" (UID: \"d7168250-66d8-4de8-9e42-e712e1d18191\") " pod="calico-system/calico-node-cdxqm" Sep 11 00:26:19.293193 kubelet[2726]: I0911 00:26:19.292974 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7168250-66d8-4de8-9e42-e712e1d18191-xtables-lock\") pod \"calico-node-cdxqm\" (UID: \"d7168250-66d8-4de8-9e42-e712e1d18191\") " pod="calico-system/calico-node-cdxqm" Sep 11 00:26:19.293193 kubelet[2726]: I0911 00:26:19.292989 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d7168250-66d8-4de8-9e42-e712e1d18191-var-run-calico\") pod \"calico-node-cdxqm\" (UID: \"d7168250-66d8-4de8-9e42-e712e1d18191\") " pod="calico-system/calico-node-cdxqm" Sep 11 00:26:19.293319 kubelet[2726]: I0911 00:26:19.293007 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7168250-66d8-4de8-9e42-e712e1d18191-lib-modules\") pod \"calico-node-cdxqm\" (UID: \"d7168250-66d8-4de8-9e42-e712e1d18191\") " pod="calico-system/calico-node-cdxqm" Sep 11 00:26:19.293319 kubelet[2726]: I0911 00:26:19.293022 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d7168250-66d8-4de8-9e42-e712e1d18191-node-certs\") pod \"calico-node-cdxqm\" (UID: \"d7168250-66d8-4de8-9e42-e712e1d18191\") " pod="calico-system/calico-node-cdxqm" Sep 11 00:26:19.293319 kubelet[2726]: I0911 00:26:19.293037 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d7168250-66d8-4de8-9e42-e712e1d18191-cni-bin-dir\") pod \"calico-node-cdxqm\" (UID: \"d7168250-66d8-4de8-9e42-e712e1d18191\") " pod="calico-system/calico-node-cdxqm" Sep 11 00:26:19.293319 kubelet[2726]: I0911 00:26:19.293064 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dtnz\" (UniqueName: \"kubernetes.io/projected/d7168250-66d8-4de8-9e42-e712e1d18191-kube-api-access-6dtnz\") pod \"calico-node-cdxqm\" (UID: \"d7168250-66d8-4de8-9e42-e712e1d18191\") " pod="calico-system/calico-node-cdxqm" Sep 11 00:26:19.301365 kubelet[2726]: E0911 00:26:19.301311 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fj6sx" podUID="65b0dc7b-51bf-4c45-8124-dc6f83a69633" Sep 11 00:26:19.320129 containerd[1558]: time="2025-09-11T00:26:19.320033631Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-66f9ddd4c6-2cr2l,Uid:5e6d95a1-d651-4770-bc16-afb8e294c82d,Namespace:calico-system,Attempt:0,} returns sandbox id \"383122ea5d212079b558d6b401829bc34b28cee63324725abe32383ab88179bd\"" Sep 11 00:26:19.322164 kubelet[2726]: E0911 00:26:19.322132 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:19.324964 containerd[1558]: time="2025-09-11T00:26:19.324881217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 11 00:26:19.393848 kubelet[2726]: I0911 00:26:19.393797 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/65b0dc7b-51bf-4c45-8124-dc6f83a69633-socket-dir\") pod \"csi-node-driver-fj6sx\" (UID: \"65b0dc7b-51bf-4c45-8124-dc6f83a69633\") " pod="calico-system/csi-node-driver-fj6sx" Sep 11 00:26:19.393848 kubelet[2726]: I0911 00:26:19.393862 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65b0dc7b-51bf-4c45-8124-dc6f83a69633-kubelet-dir\") pod \"csi-node-driver-fj6sx\" (UID: \"65b0dc7b-51bf-4c45-8124-dc6f83a69633\") " pod="calico-system/csi-node-driver-fj6sx" Sep 11 00:26:19.394065 kubelet[2726]: I0911 00:26:19.393924 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/65b0dc7b-51bf-4c45-8124-dc6f83a69633-varrun\") pod \"csi-node-driver-fj6sx\" (UID: \"65b0dc7b-51bf-4c45-8124-dc6f83a69633\") " pod="calico-system/csi-node-driver-fj6sx" Sep 11 00:26:19.394065 kubelet[2726]: I0911 00:26:19.394041 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfd75\" (UniqueName: 
\"kubernetes.io/projected/65b0dc7b-51bf-4c45-8124-dc6f83a69633-kube-api-access-xfd75\") pod \"csi-node-driver-fj6sx\" (UID: \"65b0dc7b-51bf-4c45-8124-dc6f83a69633\") " pod="calico-system/csi-node-driver-fj6sx" Sep 11 00:26:19.395065 kubelet[2726]: E0911 00:26:19.395044 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.395119 kubelet[2726]: W0911 00:26:19.395064 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.395119 kubelet[2726]: E0911 00:26:19.395095 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.395362 kubelet[2726]: E0911 00:26:19.395348 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.395412 kubelet[2726]: W0911 00:26:19.395362 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.395412 kubelet[2726]: E0911 00:26:19.395375 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.395618 kubelet[2726]: E0911 00:26:19.395602 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.395655 kubelet[2726]: W0911 00:26:19.395638 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.395680 kubelet[2726]: E0911 00:26:19.395654 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.395959 kubelet[2726]: E0911 00:26:19.395938 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.395995 kubelet[2726]: W0911 00:26:19.395956 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.395995 kubelet[2726]: E0911 00:26:19.395979 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.396811 kubelet[2726]: E0911 00:26:19.396725 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.396811 kubelet[2726]: W0911 00:26:19.396741 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.396811 kubelet[2726]: E0911 00:26:19.396762 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.396811 kubelet[2726]: I0911 00:26:19.396784 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/65b0dc7b-51bf-4c45-8124-dc6f83a69633-registration-dir\") pod \"csi-node-driver-fj6sx\" (UID: \"65b0dc7b-51bf-4c45-8124-dc6f83a69633\") " pod="calico-system/csi-node-driver-fj6sx" Sep 11 00:26:19.396977 kubelet[2726]: E0911 00:26:19.396953 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.396977 kubelet[2726]: W0911 00:26:19.396969 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.397055 kubelet[2726]: E0911 00:26:19.396988 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.397215 kubelet[2726]: E0911 00:26:19.397198 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.397272 kubelet[2726]: W0911 00:26:19.397218 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.397272 kubelet[2726]: E0911 00:26:19.397254 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.397631 kubelet[2726]: E0911 00:26:19.397496 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.397631 kubelet[2726]: W0911 00:26:19.397521 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.397631 kubelet[2726]: E0911 00:26:19.397574 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.397845 kubelet[2726]: E0911 00:26:19.397819 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.397845 kubelet[2726]: W0911 00:26:19.397832 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.397996 kubelet[2726]: E0911 00:26:19.397920 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.398059 kubelet[2726]: E0911 00:26:19.398042 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.398059 kubelet[2726]: W0911 00:26:19.398055 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.398163 kubelet[2726]: E0911 00:26:19.398145 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.398262 kubelet[2726]: E0911 00:26:19.398247 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.398262 kubelet[2726]: W0911 00:26:19.398258 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.398388 kubelet[2726]: E0911 00:26:19.398369 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.398459 kubelet[2726]: E0911 00:26:19.398415 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.398459 kubelet[2726]: W0911 00:26:19.398436 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.398459 kubelet[2726]: E0911 00:26:19.398456 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.398697 kubelet[2726]: E0911 00:26:19.398667 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.398763 kubelet[2726]: W0911 00:26:19.398680 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.398763 kubelet[2726]: E0911 00:26:19.398723 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.399058 kubelet[2726]: E0911 00:26:19.399038 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.399058 kubelet[2726]: W0911 00:26:19.399054 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.399140 kubelet[2726]: E0911 00:26:19.399071 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.399224 kubelet[2726]: E0911 00:26:19.399208 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.399224 kubelet[2726]: W0911 00:26:19.399218 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.399388 kubelet[2726]: E0911 00:26:19.399371 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.399388 kubelet[2726]: W0911 00:26:19.399383 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.399491 kubelet[2726]: E0911 00:26:19.399391 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.399563 kubelet[2726]: E0911 00:26:19.399523 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.399627 kubelet[2726]: E0911 00:26:19.399618 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.399627 kubelet[2726]: W0911 00:26:19.399626 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.399675 kubelet[2726]: E0911 00:26:19.399650 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.399895 kubelet[2726]: E0911 00:26:19.399879 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.399895 kubelet[2726]: W0911 00:26:19.399891 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.400080 kubelet[2726]: E0911 00:26:19.399901 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.400213 kubelet[2726]: E0911 00:26:19.400197 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.400213 kubelet[2726]: W0911 00:26:19.400209 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.400300 kubelet[2726]: E0911 00:26:19.400242 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.401280 kubelet[2726]: E0911 00:26:19.401246 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.401280 kubelet[2726]: W0911 00:26:19.401260 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.401280 kubelet[2726]: E0911 00:26:19.401272 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.402001 kubelet[2726]: E0911 00:26:19.401983 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.402001 kubelet[2726]: W0911 00:26:19.401996 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.402075 kubelet[2726]: E0911 00:26:19.402012 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.402233 kubelet[2726]: E0911 00:26:19.402215 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.402233 kubelet[2726]: W0911 00:26:19.402227 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.402311 kubelet[2726]: E0911 00:26:19.402278 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.402750 kubelet[2726]: E0911 00:26:19.402720 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.402750 kubelet[2726]: W0911 00:26:19.402735 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.402833 kubelet[2726]: E0911 00:26:19.402804 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.403024 kubelet[2726]: E0911 00:26:19.402922 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.403024 kubelet[2726]: W0911 00:26:19.402936 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.403024 kubelet[2726]: E0911 00:26:19.402994 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.404636 kubelet[2726]: E0911 00:26:19.404509 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.404636 kubelet[2726]: W0911 00:26:19.404525 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.404636 kubelet[2726]: E0911 00:26:19.404546 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.404801 kubelet[2726]: E0911 00:26:19.404785 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.404801 kubelet[2726]: W0911 00:26:19.404799 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.404875 kubelet[2726]: E0911 00:26:19.404813 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.405065 kubelet[2726]: E0911 00:26:19.405052 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.405065 kubelet[2726]: W0911 00:26:19.405062 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.405129 kubelet[2726]: E0911 00:26:19.405071 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.411055 kubelet[2726]: E0911 00:26:19.411030 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.411055 kubelet[2726]: W0911 00:26:19.411051 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.411129 kubelet[2726]: E0911 00:26:19.411070 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.498553 kubelet[2726]: E0911 00:26:19.498506 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.498553 kubelet[2726]: W0911 00:26:19.498539 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.498553 kubelet[2726]: E0911 00:26:19.498564 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.498923 kubelet[2726]: E0911 00:26:19.498872 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.498923 kubelet[2726]: W0911 00:26:19.498910 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.498986 kubelet[2726]: E0911 00:26:19.498950 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.499163 kubelet[2726]: E0911 00:26:19.499144 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.499163 kubelet[2726]: W0911 00:26:19.499158 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.499217 kubelet[2726]: E0911 00:26:19.499173 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.499359 kubelet[2726]: E0911 00:26:19.499342 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.499359 kubelet[2726]: W0911 00:26:19.499353 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.499416 kubelet[2726]: E0911 00:26:19.499366 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.499634 kubelet[2726]: E0911 00:26:19.499610 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.499634 kubelet[2726]: W0911 00:26:19.499622 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.499634 kubelet[2726]: E0911 00:26:19.499635 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.499884 kubelet[2726]: E0911 00:26:19.499866 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.499884 kubelet[2726]: W0911 00:26:19.499877 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.499950 kubelet[2726]: E0911 00:26:19.499891 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.501109 kubelet[2726]: E0911 00:26:19.500315 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.501109 kubelet[2726]: W0911 00:26:19.501055 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.501190 kubelet[2726]: E0911 00:26:19.501110 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.502208 kubelet[2726]: E0911 00:26:19.502136 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.502208 kubelet[2726]: W0911 00:26:19.502153 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.502321 kubelet[2726]: E0911 00:26:19.502254 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.502601 kubelet[2726]: E0911 00:26:19.502563 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.502601 kubelet[2726]: W0911 00:26:19.502578 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.502709 kubelet[2726]: E0911 00:26:19.502675 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.503059 kubelet[2726]: E0911 00:26:19.502918 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.503059 kubelet[2726]: W0911 00:26:19.502936 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.503059 kubelet[2726]: E0911 00:26:19.502971 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.503341 kubelet[2726]: E0911 00:26:19.503312 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.503341 kubelet[2726]: W0911 00:26:19.503330 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.503446 kubelet[2726]: E0911 00:26:19.503347 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.503621 kubelet[2726]: E0911 00:26:19.503564 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.503621 kubelet[2726]: W0911 00:26:19.503608 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.503726 kubelet[2726]: E0911 00:26:19.503660 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.503889 kubelet[2726]: E0911 00:26:19.503860 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.503889 kubelet[2726]: W0911 00:26:19.503873 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.504009 kubelet[2726]: E0911 00:26:19.503913 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.504158 kubelet[2726]: E0911 00:26:19.504140 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.504158 kubelet[2726]: W0911 00:26:19.504152 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.504323 kubelet[2726]: E0911 00:26:19.504248 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.504401 kubelet[2726]: E0911 00:26:19.504371 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.504401 kubelet[2726]: W0911 00:26:19.504384 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.504475 kubelet[2726]: E0911 00:26:19.504465 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.504717 kubelet[2726]: E0911 00:26:19.504698 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.504717 kubelet[2726]: W0911 00:26:19.504713 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.504803 kubelet[2726]: E0911 00:26:19.504737 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.505014 kubelet[2726]: E0911 00:26:19.504986 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.505014 kubelet[2726]: W0911 00:26:19.504999 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.505014 kubelet[2726]: E0911 00:26:19.505017 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.505260 kubelet[2726]: E0911 00:26:19.505242 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.505260 kubelet[2726]: W0911 00:26:19.505255 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.505322 kubelet[2726]: E0911 00:26:19.505276 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.505564 kubelet[2726]: E0911 00:26:19.505544 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.505564 kubelet[2726]: W0911 00:26:19.505556 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.505688 kubelet[2726]: E0911 00:26:19.505669 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.505929 kubelet[2726]: E0911 00:26:19.505913 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.505929 kubelet[2726]: W0911 00:26:19.505925 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.506123 kubelet[2726]: E0911 00:26:19.506037 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.506280 kubelet[2726]: E0911 00:26:19.506248 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.506280 kubelet[2726]: W0911 00:26:19.506269 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.506372 kubelet[2726]: E0911 00:26:19.506281 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.506548 kubelet[2726]: E0911 00:26:19.506521 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.506621 kubelet[2726]: W0911 00:26:19.506560 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.506621 kubelet[2726]: E0911 00:26:19.506576 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.507171 kubelet[2726]: E0911 00:26:19.506984 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.507171 kubelet[2726]: W0911 00:26:19.507014 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.507171 kubelet[2726]: E0911 00:26:19.507048 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.507476 kubelet[2726]: E0911 00:26:19.507448 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.507578 kubelet[2726]: W0911 00:26:19.507563 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.507647 kubelet[2726]: E0911 00:26:19.507635 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.507978 kubelet[2726]: E0911 00:26:19.507955 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.507978 kubelet[2726]: W0911 00:26:19.507970 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.508114 kubelet[2726]: E0911 00:26:19.507982 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:19.512271 containerd[1558]: time="2025-09-11T00:26:19.512196142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cdxqm,Uid:d7168250-66d8-4de8-9e42-e712e1d18191,Namespace:calico-system,Attempt:0,}" Sep 11 00:26:19.522117 kubelet[2726]: E0911 00:26:19.522055 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:19.522314 kubelet[2726]: W0911 00:26:19.522252 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:19.522314 kubelet[2726]: E0911 00:26:19.522273 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:19.545381 containerd[1558]: time="2025-09-11T00:26:19.545278517Z" level=info msg="connecting to shim ee4f4c4419bfb073570c76e740903bb98cc5a8edab3c7cbfd248e09348b205b1" address="unix:///run/containerd/s/dc20afab4a3c6efdc1ad2ecd0a617858cc72efcfb110e86e09df95d411cab4a7" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:26:19.578669 systemd[1]: Started cri-containerd-ee4f4c4419bfb073570c76e740903bb98cc5a8edab3c7cbfd248e09348b205b1.scope - libcontainer container ee4f4c4419bfb073570c76e740903bb98cc5a8edab3c7cbfd248e09348b205b1. Sep 11 00:26:19.699123 containerd[1558]: time="2025-09-11T00:26:19.699048955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cdxqm,Uid:d7168250-66d8-4de8-9e42-e712e1d18191,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee4f4c4419bfb073570c76e740903bb98cc5a8edab3c7cbfd248e09348b205b1\"" Sep 11 00:26:20.842238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3254070704.mount: Deactivated successfully. 
Sep 11 00:26:20.915277 kubelet[2726]: E0911 00:26:20.915205 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fj6sx" podUID="65b0dc7b-51bf-4c45-8124-dc6f83a69633" Sep 11 00:26:22.567112 containerd[1558]: time="2025-09-11T00:26:22.567044603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:22.568344 containerd[1558]: time="2025-09-11T00:26:22.568311791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 11 00:26:22.569542 containerd[1558]: time="2025-09-11T00:26:22.569510811Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:22.571462 containerd[1558]: time="2025-09-11T00:26:22.571438163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:22.572014 containerd[1558]: time="2025-09-11T00:26:22.571962512Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.247021353s" Sep 11 00:26:22.572014 containerd[1558]: time="2025-09-11T00:26:22.572011194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 11 00:26:22.573972 containerd[1558]: time="2025-09-11T00:26:22.573723842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 11 00:26:22.581857 containerd[1558]: time="2025-09-11T00:26:22.581816704Z" level=info msg="CreateContainer within sandbox \"383122ea5d212079b558d6b401829bc34b28cee63324725abe32383ab88179bd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 11 00:26:22.591459 containerd[1558]: time="2025-09-11T00:26:22.591234505Z" level=info msg="Container d2fb3800a50475826e1e15b2fa8e26217df2cb495babeecd6c2fa2e97b21b9ac: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:26:22.609303 containerd[1558]: time="2025-09-11T00:26:22.609253336Z" level=info msg="CreateContainer within sandbox \"383122ea5d212079b558d6b401829bc34b28cee63324725abe32383ab88179bd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d2fb3800a50475826e1e15b2fa8e26217df2cb495babeecd6c2fa2e97b21b9ac\"" Sep 11 00:26:22.609704 containerd[1558]: time="2025-09-11T00:26:22.609673518Z" level=info msg="StartContainer for \"d2fb3800a50475826e1e15b2fa8e26217df2cb495babeecd6c2fa2e97b21b9ac\"" Sep 11 00:26:22.610755 containerd[1558]: time="2025-09-11T00:26:22.610730310Z" level=info msg="connecting to shim d2fb3800a50475826e1e15b2fa8e26217df2cb495babeecd6c2fa2e97b21b9ac" address="unix:///run/containerd/s/4891c376b1fb1d418e65935a66e3c8a2a625c7ead6942e76797976eae95df14d" protocol=ttrpc version=3 Sep 11 00:26:22.631574 systemd[1]: Started cri-containerd-d2fb3800a50475826e1e15b2fa8e26217df2cb495babeecd6c2fa2e97b21b9ac.scope - libcontainer container d2fb3800a50475826e1e15b2fa8e26217df2cb495babeecd6c2fa2e97b21b9ac. 
Sep 11 00:26:22.683937 containerd[1558]: time="2025-09-11T00:26:22.683880150Z" level=info msg="StartContainer for \"d2fb3800a50475826e1e15b2fa8e26217df2cb495babeecd6c2fa2e97b21b9ac\" returns successfully" Sep 11 00:26:22.916177 kubelet[2726]: E0911 00:26:22.916027 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fj6sx" podUID="65b0dc7b-51bf-4c45-8124-dc6f83a69633" Sep 11 00:26:22.983780 kubelet[2726]: E0911 00:26:22.983743 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:23.011623 kubelet[2726]: E0911 00:26:23.011572 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.011623 kubelet[2726]: W0911 00:26:23.011597 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.011623 kubelet[2726]: E0911 00:26:23.011617 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.011900 kubelet[2726]: E0911 00:26:23.011881 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.011900 kubelet[2726]: W0911 00:26:23.011895 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.011966 kubelet[2726]: E0911 00:26:23.011905 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.012103 kubelet[2726]: E0911 00:26:23.012075 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.012103 kubelet[2726]: W0911 00:26:23.012089 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.012103 kubelet[2726]: E0911 00:26:23.012099 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.012316 kubelet[2726]: E0911 00:26:23.012297 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.012316 kubelet[2726]: W0911 00:26:23.012311 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.012410 kubelet[2726]: E0911 00:26:23.012322 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.012577 kubelet[2726]: E0911 00:26:23.012552 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.012577 kubelet[2726]: W0911 00:26:23.012567 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.012577 kubelet[2726]: E0911 00:26:23.012578 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.012795 kubelet[2726]: E0911 00:26:23.012768 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.012795 kubelet[2726]: W0911 00:26:23.012792 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.012853 kubelet[2726]: E0911 00:26:23.012803 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.012998 kubelet[2726]: E0911 00:26:23.012980 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.012998 kubelet[2726]: W0911 00:26:23.012995 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.013072 kubelet[2726]: E0911 00:26:23.013005 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.013204 kubelet[2726]: E0911 00:26:23.013188 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.013204 kubelet[2726]: W0911 00:26:23.013201 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.013278 kubelet[2726]: E0911 00:26:23.013211 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.013440 kubelet[2726]: E0911 00:26:23.013412 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.013496 kubelet[2726]: W0911 00:26:23.013446 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.013496 kubelet[2726]: E0911 00:26:23.013459 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.013686 kubelet[2726]: E0911 00:26:23.013657 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.013686 kubelet[2726]: W0911 00:26:23.013673 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.013686 kubelet[2726]: E0911 00:26:23.013684 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.013892 kubelet[2726]: E0911 00:26:23.013873 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.013892 kubelet[2726]: W0911 00:26:23.013888 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.013951 kubelet[2726]: E0911 00:26:23.013898 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.014087 kubelet[2726]: E0911 00:26:23.014070 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.014087 kubelet[2726]: W0911 00:26:23.014083 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.014133 kubelet[2726]: E0911 00:26:23.014093 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.014295 kubelet[2726]: E0911 00:26:23.014278 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.014295 kubelet[2726]: W0911 00:26:23.014291 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.014338 kubelet[2726]: E0911 00:26:23.014301 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.014516 kubelet[2726]: E0911 00:26:23.014499 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.014516 kubelet[2726]: W0911 00:26:23.014513 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.014571 kubelet[2726]: E0911 00:26:23.014523 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.014709 kubelet[2726]: E0911 00:26:23.014692 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.014709 kubelet[2726]: W0911 00:26:23.014706 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.014762 kubelet[2726]: E0911 00:26:23.014716 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.027186 kubelet[2726]: E0911 00:26:23.027139 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.027186 kubelet[2726]: W0911 00:26:23.027161 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.027186 kubelet[2726]: E0911 00:26:23.027174 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.027438 kubelet[2726]: E0911 00:26:23.027394 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.027438 kubelet[2726]: W0911 00:26:23.027412 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.027500 kubelet[2726]: E0911 00:26:23.027447 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.027695 kubelet[2726]: E0911 00:26:23.027672 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.027695 kubelet[2726]: W0911 00:26:23.027693 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.027845 kubelet[2726]: E0911 00:26:23.027712 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.027921 kubelet[2726]: E0911 00:26:23.027892 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.027921 kubelet[2726]: W0911 00:26:23.027904 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.027976 kubelet[2726]: E0911 00:26:23.027926 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.028091 kubelet[2726]: E0911 00:26:23.028076 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.028091 kubelet[2726]: W0911 00:26:23.028086 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.028152 kubelet[2726]: E0911 00:26:23.028099 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.028289 kubelet[2726]: E0911 00:26:23.028263 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.028289 kubelet[2726]: W0911 00:26:23.028274 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.028289 kubelet[2726]: E0911 00:26:23.028285 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.028609 kubelet[2726]: E0911 00:26:23.028587 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.028609 kubelet[2726]: W0911 00:26:23.028605 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.028673 kubelet[2726]: E0911 00:26:23.028623 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.028859 kubelet[2726]: E0911 00:26:23.028834 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.028859 kubelet[2726]: W0911 00:26:23.028848 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.028912 kubelet[2726]: E0911 00:26:23.028862 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.029093 kubelet[2726]: E0911 00:26:23.029075 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.029093 kubelet[2726]: W0911 00:26:23.029089 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.029142 kubelet[2726]: E0911 00:26:23.029105 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.029339 kubelet[2726]: E0911 00:26:23.029306 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.029339 kubelet[2726]: W0911 00:26:23.029321 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.029339 kubelet[2726]: E0911 00:26:23.029338 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.029623 kubelet[2726]: E0911 00:26:23.029555 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.029623 kubelet[2726]: W0911 00:26:23.029566 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.029623 kubelet[2726]: E0911 00:26:23.029582 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.029784 kubelet[2726]: E0911 00:26:23.029765 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.029784 kubelet[2726]: W0911 00:26:23.029776 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.029863 kubelet[2726]: E0911 00:26:23.029791 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.029998 kubelet[2726]: E0911 00:26:23.029982 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.029998 kubelet[2726]: W0911 00:26:23.029993 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.030066 kubelet[2726]: E0911 00:26:23.030007 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.030181 kubelet[2726]: E0911 00:26:23.030167 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.030181 kubelet[2726]: W0911 00:26:23.030176 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.030254 kubelet[2726]: E0911 00:26:23.030188 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.030371 kubelet[2726]: E0911 00:26:23.030356 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.030371 kubelet[2726]: W0911 00:26:23.030366 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.030461 kubelet[2726]: E0911 00:26:23.030377 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.030675 kubelet[2726]: E0911 00:26:23.030656 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.030675 kubelet[2726]: W0911 00:26:23.030668 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.030767 kubelet[2726]: E0911 00:26:23.030684 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.030945 kubelet[2726]: E0911 00:26:23.030928 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.030945 kubelet[2726]: W0911 00:26:23.030939 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.031017 kubelet[2726]: E0911 00:26:23.030951 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:23.031123 kubelet[2726]: E0911 00:26:23.031110 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:23.031123 kubelet[2726]: W0911 00:26:23.031120 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:23.031189 kubelet[2726]: E0911 00:26:23.031128 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:23.986351 kubelet[2726]: I0911 00:26:23.986311 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:26:23.990193 kubelet[2726]: E0911 00:26:23.989888 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:24.020836 kubelet[2726]: E0911 00:26:24.020770 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:24.020836 kubelet[2726]: W0911 00:26:24.020803 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:24.020836 kubelet[2726]: E0911 00:26:24.020828 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:24.021151 kubelet[2726]: E0911 00:26:24.021017 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:24.021151 kubelet[2726]: W0911 00:26:24.021026 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:24.021151 kubelet[2726]: E0911 00:26:24.021037 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:26:24.038050 kubelet[2726]: E0911 00:26:24.038031 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:26:24.038050 kubelet[2726]: W0911 00:26:24.038045 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:26:24.038123 kubelet[2726]: E0911 00:26:24.038057 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:26:24.205607 containerd[1558]: time="2025-09-11T00:26:24.205539278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:24.206647 containerd[1558]: time="2025-09-11T00:26:24.206576582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 11 00:26:24.208098 containerd[1558]: time="2025-09-11T00:26:24.208044487Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:24.210268 containerd[1558]: time="2025-09-11T00:26:24.210216328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:24.210995 containerd[1558]: time="2025-09-11T00:26:24.210923089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.637154845s" Sep 11 00:26:24.210995 containerd[1558]: time="2025-09-11T00:26:24.210978724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 11 00:26:24.213518 containerd[1558]: time="2025-09-11T00:26:24.213471119Z" level=info msg="CreateContainer within sandbox \"ee4f4c4419bfb073570c76e740903bb98cc5a8edab3c7cbfd248e09348b205b1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 11 00:26:24.225333 containerd[1558]: time="2025-09-11T00:26:24.225269824Z" level=info msg="Container 17fbdd0db2ea7fc4809d0e5fdd693a457a578d7c34a9f1cf224c458a74552e12: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:26:24.236323 containerd[1558]: time="2025-09-11T00:26:24.236261709Z" level=info msg="CreateContainer within sandbox \"ee4f4c4419bfb073570c76e740903bb98cc5a8edab3c7cbfd248e09348b205b1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"17fbdd0db2ea7fc4809d0e5fdd693a457a578d7c34a9f1cf224c458a74552e12\"" Sep 11 00:26:24.237057 containerd[1558]: time="2025-09-11T00:26:24.236926131Z" level=info msg="StartContainer for \"17fbdd0db2ea7fc4809d0e5fdd693a457a578d7c34a9f1cf224c458a74552e12\"" Sep 11 00:26:24.238940 containerd[1558]: time="2025-09-11T00:26:24.238893466Z" level=info msg="connecting to shim 17fbdd0db2ea7fc4809d0e5fdd693a457a578d7c34a9f1cf224c458a74552e12" address="unix:///run/containerd/s/dc20afab4a3c6efdc1ad2ecd0a617858cc72efcfb110e86e09df95d411cab4a7" protocol=ttrpc version=3 Sep 11 00:26:24.270681 systemd[1]: Started cri-containerd-17fbdd0db2ea7fc4809d0e5fdd693a457a578d7c34a9f1cf224c458a74552e12.scope - libcontainer container 
17fbdd0db2ea7fc4809d0e5fdd693a457a578d7c34a9f1cf224c458a74552e12. Sep 11 00:26:24.324547 containerd[1558]: time="2025-09-11T00:26:24.324497115Z" level=info msg="StartContainer for \"17fbdd0db2ea7fc4809d0e5fdd693a457a578d7c34a9f1cf224c458a74552e12\" returns successfully" Sep 11 00:26:24.335633 systemd[1]: cri-containerd-17fbdd0db2ea7fc4809d0e5fdd693a457a578d7c34a9f1cf224c458a74552e12.scope: Deactivated successfully. Sep 11 00:26:24.337803 containerd[1558]: time="2025-09-11T00:26:24.337752934Z" level=info msg="received exit event container_id:\"17fbdd0db2ea7fc4809d0e5fdd693a457a578d7c34a9f1cf224c458a74552e12\" id:\"17fbdd0db2ea7fc4809d0e5fdd693a457a578d7c34a9f1cf224c458a74552e12\" pid:3433 exited_at:{seconds:1757550384 nanos:337157152}" Sep 11 00:26:24.337865 containerd[1558]: time="2025-09-11T00:26:24.337825731Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17fbdd0db2ea7fc4809d0e5fdd693a457a578d7c34a9f1cf224c458a74552e12\" id:\"17fbdd0db2ea7fc4809d0e5fdd693a457a578d7c34a9f1cf224c458a74552e12\" pid:3433 exited_at:{seconds:1757550384 nanos:337157152}" Sep 11 00:26:24.366549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17fbdd0db2ea7fc4809d0e5fdd693a457a578d7c34a9f1cf224c458a74552e12-rootfs.mount: Deactivated successfully. 
Sep 11 00:26:24.915590 kubelet[2726]: E0911 00:26:24.915499 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fj6sx" podUID="65b0dc7b-51bf-4c45-8124-dc6f83a69633" Sep 11 00:26:24.991628 containerd[1558]: time="2025-09-11T00:26:24.991391208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 11 00:26:25.007013 kubelet[2726]: I0911 00:26:25.006713 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66f9ddd4c6-2cr2l" podStartSLOduration=3.757327444 podStartE2EDuration="7.006680875s" podCreationTimestamp="2025-09-11 00:26:18 +0000 UTC" firstStartedPulling="2025-09-11 00:26:19.32347864 +0000 UTC m=+17.512525291" lastFinishedPulling="2025-09-11 00:26:22.572832071 +0000 UTC m=+20.761878722" observedRunningTime="2025-09-11 00:26:22.99938519 +0000 UTC m=+21.188431841" watchObservedRunningTime="2025-09-11 00:26:25.006680875 +0000 UTC m=+23.195727526" Sep 11 00:26:26.915705 kubelet[2726]: E0911 00:26:26.915649 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fj6sx" podUID="65b0dc7b-51bf-4c45-8124-dc6f83a69633" Sep 11 00:26:28.904027 containerd[1558]: time="2025-09-11T00:26:28.903956317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:28.904775 containerd[1558]: time="2025-09-11T00:26:28.904740924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 11 00:26:28.906191 containerd[1558]: 
time="2025-09-11T00:26:28.906131582Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:28.908179 containerd[1558]: time="2025-09-11T00:26:28.908138348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:28.908713 containerd[1558]: time="2025-09-11T00:26:28.908679235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.917180845s" Sep 11 00:26:28.908713 containerd[1558]: time="2025-09-11T00:26:28.908706337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 11 00:26:28.911165 containerd[1558]: time="2025-09-11T00:26:28.911115290Z" level=info msg="CreateContainer within sandbox \"ee4f4c4419bfb073570c76e740903bb98cc5a8edab3c7cbfd248e09348b205b1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 11 00:26:28.916105 kubelet[2726]: E0911 00:26:28.915761 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fj6sx" podUID="65b0dc7b-51bf-4c45-8124-dc6f83a69633" Sep 11 00:26:28.922250 containerd[1558]: time="2025-09-11T00:26:28.922192306Z" level=info msg="Container a1a499f1dafa299ee539cb8b9f779411eb4f495164f9306a0a551487023d163d: CDI devices 
from CRI Config.CDIDevices: []" Sep 11 00:26:28.932246 containerd[1558]: time="2025-09-11T00:26:28.932200569Z" level=info msg="CreateContainer within sandbox \"ee4f4c4419bfb073570c76e740903bb98cc5a8edab3c7cbfd248e09348b205b1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a1a499f1dafa299ee539cb8b9f779411eb4f495164f9306a0a551487023d163d\"" Sep 11 00:26:28.932905 containerd[1558]: time="2025-09-11T00:26:28.932849490Z" level=info msg="StartContainer for \"a1a499f1dafa299ee539cb8b9f779411eb4f495164f9306a0a551487023d163d\"" Sep 11 00:26:28.934488 containerd[1558]: time="2025-09-11T00:26:28.934456795Z" level=info msg="connecting to shim a1a499f1dafa299ee539cb8b9f779411eb4f495164f9306a0a551487023d163d" address="unix:///run/containerd/s/dc20afab4a3c6efdc1ad2ecd0a617858cc72efcfb110e86e09df95d411cab4a7" protocol=ttrpc version=3 Sep 11 00:26:28.964715 systemd[1]: Started cri-containerd-a1a499f1dafa299ee539cb8b9f779411eb4f495164f9306a0a551487023d163d.scope - libcontainer container a1a499f1dafa299ee539cb8b9f779411eb4f495164f9306a0a551487023d163d. Sep 11 00:26:29.023991 containerd[1558]: time="2025-09-11T00:26:29.023943257Z" level=info msg="StartContainer for \"a1a499f1dafa299ee539cb8b9f779411eb4f495164f9306a0a551487023d163d\" returns successfully" Sep 11 00:26:30.806880 kubelet[2726]: I0911 00:26:30.806838 2726 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 11 00:26:30.808494 systemd[1]: cri-containerd-a1a499f1dafa299ee539cb8b9f779411eb4f495164f9306a0a551487023d163d.scope: Deactivated successfully. Sep 11 00:26:30.809246 systemd[1]: cri-containerd-a1a499f1dafa299ee539cb8b9f779411eb4f495164f9306a0a551487023d163d.scope: Consumed 637ms CPU time, 181.5M memory peak, 3.5M read from disk, 171.3M written to disk. 
Sep 11 00:26:30.811506 containerd[1558]: time="2025-09-11T00:26:30.811459336Z" level=info msg="received exit event container_id:\"a1a499f1dafa299ee539cb8b9f779411eb4f495164f9306a0a551487023d163d\" id:\"a1a499f1dafa299ee539cb8b9f779411eb4f495164f9306a0a551487023d163d\" pid:3495 exited_at:{seconds:1757550390 nanos:811180071}"
Sep 11 00:26:30.812084 containerd[1558]: time="2025-09-11T00:26:30.811606142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a1a499f1dafa299ee539cb8b9f779411eb4f495164f9306a0a551487023d163d\" id:\"a1a499f1dafa299ee539cb8b9f779411eb4f495164f9306a0a551487023d163d\" pid:3495 exited_at:{seconds:1757550390 nanos:811180071}"
Sep 11 00:26:30.853692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1a499f1dafa299ee539cb8b9f779411eb4f495164f9306a0a551487023d163d-rootfs.mount: Deactivated successfully.
Sep 11 00:26:30.870810 systemd[1]: Created slice kubepods-burstable-pod84a8df4f_888e_427e_a9f1_99af2a5ac3e2.slice - libcontainer container kubepods-burstable-pod84a8df4f_888e_427e_a9f1_99af2a5ac3e2.slice.
Sep 11 00:26:30.876880 systemd[1]: Created slice kubepods-besteffort-pod9725caed_043b_47ac_99de_adfd77bed83c.slice - libcontainer container kubepods-besteffort-pod9725caed_043b_47ac_99de_adfd77bed83c.slice.
Sep 11 00:26:30.892043 kubelet[2726]: I0911 00:26:30.876792 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e-goldmane-key-pair\") pod \"goldmane-7988f88666-gjlvv\" (UID: \"a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e\") " pod="calico-system/goldmane-7988f88666-gjlvv"
Sep 11 00:26:30.892043 kubelet[2726]: I0911 00:26:30.876830 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e244902-62e4-48ca-b740-012a5f052e01-config-volume\") pod \"coredns-7c65d6cfc9-dk9vh\" (UID: \"9e244902-62e4-48ca-b740-012a5f052e01\") " pod="kube-system/coredns-7c65d6cfc9-dk9vh"
Sep 11 00:26:30.892043 kubelet[2726]: I0911 00:26:30.876852 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5d175666-b65d-40a4-967c-8028874965d4-calico-apiserver-certs\") pod \"calico-apiserver-5f9847cbbd-zvw28\" (UID: \"5d175666-b65d-40a4-967c-8028874965d4\") " pod="calico-apiserver/calico-apiserver-5f9847cbbd-zvw28"
Sep 11 00:26:30.892043 kubelet[2726]: I0911 00:26:30.876874 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/798a8e8f-61de-4cc6-94c8-5e67c1db36b9-calico-apiserver-certs\") pod \"calico-apiserver-5f9847cbbd-g2bf9\" (UID: \"798a8e8f-61de-4cc6-94c8-5e67c1db36b9\") " pod="calico-apiserver/calico-apiserver-5f9847cbbd-g2bf9"
Sep 11 00:26:30.892043 kubelet[2726]: I0911 00:26:30.876895 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5c44\" (UniqueName: \"kubernetes.io/projected/798a8e8f-61de-4cc6-94c8-5e67c1db36b9-kube-api-access-c5c44\") pod \"calico-apiserver-5f9847cbbd-g2bf9\" (UID: \"798a8e8f-61de-4cc6-94c8-5e67c1db36b9\") " pod="calico-apiserver/calico-apiserver-5f9847cbbd-g2bf9"
Sep 11 00:26:30.886357 systemd[1]: Created slice kubepods-besteffort-pod3f5e48d0_599f_43c2_ae6a_e0897ce8a3cf.slice - libcontainer container kubepods-besteffort-pod3f5e48d0_599f_43c2_ae6a_e0897ce8a3cf.slice.
Sep 11 00:26:30.892287 kubelet[2726]: I0911 00:26:30.876914 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9725caed-043b-47ac-99de-adfd77bed83c-whisker-ca-bundle\") pod \"whisker-6cd44c4499-lc92p\" (UID: \"9725caed-043b-47ac-99de-adfd77bed83c\") " pod="calico-system/whisker-6cd44c4499-lc92p"
Sep 11 00:26:30.892287 kubelet[2726]: I0911 00:26:30.876928 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f5e48d0-599f-43c2-ae6a-e0897ce8a3cf-tigera-ca-bundle\") pod \"calico-kube-controllers-795668b449-pdwm4\" (UID: \"3f5e48d0-599f-43c2-ae6a-e0897ce8a3cf\") " pod="calico-system/calico-kube-controllers-795668b449-pdwm4"
Sep 11 00:26:30.892287 kubelet[2726]: I0911 00:26:30.876944 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e-config\") pod \"goldmane-7988f88666-gjlvv\" (UID: \"a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e\") " pod="calico-system/goldmane-7988f88666-gjlvv"
Sep 11 00:26:30.892287 kubelet[2726]: I0911 00:26:30.876958 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9725caed-043b-47ac-99de-adfd77bed83c-whisker-backend-key-pair\") pod \"whisker-6cd44c4499-lc92p\" (UID: \"9725caed-043b-47ac-99de-adfd77bed83c\") " pod="calico-system/whisker-6cd44c4499-lc92p"
Sep 11 00:26:30.892287 kubelet[2726]: I0911 00:26:30.876972 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84a8df4f-888e-427e-a9f1-99af2a5ac3e2-config-volume\") pod \"coredns-7c65d6cfc9-7kwsx\" (UID: \"84a8df4f-888e-427e-a9f1-99af2a5ac3e2\") " pod="kube-system/coredns-7c65d6cfc9-7kwsx"
Sep 11 00:26:30.892454 kubelet[2726]: I0911 00:26:30.876986 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jf4l\" (UniqueName: \"kubernetes.io/projected/3f5e48d0-599f-43c2-ae6a-e0897ce8a3cf-kube-api-access-9jf4l\") pod \"calico-kube-controllers-795668b449-pdwm4\" (UID: \"3f5e48d0-599f-43c2-ae6a-e0897ce8a3cf\") " pod="calico-system/calico-kube-controllers-795668b449-pdwm4"
Sep 11 00:26:30.892454 kubelet[2726]: I0911 00:26:30.877004 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65s4l\" (UniqueName: \"kubernetes.io/projected/a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e-kube-api-access-65s4l\") pod \"goldmane-7988f88666-gjlvv\" (UID: \"a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e\") " pod="calico-system/goldmane-7988f88666-gjlvv"
Sep 11 00:26:30.892454 kubelet[2726]: I0911 00:26:30.877020 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7hss\" (UniqueName: \"kubernetes.io/projected/9e244902-62e4-48ca-b740-012a5f052e01-kube-api-access-g7hss\") pod \"coredns-7c65d6cfc9-dk9vh\" (UID: \"9e244902-62e4-48ca-b740-012a5f052e01\") " pod="kube-system/coredns-7c65d6cfc9-dk9vh"
Sep 11 00:26:30.892454 kubelet[2726]: I0911 00:26:30.877036 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e-goldmane-ca-bundle\") pod \"goldmane-7988f88666-gjlvv\" (UID: \"a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e\") " pod="calico-system/goldmane-7988f88666-gjlvv"
Sep 11 00:26:30.892454 kubelet[2726]: I0911 00:26:30.877051 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh6lb\" (UniqueName: \"kubernetes.io/projected/84a8df4f-888e-427e-a9f1-99af2a5ac3e2-kube-api-access-vh6lb\") pod \"coredns-7c65d6cfc9-7kwsx\" (UID: \"84a8df4f-888e-427e-a9f1-99af2a5ac3e2\") " pod="kube-system/coredns-7c65d6cfc9-7kwsx"
Sep 11 00:26:30.892588 kubelet[2726]: I0911 00:26:30.877067 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfxx7\" (UniqueName: \"kubernetes.io/projected/5d175666-b65d-40a4-967c-8028874965d4-kube-api-access-rfxx7\") pod \"calico-apiserver-5f9847cbbd-zvw28\" (UID: \"5d175666-b65d-40a4-967c-8028874965d4\") " pod="calico-apiserver/calico-apiserver-5f9847cbbd-zvw28"
Sep 11 00:26:30.892588 kubelet[2726]: I0911 00:26:30.877082 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvbww\" (UniqueName: \"kubernetes.io/projected/9725caed-043b-47ac-99de-adfd77bed83c-kube-api-access-tvbww\") pod \"whisker-6cd44c4499-lc92p\" (UID: \"9725caed-043b-47ac-99de-adfd77bed83c\") " pod="calico-system/whisker-6cd44c4499-lc92p"
Sep 11 00:26:30.894363 systemd[1]: Created slice kubepods-besteffort-poda3e3f494_7c5a_4ad7_9bfc_e9b4b133bf7e.slice - libcontainer container kubepods-besteffort-poda3e3f494_7c5a_4ad7_9bfc_e9b4b133bf7e.slice.
Sep 11 00:26:30.903327 systemd[1]: Created slice kubepods-burstable-pod9e244902_62e4_48ca_b740_012a5f052e01.slice - libcontainer container kubepods-burstable-pod9e244902_62e4_48ca_b740_012a5f052e01.slice.
Sep 11 00:26:30.908722 systemd[1]: Created slice kubepods-besteffort-pod5d175666_b65d_40a4_967c_8028874965d4.slice - libcontainer container kubepods-besteffort-pod5d175666_b65d_40a4_967c_8028874965d4.slice.
Sep 11 00:26:30.913276 systemd[1]: Created slice kubepods-besteffort-pod798a8e8f_61de_4cc6_94c8_5e67c1db36b9.slice - libcontainer container kubepods-besteffort-pod798a8e8f_61de_4cc6_94c8_5e67c1db36b9.slice.
Sep 11 00:26:30.920590 systemd[1]: Created slice kubepods-besteffort-pod65b0dc7b_51bf_4c45_8124_dc6f83a69633.slice - libcontainer container kubepods-besteffort-pod65b0dc7b_51bf_4c45_8124_dc6f83a69633.slice.
Sep 11 00:26:30.923275 containerd[1558]: time="2025-09-11T00:26:30.923232713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fj6sx,Uid:65b0dc7b-51bf-4c45-8124-dc6f83a69633,Namespace:calico-system,Attempt:0,}"
Sep 11 00:26:31.194053 containerd[1558]: time="2025-09-11T00:26:31.193906858Z" level=error msg="Failed to destroy network for sandbox \"e045f911754d0c82b89e26c99e22a21b3496efa8acbdff52cf4b2f00d7681ddd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.195592 kubelet[2726]: E0911 00:26:31.195545 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:26:31.196730 containerd[1558]: time="2025-09-11T00:26:31.196076509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cd44c4499-lc92p,Uid:9725caed-043b-47ac-99de-adfd77bed83c,Namespace:calico-system,Attempt:0,}"
Sep 11 00:26:31.196730 containerd[1558]: time="2025-09-11T00:26:31.196270884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fj6sx,Uid:65b0dc7b-51bf-4c45-8124-dc6f83a69633,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e045f911754d0c82b89e26c99e22a21b3496efa8acbdff52cf4b2f00d7681ddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.196730 containerd[1558]: time="2025-09-11T00:26:31.196491669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7kwsx,Uid:84a8df4f-888e-427e-a9f1-99af2a5ac3e2,Namespace:kube-system,Attempt:0,}"
Sep 11 00:26:31.196730 containerd[1558]: time="2025-09-11T00:26:31.196608349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795668b449-pdwm4,Uid:3f5e48d0-599f-43c2-ae6a-e0897ce8a3cf,Namespace:calico-system,Attempt:0,}"
Sep 11 00:26:31.197280 kubelet[2726]: E0911 00:26:31.197045 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e045f911754d0c82b89e26c99e22a21b3496efa8acbdff52cf4b2f00d7681ddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.197280 kubelet[2726]: E0911 00:26:31.197139 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e045f911754d0c82b89e26c99e22a21b3496efa8acbdff52cf4b2f00d7681ddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fj6sx"
Sep 11 00:26:31.197280 kubelet[2726]: E0911 00:26:31.197168 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e045f911754d0c82b89e26c99e22a21b3496efa8acbdff52cf4b2f00d7681ddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fj6sx"
Sep 11 00:26:31.197516 kubelet[2726]: E0911 00:26:31.197213 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fj6sx_calico-system(65b0dc7b-51bf-4c45-8124-dc6f83a69633)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fj6sx_calico-system(65b0dc7b-51bf-4c45-8124-dc6f83a69633)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e045f911754d0c82b89e26c99e22a21b3496efa8acbdff52cf4b2f00d7681ddd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fj6sx" podUID="65b0dc7b-51bf-4c45-8124-dc6f83a69633"
Sep 11 00:26:31.200265 containerd[1558]: time="2025-09-11T00:26:31.200218639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gjlvv,Uid:a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e,Namespace:calico-system,Attempt:0,}"
Sep 11 00:26:31.206226 kubelet[2726]: E0911 00:26:31.206166 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:26:31.208129 containerd[1558]: time="2025-09-11T00:26:31.208037625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk9vh,Uid:9e244902-62e4-48ca-b740-012a5f052e01,Namespace:kube-system,Attempt:0,}"
Sep 11 00:26:31.212808 containerd[1558]: time="2025-09-11T00:26:31.212752693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f9847cbbd-zvw28,Uid:5d175666-b65d-40a4-967c-8028874965d4,Namespace:calico-apiserver,Attempt:0,}"
Sep 11 00:26:31.216037 containerd[1558]: time="2025-09-11T00:26:31.215872211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f9847cbbd-g2bf9,Uid:798a8e8f-61de-4cc6-94c8-5e67c1db36b9,Namespace:calico-apiserver,Attempt:0,}"
Sep 11 00:26:31.294764 containerd[1558]: time="2025-09-11T00:26:31.294712701Z" level=error msg="Failed to destroy network for sandbox \"73d0225a937c6e064fa5a4b7284dda0c24dcd8c9757646ae9b47c724cf593b9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.298514 containerd[1558]: time="2025-09-11T00:26:31.298398273Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cd44c4499-lc92p,Uid:9725caed-043b-47ac-99de-adfd77bed83c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d0225a937c6e064fa5a4b7284dda0c24dcd8c9757646ae9b47c724cf593b9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.299602 kubelet[2726]: E0911 00:26:31.299539 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d0225a937c6e064fa5a4b7284dda0c24dcd8c9757646ae9b47c724cf593b9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.299680 kubelet[2726]: E0911 00:26:31.299618 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d0225a937c6e064fa5a4b7284dda0c24dcd8c9757646ae9b47c724cf593b9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cd44c4499-lc92p"
Sep 11 00:26:31.299680 kubelet[2726]: E0911 00:26:31.299643 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d0225a937c6e064fa5a4b7284dda0c24dcd8c9757646ae9b47c724cf593b9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cd44c4499-lc92p"
Sep 11 00:26:31.299735 kubelet[2726]: E0911 00:26:31.299682 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6cd44c4499-lc92p_calico-system(9725caed-043b-47ac-99de-adfd77bed83c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6cd44c4499-lc92p_calico-system(9725caed-043b-47ac-99de-adfd77bed83c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73d0225a937c6e064fa5a4b7284dda0c24dcd8c9757646ae9b47c724cf593b9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cd44c4499-lc92p" podUID="9725caed-043b-47ac-99de-adfd77bed83c"
Sep 11 00:26:31.308637 containerd[1558]: time="2025-09-11T00:26:31.308587567Z" level=error msg="Failed to destroy network for sandbox \"9d00f0a3d973603101c3d38afafa74919b4dc4bb8f0d963608c42712b3ba4116\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.310030 containerd[1558]: time="2025-09-11T00:26:31.309965499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f9847cbbd-zvw28,Uid:5d175666-b65d-40a4-967c-8028874965d4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d00f0a3d973603101c3d38afafa74919b4dc4bb8f0d963608c42712b3ba4116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.310308 kubelet[2726]: E0911 00:26:31.310246 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d00f0a3d973603101c3d38afafa74919b4dc4bb8f0d963608c42712b3ba4116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.310489 kubelet[2726]: E0911 00:26:31.310336 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d00f0a3d973603101c3d38afafa74919b4dc4bb8f0d963608c42712b3ba4116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f9847cbbd-zvw28"
Sep 11 00:26:31.310489 kubelet[2726]: E0911 00:26:31.310361 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d00f0a3d973603101c3d38afafa74919b4dc4bb8f0d963608c42712b3ba4116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f9847cbbd-zvw28"
Sep 11 00:26:31.310489 kubelet[2726]: E0911 00:26:31.310411 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f9847cbbd-zvw28_calico-apiserver(5d175666-b65d-40a4-967c-8028874965d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f9847cbbd-zvw28_calico-apiserver(5d175666-b65d-40a4-967c-8028874965d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d00f0a3d973603101c3d38afafa74919b4dc4bb8f0d963608c42712b3ba4116\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f9847cbbd-zvw28" podUID="5d175666-b65d-40a4-967c-8028874965d4"
Sep 11 00:26:31.321701 containerd[1558]: time="2025-09-11T00:26:31.321565375Z" level=error msg="Failed to destroy network for sandbox \"8d2b5ab15dcdf1ecccaa31e68d9b8b531c0bf565874f9b6be5a4ddedd3beb325\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.324071 containerd[1558]: time="2025-09-11T00:26:31.324043646Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7kwsx,Uid:84a8df4f-888e-427e-a9f1-99af2a5ac3e2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d2b5ab15dcdf1ecccaa31e68d9b8b531c0bf565874f9b6be5a4ddedd3beb325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.324607 kubelet[2726]: E0911 00:26:31.324554 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d2b5ab15dcdf1ecccaa31e68d9b8b531c0bf565874f9b6be5a4ddedd3beb325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.324736 kubelet[2726]: E0911 00:26:31.324718 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d2b5ab15dcdf1ecccaa31e68d9b8b531c0bf565874f9b6be5a4ddedd3beb325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7kwsx"
Sep 11 00:26:31.324815 kubelet[2726]: E0911 00:26:31.324799 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d2b5ab15dcdf1ecccaa31e68d9b8b531c0bf565874f9b6be5a4ddedd3beb325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7kwsx"
Sep 11 00:26:31.324940 kubelet[2726]: E0911 00:26:31.324902 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-7kwsx_kube-system(84a8df4f-888e-427e-a9f1-99af2a5ac3e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-7kwsx_kube-system(84a8df4f-888e-427e-a9f1-99af2a5ac3e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d2b5ab15dcdf1ecccaa31e68d9b8b531c0bf565874f9b6be5a4ddedd3beb325\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7kwsx" podUID="84a8df4f-888e-427e-a9f1-99af2a5ac3e2"
Sep 11 00:26:31.328046 containerd[1558]: time="2025-09-11T00:26:31.327994958Z" level=error msg="Failed to destroy network for sandbox \"06b1b4776e58f4784ce8b56eeb7a556b17a8735db3b1b57e2730001b0ec1c624\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.329602 containerd[1558]: time="2025-09-11T00:26:31.329506170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795668b449-pdwm4,Uid:3f5e48d0-599f-43c2-ae6a-e0897ce8a3cf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06b1b4776e58f4784ce8b56eeb7a556b17a8735db3b1b57e2730001b0ec1c624\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.329770 kubelet[2726]: E0911 00:26:31.329731 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06b1b4776e58f4784ce8b56eeb7a556b17a8735db3b1b57e2730001b0ec1c624\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.329819 kubelet[2726]: E0911 00:26:31.329792 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06b1b4776e58f4784ce8b56eeb7a556b17a8735db3b1b57e2730001b0ec1c624\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-795668b449-pdwm4"
Sep 11 00:26:31.329819 kubelet[2726]: E0911 00:26:31.329812 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06b1b4776e58f4784ce8b56eeb7a556b17a8735db3b1b57e2730001b0ec1c624\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-795668b449-pdwm4"
Sep 11 00:26:31.331565 kubelet[2726]: E0911 00:26:31.329849 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-795668b449-pdwm4_calico-system(3f5e48d0-599f-43c2-ae6a-e0897ce8a3cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-795668b449-pdwm4_calico-system(3f5e48d0-599f-43c2-ae6a-e0897ce8a3cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06b1b4776e58f4784ce8b56eeb7a556b17a8735db3b1b57e2730001b0ec1c624\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-795668b449-pdwm4" podUID="3f5e48d0-599f-43c2-ae6a-e0897ce8a3cf"
Sep 11 00:26:31.340189 containerd[1558]: time="2025-09-11T00:26:31.340127646Z" level=error msg="Failed to destroy network for sandbox \"a53c07d153fdf37cb59e8f1fda5b566ed60ef2b1f668a3cf0bd5a1fd4971ef3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.341363 containerd[1558]: time="2025-09-11T00:26:31.341242824Z" level=error msg="Failed to destroy network for sandbox \"075637382b7f0879b72774b8737a9bf4cb6a02b410379b1eec5c69004b30087c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.341797 containerd[1558]: time="2025-09-11T00:26:31.341575670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk9vh,Uid:9e244902-62e4-48ca-b740-012a5f052e01,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53c07d153fdf37cb59e8f1fda5b566ed60ef2b1f668a3cf0bd5a1fd4971ef3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.342140 kubelet[2726]: E0911 00:26:31.342076 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53c07d153fdf37cb59e8f1fda5b566ed60ef2b1f668a3cf0bd5a1fd4971ef3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.342140 kubelet[2726]: E0911 00:26:31.342142 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53c07d153fdf37cb59e8f1fda5b566ed60ef2b1f668a3cf0bd5a1fd4971ef3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk9vh"
Sep 11 00:26:31.342443 kubelet[2726]: E0911 00:26:31.342161 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53c07d153fdf37cb59e8f1fda5b566ed60ef2b1f668a3cf0bd5a1fd4971ef3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk9vh"
Sep 11 00:26:31.342443 kubelet[2726]: E0911 00:26:31.342218 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dk9vh_kube-system(9e244902-62e4-48ca-b740-012a5f052e01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dk9vh_kube-system(9e244902-62e4-48ca-b740-012a5f052e01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a53c07d153fdf37cb59e8f1fda5b566ed60ef2b1f668a3cf0bd5a1fd4971ef3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dk9vh" podUID="9e244902-62e4-48ca-b740-012a5f052e01"
Sep 11 00:26:31.343098 containerd[1558]: time="2025-09-11T00:26:31.343064790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f9847cbbd-g2bf9,Uid:798a8e8f-61de-4cc6-94c8-5e67c1db36b9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"075637382b7f0879b72774b8737a9bf4cb6a02b410379b1eec5c69004b30087c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.343405 kubelet[2726]: E0911 00:26:31.343370 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"075637382b7f0879b72774b8737a9bf4cb6a02b410379b1eec5c69004b30087c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.343660 kubelet[2726]: E0911 00:26:31.343528 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"075637382b7f0879b72774b8737a9bf4cb6a02b410379b1eec5c69004b30087c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f9847cbbd-g2bf9"
Sep 11 00:26:31.343660 kubelet[2726]: E0911 00:26:31.343559 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"075637382b7f0879b72774b8737a9bf4cb6a02b410379b1eec5c69004b30087c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f9847cbbd-g2bf9"
Sep 11 00:26:31.343860 kubelet[2726]: E0911 00:26:31.343785 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f9847cbbd-g2bf9_calico-apiserver(798a8e8f-61de-4cc6-94c8-5e67c1db36b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f9847cbbd-g2bf9_calico-apiserver(798a8e8f-61de-4cc6-94c8-5e67c1db36b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"075637382b7f0879b72774b8737a9bf4cb6a02b410379b1eec5c69004b30087c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f9847cbbd-g2bf9" podUID="798a8e8f-61de-4cc6-94c8-5e67c1db36b9"
Sep 11 00:26:31.343941 containerd[1558]: time="2025-09-11T00:26:31.343896586Z" level=error msg="Failed to destroy network for sandbox \"d529fe64309ad130c0df7d73d5ce7dd9f2f953f649056c9e65829d76a329cb74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.345604 containerd[1558]: time="2025-09-11T00:26:31.345509709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gjlvv,Uid:a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d529fe64309ad130c0df7d73d5ce7dd9f2f953f649056c9e65829d76a329cb74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.345788 kubelet[2726]: E0911 00:26:31.345729 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d529fe64309ad130c0df7d73d5ce7dd9f2f953f649056c9e65829d76a329cb74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:26:31.345836 kubelet[2726]: E0911 00:26:31.345821 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d529fe64309ad130c0df7d73d5ce7dd9f2f953f649056c9e65829d76a329cb74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-gjlvv"
Sep 11 00:26:31.345886 kubelet[2726]: E0911 00:26:31.345842 2726 kuberuntime_manager.go:1170]
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d529fe64309ad130c0df7d73d5ce7dd9f2f953f649056c9e65829d76a329cb74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-gjlvv" Sep 11 00:26:31.345886 kubelet[2726]: E0911 00:26:31.345884 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-gjlvv_calico-system(a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-gjlvv_calico-system(a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d529fe64309ad130c0df7d73d5ce7dd9f2f953f649056c9e65829d76a329cb74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-gjlvv" podUID="a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e" Sep 11 00:26:31.856156 systemd[1]: run-netns-cni\x2dc09fc099\x2d59ca\x2d2db1\x2d4ebf\x2d4f9e0ce9effd.mount: Deactivated successfully. Sep 11 00:26:32.017372 containerd[1558]: time="2025-09-11T00:26:32.016987561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 11 00:26:37.512203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1688199387.mount: Deactivated successfully. 
Sep 11 00:26:38.377307 containerd[1558]: time="2025-09-11T00:26:38.377239589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:38.378121 containerd[1558]: time="2025-09-11T00:26:38.378071002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 11 00:26:38.379540 containerd[1558]: time="2025-09-11T00:26:38.379494586Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:38.381979 containerd[1558]: time="2025-09-11T00:26:38.381906398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:38.382632 containerd[1558]: time="2025-09-11T00:26:38.382587157Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.365507873s" Sep 11 00:26:38.382680 containerd[1558]: time="2025-09-11T00:26:38.382638233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 11 00:26:38.394900 containerd[1558]: time="2025-09-11T00:26:38.394759928Z" level=info msg="CreateContainer within sandbox \"ee4f4c4419bfb073570c76e740903bb98cc5a8edab3c7cbfd248e09348b205b1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 11 00:26:38.407850 containerd[1558]: time="2025-09-11T00:26:38.407793548Z" level=info msg="Container 
4f09815846babb4f355890889ed74a54fdba5f8c588e26f5244c2617aaef7ef4: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:26:38.422266 containerd[1558]: time="2025-09-11T00:26:38.422209304Z" level=info msg="CreateContainer within sandbox \"ee4f4c4419bfb073570c76e740903bb98cc5a8edab3c7cbfd248e09348b205b1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4f09815846babb4f355890889ed74a54fdba5f8c588e26f5244c2617aaef7ef4\"" Sep 11 00:26:38.424450 containerd[1558]: time="2025-09-11T00:26:38.422942391Z" level=info msg="StartContainer for \"4f09815846babb4f355890889ed74a54fdba5f8c588e26f5244c2617aaef7ef4\"" Sep 11 00:26:38.424450 containerd[1558]: time="2025-09-11T00:26:38.424337674Z" level=info msg="connecting to shim 4f09815846babb4f355890889ed74a54fdba5f8c588e26f5244c2617aaef7ef4" address="unix:///run/containerd/s/dc20afab4a3c6efdc1ad2ecd0a617858cc72efcfb110e86e09df95d411cab4a7" protocol=ttrpc version=3 Sep 11 00:26:38.443715 systemd[1]: Started cri-containerd-4f09815846babb4f355890889ed74a54fdba5f8c588e26f5244c2617aaef7ef4.scope - libcontainer container 4f09815846babb4f355890889ed74a54fdba5f8c588e26f5244c2617aaef7ef4. Sep 11 00:26:38.511311 containerd[1558]: time="2025-09-11T00:26:38.511249645Z" level=info msg="StartContainer for \"4f09815846babb4f355890889ed74a54fdba5f8c588e26f5244c2617aaef7ef4\" returns successfully" Sep 11 00:26:38.582050 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 11 00:26:38.582159 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 11 00:26:38.727644 kubelet[2726]: I0911 00:26:38.727215 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvbww\" (UniqueName: \"kubernetes.io/projected/9725caed-043b-47ac-99de-adfd77bed83c-kube-api-access-tvbww\") pod \"9725caed-043b-47ac-99de-adfd77bed83c\" (UID: \"9725caed-043b-47ac-99de-adfd77bed83c\") " Sep 11 00:26:38.727644 kubelet[2726]: I0911 00:26:38.727263 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9725caed-043b-47ac-99de-adfd77bed83c-whisker-backend-key-pair\") pod \"9725caed-043b-47ac-99de-adfd77bed83c\" (UID: \"9725caed-043b-47ac-99de-adfd77bed83c\") " Sep 11 00:26:38.727644 kubelet[2726]: I0911 00:26:38.727284 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9725caed-043b-47ac-99de-adfd77bed83c-whisker-ca-bundle\") pod \"9725caed-043b-47ac-99de-adfd77bed83c\" (UID: \"9725caed-043b-47ac-99de-adfd77bed83c\") " Sep 11 00:26:38.728116 kubelet[2726]: I0911 00:26:38.727754 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9725caed-043b-47ac-99de-adfd77bed83c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9725caed-043b-47ac-99de-adfd77bed83c" (UID: "9725caed-043b-47ac-99de-adfd77bed83c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 11 00:26:38.736365 systemd[1]: var-lib-kubelet-pods-9725caed\x2d043b\x2d47ac\x2d99de\x2dadfd77bed83c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtvbww.mount: Deactivated successfully. Sep 11 00:26:38.737773 systemd[1]: var-lib-kubelet-pods-9725caed\x2d043b\x2d47ac\x2d99de\x2dadfd77bed83c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 11 00:26:38.740399 kubelet[2726]: I0911 00:26:38.740211 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9725caed-043b-47ac-99de-adfd77bed83c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9725caed-043b-47ac-99de-adfd77bed83c" (UID: "9725caed-043b-47ac-99de-adfd77bed83c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 11 00:26:38.740399 kubelet[2726]: I0911 00:26:38.740360 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9725caed-043b-47ac-99de-adfd77bed83c-kube-api-access-tvbww" (OuterVolumeSpecName: "kube-api-access-tvbww") pod "9725caed-043b-47ac-99de-adfd77bed83c" (UID: "9725caed-043b-47ac-99de-adfd77bed83c"). InnerVolumeSpecName "kube-api-access-tvbww". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 11 00:26:38.827983 kubelet[2726]: I0911 00:26:38.827928 2726 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvbww\" (UniqueName: \"kubernetes.io/projected/9725caed-043b-47ac-99de-adfd77bed83c-kube-api-access-tvbww\") on node \"localhost\" DevicePath \"\"" Sep 11 00:26:38.828206 kubelet[2726]: I0911 00:26:38.828183 2726 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9725caed-043b-47ac-99de-adfd77bed83c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 11 00:26:38.828206 kubelet[2726]: I0911 00:26:38.828206 2726 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9725caed-043b-47ac-99de-adfd77bed83c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 11 00:26:39.058899 systemd[1]: Removed slice kubepods-besteffort-pod9725caed_043b_47ac_99de_adfd77bed83c.slice - libcontainer container kubepods-besteffort-pod9725caed_043b_47ac_99de_adfd77bed83c.slice. 
Sep 11 00:26:39.061547 kubelet[2726]: I0911 00:26:39.061452 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cdxqm" podStartSLOduration=1.378764057 podStartE2EDuration="20.061416091s" podCreationTimestamp="2025-09-11 00:26:19 +0000 UTC" firstStartedPulling="2025-09-11 00:26:19.700768299 +0000 UTC m=+17.889814950" lastFinishedPulling="2025-09-11 00:26:38.383420333 +0000 UTC m=+36.572466984" observedRunningTime="2025-09-11 00:26:39.060690738 +0000 UTC m=+37.249737389" watchObservedRunningTime="2025-09-11 00:26:39.061416091 +0000 UTC m=+37.250462742" Sep 11 00:26:39.106056 systemd[1]: Created slice kubepods-besteffort-pod7622aa29_c9b2_4a83_b0e3_b8ca3f89e3c0.slice - libcontainer container kubepods-besteffort-pod7622aa29_c9b2_4a83_b0e3_b8ca3f89e3c0.slice. Sep 11 00:26:39.129902 kubelet[2726]: I0911 00:26:39.129852 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7622aa29-c9b2-4a83-b0e3-b8ca3f89e3c0-whisker-backend-key-pair\") pod \"whisker-75478f78c8-td8h7\" (UID: \"7622aa29-c9b2-4a83-b0e3-b8ca3f89e3c0\") " pod="calico-system/whisker-75478f78c8-td8h7" Sep 11 00:26:39.129902 kubelet[2726]: I0911 00:26:39.129904 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhm9v\" (UniqueName: \"kubernetes.io/projected/7622aa29-c9b2-4a83-b0e3-b8ca3f89e3c0-kube-api-access-rhm9v\") pod \"whisker-75478f78c8-td8h7\" (UID: \"7622aa29-c9b2-4a83-b0e3-b8ca3f89e3c0\") " pod="calico-system/whisker-75478f78c8-td8h7" Sep 11 00:26:39.130132 kubelet[2726]: I0911 00:26:39.129953 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7622aa29-c9b2-4a83-b0e3-b8ca3f89e3c0-whisker-ca-bundle\") pod \"whisker-75478f78c8-td8h7\" (UID: 
\"7622aa29-c9b2-4a83-b0e3-b8ca3f89e3c0\") " pod="calico-system/whisker-75478f78c8-td8h7" Sep 11 00:26:39.411678 containerd[1558]: time="2025-09-11T00:26:39.411537826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75478f78c8-td8h7,Uid:7622aa29-c9b2-4a83-b0e3-b8ca3f89e3c0,Namespace:calico-system,Attempt:0,}" Sep 11 00:26:39.568623 systemd-networkd[1464]: calic95402e24ce: Link UP Sep 11 00:26:39.568929 systemd-networkd[1464]: calic95402e24ce: Gained carrier Sep 11 00:26:39.584662 containerd[1558]: 2025-09-11 00:26:39.434 [INFO][3879] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:26:39.584662 containerd[1558]: 2025-09-11 00:26:39.454 [INFO][3879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--75478f78c8--td8h7-eth0 whisker-75478f78c8- calico-system 7622aa29-c9b2-4a83-b0e3-b8ca3f89e3c0 878 0 2025-09-11 00:26:39 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:75478f78c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-75478f78c8-td8h7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic95402e24ce [] [] }} ContainerID="413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" Namespace="calico-system" Pod="whisker-75478f78c8-td8h7" WorkloadEndpoint="localhost-k8s-whisker--75478f78c8--td8h7-" Sep 11 00:26:39.584662 containerd[1558]: 2025-09-11 00:26:39.454 [INFO][3879] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" Namespace="calico-system" Pod="whisker-75478f78c8-td8h7" WorkloadEndpoint="localhost-k8s-whisker--75478f78c8--td8h7-eth0" Sep 11 00:26:39.584662 containerd[1558]: 2025-09-11 00:26:39.520 [INFO][3894] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" HandleID="k8s-pod-network.413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" Workload="localhost-k8s-whisker--75478f78c8--td8h7-eth0" Sep 11 00:26:39.584951 containerd[1558]: 2025-09-11 00:26:39.521 [INFO][3894] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" HandleID="k8s-pod-network.413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" Workload="localhost-k8s-whisker--75478f78c8--td8h7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000409a80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-75478f78c8-td8h7", "timestamp":"2025-09-11 00:26:39.520349785 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:26:39.584951 containerd[1558]: 2025-09-11 00:26:39.521 [INFO][3894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:26:39.584951 containerd[1558]: 2025-09-11 00:26:39.521 [INFO][3894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:26:39.584951 containerd[1558]: 2025-09-11 00:26:39.521 [INFO][3894] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:26:39.584951 containerd[1558]: 2025-09-11 00:26:39.530 [INFO][3894] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" host="localhost" Sep 11 00:26:39.584951 containerd[1558]: 2025-09-11 00:26:39.536 [INFO][3894] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:26:39.584951 containerd[1558]: 2025-09-11 00:26:39.540 [INFO][3894] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:26:39.584951 containerd[1558]: 2025-09-11 00:26:39.543 [INFO][3894] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:39.584951 containerd[1558]: 2025-09-11 00:26:39.544 [INFO][3894] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:39.584951 containerd[1558]: 2025-09-11 00:26:39.544 [INFO][3894] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" host="localhost" Sep 11 00:26:39.585198 containerd[1558]: 2025-09-11 00:26:39.546 [INFO][3894] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354 Sep 11 00:26:39.585198 containerd[1558]: 2025-09-11 00:26:39.549 [INFO][3894] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" host="localhost" Sep 11 00:26:39.585198 containerd[1558]: 2025-09-11 00:26:39.556 [INFO][3894] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" host="localhost" Sep 11 00:26:39.585198 containerd[1558]: 2025-09-11 00:26:39.556 [INFO][3894] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" host="localhost" Sep 11 00:26:39.585198 containerd[1558]: 2025-09-11 00:26:39.556 [INFO][3894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:26:39.585198 containerd[1558]: 2025-09-11 00:26:39.556 [INFO][3894] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" HandleID="k8s-pod-network.413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" Workload="localhost-k8s-whisker--75478f78c8--td8h7-eth0" Sep 11 00:26:39.585318 containerd[1558]: 2025-09-11 00:26:39.560 [INFO][3879] cni-plugin/k8s.go 418: Populated endpoint ContainerID="413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" Namespace="calico-system" Pod="whisker-75478f78c8-td8h7" WorkloadEndpoint="localhost-k8s-whisker--75478f78c8--td8h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--75478f78c8--td8h7-eth0", GenerateName:"whisker-75478f78c8-", Namespace:"calico-system", SelfLink:"", UID:"7622aa29-c9b2-4a83-b0e3-b8ca3f89e3c0", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75478f78c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-75478f78c8-td8h7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic95402e24ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:39.585318 containerd[1558]: 2025-09-11 00:26:39.560 [INFO][3879] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" Namespace="calico-system" Pod="whisker-75478f78c8-td8h7" WorkloadEndpoint="localhost-k8s-whisker--75478f78c8--td8h7-eth0" Sep 11 00:26:39.585399 containerd[1558]: 2025-09-11 00:26:39.560 [INFO][3879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic95402e24ce ContainerID="413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" Namespace="calico-system" Pod="whisker-75478f78c8-td8h7" WorkloadEndpoint="localhost-k8s-whisker--75478f78c8--td8h7-eth0" Sep 11 00:26:39.585399 containerd[1558]: 2025-09-11 00:26:39.569 [INFO][3879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" Namespace="calico-system" Pod="whisker-75478f78c8-td8h7" WorkloadEndpoint="localhost-k8s-whisker--75478f78c8--td8h7-eth0" Sep 11 00:26:39.585491 containerd[1558]: 2025-09-11 00:26:39.570 [INFO][3879] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" Namespace="calico-system" Pod="whisker-75478f78c8-td8h7" 
WorkloadEndpoint="localhost-k8s-whisker--75478f78c8--td8h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--75478f78c8--td8h7-eth0", GenerateName:"whisker-75478f78c8-", Namespace:"calico-system", SelfLink:"", UID:"7622aa29-c9b2-4a83-b0e3-b8ca3f89e3c0", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75478f78c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354", Pod:"whisker-75478f78c8-td8h7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic95402e24ce", MAC:"36:e1:69:c1:0c:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:39.585540 containerd[1558]: 2025-09-11 00:26:39.578 [INFO][3879] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" Namespace="calico-system" Pod="whisker-75478f78c8-td8h7" WorkloadEndpoint="localhost-k8s-whisker--75478f78c8--td8h7-eth0" Sep 11 00:26:39.701334 containerd[1558]: time="2025-09-11T00:26:39.701151032Z" level=info msg="connecting to shim 
413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354" address="unix:///run/containerd/s/94fc2a615682e2dda3868db340c240d965935bf910e115b3f360438e43251aa4" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:26:39.734774 systemd[1]: Started cri-containerd-413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354.scope - libcontainer container 413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354. Sep 11 00:26:39.748177 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:26:39.777642 containerd[1558]: time="2025-09-11T00:26:39.777600149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75478f78c8-td8h7,Uid:7622aa29-c9b2-4a83-b0e3-b8ca3f89e3c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354\"" Sep 11 00:26:39.779614 containerd[1558]: time="2025-09-11T00:26:39.779573477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 11 00:26:39.921458 kubelet[2726]: I0911 00:26:39.921071 2726 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9725caed-043b-47ac-99de-adfd77bed83c" path="/var/lib/kubelet/pods/9725caed-043b-47ac-99de-adfd77bed83c/volumes" Sep 11 00:26:41.599726 systemd-networkd[1464]: calic95402e24ce: Gained IPv6LL Sep 11 00:26:42.766924 containerd[1558]: time="2025-09-11T00:26:42.766860432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:42.767569 containerd[1558]: time="2025-09-11T00:26:42.767538426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 11 00:26:42.768670 containerd[1558]: time="2025-09-11T00:26:42.768635746Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:42.770569 containerd[1558]: time="2025-09-11T00:26:42.770536737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:42.771154 containerd[1558]: time="2025-09-11T00:26:42.771113450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.991497113s" Sep 11 00:26:42.771187 containerd[1558]: time="2025-09-11T00:26:42.771153986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 11 00:26:42.773333 containerd[1558]: time="2025-09-11T00:26:42.773289437Z" level=info msg="CreateContainer within sandbox \"413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 11 00:26:42.779805 containerd[1558]: time="2025-09-11T00:26:42.779756012Z" level=info msg="Container d0c87e72b96543991f2e9a7f14fe6881b849e2d3ba8d40e188e264882d327835: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:26:42.791011 containerd[1558]: time="2025-09-11T00:26:42.790963212Z" level=info msg="CreateContainer within sandbox \"413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d0c87e72b96543991f2e9a7f14fe6881b849e2d3ba8d40e188e264882d327835\"" Sep 11 00:26:42.791585 containerd[1558]: time="2025-09-11T00:26:42.791546889Z" level=info msg="StartContainer for 
\"d0c87e72b96543991f2e9a7f14fe6881b849e2d3ba8d40e188e264882d327835\"" Sep 11 00:26:42.792977 containerd[1558]: time="2025-09-11T00:26:42.792946437Z" level=info msg="connecting to shim d0c87e72b96543991f2e9a7f14fe6881b849e2d3ba8d40e188e264882d327835" address="unix:///run/containerd/s/94fc2a615682e2dda3868db340c240d965935bf910e115b3f360438e43251aa4" protocol=ttrpc version=3 Sep 11 00:26:42.818600 systemd[1]: Started cri-containerd-d0c87e72b96543991f2e9a7f14fe6881b849e2d3ba8d40e188e264882d327835.scope - libcontainer container d0c87e72b96543991f2e9a7f14fe6881b849e2d3ba8d40e188e264882d327835. Sep 11 00:26:42.894716 containerd[1558]: time="2025-09-11T00:26:42.894627494Z" level=info msg="StartContainer for \"d0c87e72b96543991f2e9a7f14fe6881b849e2d3ba8d40e188e264882d327835\" returns successfully" Sep 11 00:26:42.897901 containerd[1558]: time="2025-09-11T00:26:42.897861117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 11 00:26:42.916153 kubelet[2726]: E0911 00:26:42.916094 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:42.916767 containerd[1558]: time="2025-09-11T00:26:42.916723024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk9vh,Uid:9e244902-62e4-48ca-b740-012a5f052e01,Namespace:kube-system,Attempt:0,}" Sep 11 00:26:42.917132 containerd[1558]: time="2025-09-11T00:26:42.916972412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795668b449-pdwm4,Uid:3f5e48d0-599f-43c2-ae6a-e0897ce8a3cf,Namespace:calico-system,Attempt:0,}" Sep 11 00:26:43.034728 systemd-networkd[1464]: cali3e64daf78c2: Link UP Sep 11 00:26:43.035777 systemd-networkd[1464]: cali3e64daf78c2: Gained carrier Sep 11 00:26:43.048178 containerd[1558]: 2025-09-11 00:26:42.954 [INFO][4153] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 
00:26:43.048178 containerd[1558]: 2025-09-11 00:26:42.970 [INFO][4153] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--795668b449--pdwm4-eth0 calico-kube-controllers-795668b449- calico-system 3f5e48d0-599f-43c2-ae6a-e0897ce8a3cf 813 0 2025-09-11 00:26:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:795668b449 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-795668b449-pdwm4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3e64daf78c2 [] [] }} ContainerID="ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" Namespace="calico-system" Pod="calico-kube-controllers-795668b449-pdwm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795668b449--pdwm4-" Sep 11 00:26:43.048178 containerd[1558]: 2025-09-11 00:26:42.970 [INFO][4153] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" Namespace="calico-system" Pod="calico-kube-controllers-795668b449-pdwm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795668b449--pdwm4-eth0" Sep 11 00:26:43.048178 containerd[1558]: 2025-09-11 00:26:42.998 [INFO][4176] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" HandleID="k8s-pod-network.ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" Workload="localhost-k8s-calico--kube--controllers--795668b449--pdwm4-eth0" Sep 11 00:26:43.048523 containerd[1558]: 2025-09-11 00:26:42.998 [INFO][4176] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" 
HandleID="k8s-pod-network.ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" Workload="localhost-k8s-calico--kube--controllers--795668b449--pdwm4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001346c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-795668b449-pdwm4", "timestamp":"2025-09-11 00:26:42.998396493 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:26:43.048523 containerd[1558]: 2025-09-11 00:26:42.998 [INFO][4176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:26:43.048523 containerd[1558]: 2025-09-11 00:26:42.998 [INFO][4176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:26:43.048523 containerd[1558]: 2025-09-11 00:26:42.998 [INFO][4176] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:26:43.048523 containerd[1558]: 2025-09-11 00:26:43.006 [INFO][4176] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" host="localhost" Sep 11 00:26:43.048523 containerd[1558]: 2025-09-11 00:26:43.010 [INFO][4176] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:26:43.048523 containerd[1558]: 2025-09-11 00:26:43.013 [INFO][4176] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:26:43.048523 containerd[1558]: 2025-09-11 00:26:43.015 [INFO][4176] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:43.048523 containerd[1558]: 2025-09-11 00:26:43.017 [INFO][4176] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:43.048523 
containerd[1558]: 2025-09-11 00:26:43.017 [INFO][4176] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" host="localhost" Sep 11 00:26:43.048920 containerd[1558]: 2025-09-11 00:26:43.018 [INFO][4176] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec Sep 11 00:26:43.048920 containerd[1558]: 2025-09-11 00:26:43.021 [INFO][4176] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" host="localhost" Sep 11 00:26:43.048920 containerd[1558]: 2025-09-11 00:26:43.027 [INFO][4176] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" host="localhost" Sep 11 00:26:43.048920 containerd[1558]: 2025-09-11 00:26:43.027 [INFO][4176] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" host="localhost" Sep 11 00:26:43.048920 containerd[1558]: 2025-09-11 00:26:43.027 [INFO][4176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:26:43.048920 containerd[1558]: 2025-09-11 00:26:43.027 [INFO][4176] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" HandleID="k8s-pod-network.ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" Workload="localhost-k8s-calico--kube--controllers--795668b449--pdwm4-eth0" Sep 11 00:26:43.049139 containerd[1558]: 2025-09-11 00:26:43.031 [INFO][4153] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" Namespace="calico-system" Pod="calico-kube-controllers-795668b449-pdwm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795668b449--pdwm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--795668b449--pdwm4-eth0", GenerateName:"calico-kube-controllers-795668b449-", Namespace:"calico-system", SelfLink:"", UID:"3f5e48d0-599f-43c2-ae6a-e0897ce8a3cf", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"795668b449", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-795668b449-pdwm4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3e64daf78c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:43.049227 containerd[1558]: 2025-09-11 00:26:43.031 [INFO][4153] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" Namespace="calico-system" Pod="calico-kube-controllers-795668b449-pdwm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795668b449--pdwm4-eth0" Sep 11 00:26:43.049227 containerd[1558]: 2025-09-11 00:26:43.031 [INFO][4153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e64daf78c2 ContainerID="ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" Namespace="calico-system" Pod="calico-kube-controllers-795668b449-pdwm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795668b449--pdwm4-eth0" Sep 11 00:26:43.049227 containerd[1558]: 2025-09-11 00:26:43.035 [INFO][4153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" Namespace="calico-system" Pod="calico-kube-controllers-795668b449-pdwm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795668b449--pdwm4-eth0" Sep 11 00:26:43.049328 containerd[1558]: 2025-09-11 00:26:43.036 [INFO][4153] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" Namespace="calico-system" Pod="calico-kube-controllers-795668b449-pdwm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795668b449--pdwm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--795668b449--pdwm4-eth0", GenerateName:"calico-kube-controllers-795668b449-", Namespace:"calico-system", SelfLink:"", UID:"3f5e48d0-599f-43c2-ae6a-e0897ce8a3cf", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"795668b449", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec", Pod:"calico-kube-controllers-795668b449-pdwm4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3e64daf78c2", MAC:"9e:d4:27:9b:21:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:43.049403 containerd[1558]: 2025-09-11 00:26:43.044 [INFO][4153] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" Namespace="calico-system" Pod="calico-kube-controllers-795668b449-pdwm4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--795668b449--pdwm4-eth0" Sep 11 00:26:43.071634 containerd[1558]: time="2025-09-11T00:26:43.071583333Z" level=info msg="connecting to shim 
ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec" address="unix:///run/containerd/s/e75b5e0eb0e8b278ee538611080c860e0885c7b207ee080cce3e752fcaf5cc22" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:26:43.097738 systemd[1]: Started cri-containerd-ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec.scope - libcontainer container ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec. Sep 11 00:26:43.122669 systemd[1]: Started sshd@8-10.0.0.132:22-10.0.0.1:44882.service - OpenSSH per-connection server daemon (10.0.0.1:44882). Sep 11 00:26:43.127502 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:26:43.137848 systemd-networkd[1464]: califa6e65ed987: Link UP Sep 11 00:26:43.138071 systemd-networkd[1464]: califa6e65ed987: Gained carrier Sep 11 00:26:43.155454 containerd[1558]: 2025-09-11 00:26:42.952 [INFO][4142] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:26:43.155454 containerd[1558]: 2025-09-11 00:26:42.965 [INFO][4142] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--dk9vh-eth0 coredns-7c65d6cfc9- kube-system 9e244902-62e4-48ca-b740-012a5f052e01 812 0 2025-09-11 00:26:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-dk9vh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califa6e65ed987 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dk9vh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dk9vh-" Sep 11 00:26:43.155454 containerd[1558]: 2025-09-11 00:26:42.965 [INFO][4142] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dk9vh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dk9vh-eth0" Sep 11 00:26:43.155454 containerd[1558]: 2025-09-11 00:26:43.000 [INFO][4170] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" HandleID="k8s-pod-network.80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" Workload="localhost-k8s-coredns--7c65d6cfc9--dk9vh-eth0" Sep 11 00:26:43.156671 containerd[1558]: 2025-09-11 00:26:43.001 [INFO][4170] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" HandleID="k8s-pod-network.80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" Workload="localhost-k8s-coredns--7c65d6cfc9--dk9vh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-dk9vh", "timestamp":"2025-09-11 00:26:43.000829303 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:26:43.156671 containerd[1558]: 2025-09-11 00:26:43.001 [INFO][4170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:26:43.156671 containerd[1558]: 2025-09-11 00:26:43.027 [INFO][4170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:26:43.156671 containerd[1558]: 2025-09-11 00:26:43.027 [INFO][4170] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:26:43.156671 containerd[1558]: 2025-09-11 00:26:43.109 [INFO][4170] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" host="localhost" Sep 11 00:26:43.156671 containerd[1558]: 2025-09-11 00:26:43.112 [INFO][4170] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:26:43.156671 containerd[1558]: 2025-09-11 00:26:43.116 [INFO][4170] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:26:43.156671 containerd[1558]: 2025-09-11 00:26:43.118 [INFO][4170] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:43.156671 containerd[1558]: 2025-09-11 00:26:43.119 [INFO][4170] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:43.156671 containerd[1558]: 2025-09-11 00:26:43.119 [INFO][4170] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" host="localhost" Sep 11 00:26:43.157012 containerd[1558]: 2025-09-11 00:26:43.120 [INFO][4170] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b Sep 11 00:26:43.157012 containerd[1558]: 2025-09-11 00:26:43.125 [INFO][4170] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" host="localhost" Sep 11 00:26:43.157012 containerd[1558]: 2025-09-11 00:26:43.132 [INFO][4170] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" host="localhost" Sep 11 00:26:43.157012 containerd[1558]: 2025-09-11 00:26:43.132 [INFO][4170] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" host="localhost" Sep 11 00:26:43.157012 containerd[1558]: 2025-09-11 00:26:43.132 [INFO][4170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:26:43.157012 containerd[1558]: 2025-09-11 00:26:43.132 [INFO][4170] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" HandleID="k8s-pod-network.80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" Workload="localhost-k8s-coredns--7c65d6cfc9--dk9vh-eth0" Sep 11 00:26:43.157145 containerd[1558]: 2025-09-11 00:26:43.135 [INFO][4142] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dk9vh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dk9vh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dk9vh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9e244902-62e4-48ca-b740-012a5f052e01", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-dk9vh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa6e65ed987", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:43.157215 containerd[1558]: 2025-09-11 00:26:43.135 [INFO][4142] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dk9vh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dk9vh-eth0" Sep 11 00:26:43.157215 containerd[1558]: 2025-09-11 00:26:43.135 [INFO][4142] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa6e65ed987 ContainerID="80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dk9vh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dk9vh-eth0" Sep 11 00:26:43.157215 containerd[1558]: 2025-09-11 00:26:43.139 [INFO][4142] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dk9vh" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dk9vh-eth0" Sep 11 00:26:43.157287 containerd[1558]: 2025-09-11 00:26:43.139 [INFO][4142] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dk9vh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dk9vh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dk9vh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9e244902-62e4-48ca-b740-012a5f052e01", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b", Pod:"coredns-7c65d6cfc9-dk9vh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa6e65ed987", MAC:"26:e6:3e:a3:24:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:43.157287 containerd[1558]: 2025-09-11 00:26:43.150 [INFO][4142] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dk9vh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dk9vh-eth0" Sep 11 00:26:43.168119 containerd[1558]: time="2025-09-11T00:26:43.168065891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795668b449-pdwm4,Uid:3f5e48d0-599f-43c2-ae6a-e0897ce8a3cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec\"" Sep 11 00:26:43.193714 containerd[1558]: time="2025-09-11T00:26:43.193394289Z" level=info msg="connecting to shim 80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b" address="unix:///run/containerd/s/c2044a88274b898705e70eb8278ac26451f2e99bd8a18b061904ac4eb0664611" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:26:43.204084 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 44882 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:26:43.206292 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:26:43.213532 systemd-logind[1535]: New session 8 of user core. Sep 11 00:26:43.224606 systemd[1]: Started cri-containerd-80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b.scope - libcontainer container 80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b. Sep 11 00:26:43.225646 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 11 00:26:43.239006 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:26:43.274175 containerd[1558]: time="2025-09-11T00:26:43.274117501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk9vh,Uid:9e244902-62e4-48ca-b740-012a5f052e01,Namespace:kube-system,Attempt:0,} returns sandbox id \"80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b\"" Sep 11 00:26:43.274854 kubelet[2726]: E0911 00:26:43.274817 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:43.277401 containerd[1558]: time="2025-09-11T00:26:43.277364158Z" level=info msg="CreateContainer within sandbox \"80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:26:43.290546 containerd[1558]: time="2025-09-11T00:26:43.290183714Z" level=info msg="Container 94b045a348f375d7dbc4fcfed6c07930590a06cbbf28038f913e6a3a43532b5a: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:26:43.307751 containerd[1558]: time="2025-09-11T00:26:43.307695001Z" level=info msg="CreateContainer within sandbox \"80de1a45e88928c3e1c7c27a33af7404229a12e3c30a4d6aadd55ae299a2a01b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"94b045a348f375d7dbc4fcfed6c07930590a06cbbf28038f913e6a3a43532b5a\"" Sep 11 00:26:43.308730 containerd[1558]: time="2025-09-11T00:26:43.308664442Z" level=info msg="StartContainer for \"94b045a348f375d7dbc4fcfed6c07930590a06cbbf28038f913e6a3a43532b5a\"" Sep 11 00:26:43.313694 containerd[1558]: time="2025-09-11T00:26:43.313643342Z" level=info msg="connecting to shim 94b045a348f375d7dbc4fcfed6c07930590a06cbbf28038f913e6a3a43532b5a" address="unix:///run/containerd/s/c2044a88274b898705e70eb8278ac26451f2e99bd8a18b061904ac4eb0664611" protocol=ttrpc version=3 
Sep 11 00:26:43.335561 systemd[1]: Started cri-containerd-94b045a348f375d7dbc4fcfed6c07930590a06cbbf28038f913e6a3a43532b5a.scope - libcontainer container 94b045a348f375d7dbc4fcfed6c07930590a06cbbf28038f913e6a3a43532b5a. Sep 11 00:26:43.372729 containerd[1558]: time="2025-09-11T00:26:43.372689934Z" level=info msg="StartContainer for \"94b045a348f375d7dbc4fcfed6c07930590a06cbbf28038f913e6a3a43532b5a\" returns successfully" Sep 11 00:26:43.400317 sshd[4285]: Connection closed by 10.0.0.1 port 44882 Sep 11 00:26:43.401234 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Sep 11 00:26:43.407160 systemd-logind[1535]: Session 8 logged out. Waiting for processes to exit. Sep 11 00:26:43.408126 systemd[1]: sshd@8-10.0.0.132:22-10.0.0.1:44882.service: Deactivated successfully. Sep 11 00:26:43.410441 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 00:26:43.412865 systemd-logind[1535]: Removed session 8. Sep 11 00:26:43.916097 containerd[1558]: time="2025-09-11T00:26:43.915978144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f9847cbbd-g2bf9,Uid:798a8e8f-61de-4cc6-94c8-5e67c1db36b9,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:26:44.013647 systemd-networkd[1464]: calibc1a4cf6ddf: Link UP Sep 11 00:26:44.014610 systemd-networkd[1464]: calibc1a4cf6ddf: Gained carrier Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:43.941 [INFO][4367] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:43.953 [INFO][4367] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f9847cbbd--g2bf9-eth0 calico-apiserver-5f9847cbbd- calico-apiserver 798a8e8f-61de-4cc6-94c8-5e67c1db36b9 810 0 2025-09-11 00:26:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f9847cbbd 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f9847cbbd-g2bf9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibc1a4cf6ddf [] [] }} ContainerID="76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-g2bf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--g2bf9-" Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:43.953 [INFO][4367] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-g2bf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--g2bf9-eth0" Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:43.977 [INFO][4382] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" HandleID="k8s-pod-network.76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" Workload="localhost-k8s-calico--apiserver--5f9847cbbd--g2bf9-eth0" Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:43.978 [INFO][4382] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" HandleID="k8s-pod-network.76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" Workload="localhost-k8s-calico--apiserver--5f9847cbbd--g2bf9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011ab20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f9847cbbd-g2bf9", "timestamp":"2025-09-11 00:26:43.977929282 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:43.978 [INFO][4382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:43.978 [INFO][4382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:43.978 [INFO][4382] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:43.984 [INFO][4382] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" host="localhost" Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:43.988 [INFO][4382] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:43.992 [INFO][4382] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:43.994 [INFO][4382] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:43.996 [INFO][4382] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:43.996 [INFO][4382] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" host="localhost" Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:43.997 [INFO][4382] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:44.002 [INFO][4382] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" host="localhost" Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:44.007 [INFO][4382] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" host="localhost" Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:44.007 [INFO][4382] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" host="localhost" Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:44.007 [INFO][4382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:26:44.033592 containerd[1558]: 2025-09-11 00:26:44.007 [INFO][4382] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" HandleID="k8s-pod-network.76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" Workload="localhost-k8s-calico--apiserver--5f9847cbbd--g2bf9-eth0" Sep 11 00:26:44.034197 containerd[1558]: 2025-09-11 00:26:44.011 [INFO][4367] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-g2bf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--g2bf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f9847cbbd--g2bf9-eth0", GenerateName:"calico-apiserver-5f9847cbbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"798a8e8f-61de-4cc6-94c8-5e67c1db36b9", ResourceVersion:"810", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f9847cbbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f9847cbbd-g2bf9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc1a4cf6ddf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:44.034197 containerd[1558]: 2025-09-11 00:26:44.011 [INFO][4367] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-g2bf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--g2bf9-eth0" Sep 11 00:26:44.034197 containerd[1558]: 2025-09-11 00:26:44.011 [INFO][4367] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc1a4cf6ddf ContainerID="76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-g2bf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--g2bf9-eth0" Sep 11 00:26:44.034197 containerd[1558]: 2025-09-11 00:26:44.014 [INFO][4367] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-g2bf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--g2bf9-eth0" Sep 11 00:26:44.034197 containerd[1558]: 2025-09-11 00:26:44.015 [INFO][4367] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-g2bf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--g2bf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f9847cbbd--g2bf9-eth0", GenerateName:"calico-apiserver-5f9847cbbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"798a8e8f-61de-4cc6-94c8-5e67c1db36b9", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f9847cbbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da", Pod:"calico-apiserver-5f9847cbbd-g2bf9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc1a4cf6ddf", MAC:"1a:5b:a9:05:c5:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:44.034197 containerd[1558]: 2025-09-11 00:26:44.028 [INFO][4367] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-g2bf9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--g2bf9-eth0" Sep 11 00:26:44.052971 containerd[1558]: time="2025-09-11T00:26:44.052918177Z" level=info msg="connecting to shim 76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da" address="unix:///run/containerd/s/01ed98dd0bc1174c42a2ff020b76e6481378d03c42004058703eac2a28c462cc" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:26:44.066024 kubelet[2726]: E0911 00:26:44.065977 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:44.083670 systemd[1]: Started cri-containerd-76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da.scope - libcontainer container 76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da. 
Sep 11 00:26:44.103836 kubelet[2726]: I0911 00:26:44.103657 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dk9vh" podStartSLOduration=38.103410336 podStartE2EDuration="38.103410336s" podCreationTimestamp="2025-09-11 00:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:26:44.087533111 +0000 UTC m=+42.276579782" watchObservedRunningTime="2025-09-11 00:26:44.103410336 +0000 UTC m=+42.292456987" Sep 11 00:26:44.104758 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:26:44.150335 containerd[1558]: time="2025-09-11T00:26:44.150292135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f9847cbbd-g2bf9,Uid:798a8e8f-61de-4cc6-94c8-5e67c1db36b9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da\"" Sep 11 00:26:44.450455 kubelet[2726]: I0911 00:26:44.449782 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:26:44.450455 kubelet[2726]: E0911 00:26:44.450227 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:44.544710 systemd-networkd[1464]: cali3e64daf78c2: Gained IPv6LL Sep 11 00:26:44.736635 systemd-networkd[1464]: califa6e65ed987: Gained IPv6LL Sep 11 00:26:44.970419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1156740219.mount: Deactivated successfully. 
Sep 11 00:26:44.993357 containerd[1558]: time="2025-09-11T00:26:44.993257229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:44.994095 containerd[1558]: time="2025-09-11T00:26:44.994053826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 11 00:26:44.995387 containerd[1558]: time="2025-09-11T00:26:44.995348918Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:44.997636 containerd[1558]: time="2025-09-11T00:26:44.997600947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:44.998392 containerd[1558]: time="2025-09-11T00:26:44.998361485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.100446696s" Sep 11 00:26:44.998452 containerd[1558]: time="2025-09-11T00:26:44.998399927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 11 00:26:44.999736 containerd[1558]: time="2025-09-11T00:26:44.999705829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 11 00:26:45.000627 containerd[1558]: time="2025-09-11T00:26:45.000598285Z" level=info msg="CreateContainer within sandbox 
\"413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 11 00:26:45.009075 containerd[1558]: time="2025-09-11T00:26:45.009029808Z" level=info msg="Container 280fd9cce3ffa2e9a82e96bf57531151e465496f9a06e5f7e3206d3274981970: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:26:45.018354 containerd[1558]: time="2025-09-11T00:26:45.018300647Z" level=info msg="CreateContainer within sandbox \"413a5aa18b3aaf464968ebdbe5cd8cedadd2a9a6fd36b06e741ca13c00550354\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"280fd9cce3ffa2e9a82e96bf57531151e465496f9a06e5f7e3206d3274981970\"" Sep 11 00:26:45.018818 containerd[1558]: time="2025-09-11T00:26:45.018791258Z" level=info msg="StartContainer for \"280fd9cce3ffa2e9a82e96bf57531151e465496f9a06e5f7e3206d3274981970\"" Sep 11 00:26:45.019822 containerd[1558]: time="2025-09-11T00:26:45.019796185Z" level=info msg="connecting to shim 280fd9cce3ffa2e9a82e96bf57531151e465496f9a06e5f7e3206d3274981970" address="unix:///run/containerd/s/94fc2a615682e2dda3868db340c240d965935bf910e115b3f360438e43251aa4" protocol=ttrpc version=3 Sep 11 00:26:45.046558 systemd[1]: Started cri-containerd-280fd9cce3ffa2e9a82e96bf57531151e465496f9a06e5f7e3206d3274981970.scope - libcontainer container 280fd9cce3ffa2e9a82e96bf57531151e465496f9a06e5f7e3206d3274981970. 
Sep 11 00:26:45.073211 kubelet[2726]: E0911 00:26:45.073163 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:45.073900 kubelet[2726]: E0911 00:26:45.073856 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:45.101636 containerd[1558]: time="2025-09-11T00:26:45.101598661Z" level=info msg="StartContainer for \"280fd9cce3ffa2e9a82e96bf57531151e465496f9a06e5f7e3206d3274981970\" returns successfully" Sep 11 00:26:45.311637 systemd-networkd[1464]: calibc1a4cf6ddf: Gained IPv6LL Sep 11 00:26:45.582205 systemd-networkd[1464]: vxlan.calico: Link UP Sep 11 00:26:45.583609 systemd-networkd[1464]: vxlan.calico: Gained carrier Sep 11 00:26:45.916743 kubelet[2726]: E0911 00:26:45.916607 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:45.917857 containerd[1558]: time="2025-09-11T00:26:45.917804743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gjlvv,Uid:a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e,Namespace:calico-system,Attempt:0,}" Sep 11 00:26:45.918380 containerd[1558]: time="2025-09-11T00:26:45.918342843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fj6sx,Uid:65b0dc7b-51bf-4c45-8124-dc6f83a69633,Namespace:calico-system,Attempt:0,}" Sep 11 00:26:45.918560 containerd[1558]: time="2025-09-11T00:26:45.918516930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f9847cbbd-zvw28,Uid:5d175666-b65d-40a4-967c-8028874965d4,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:26:45.918718 containerd[1558]: time="2025-09-11T00:26:45.918683963Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-7kwsx,Uid:84a8df4f-888e-427e-a9f1-99af2a5ac3e2,Namespace:kube-system,Attempt:0,}" Sep 11 00:26:46.082185 kubelet[2726]: E0911 00:26:46.082148 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:46.095398 systemd-networkd[1464]: cali74f84d6e7c8: Link UP Sep 11 00:26:46.096972 systemd-networkd[1464]: cali74f84d6e7c8: Gained carrier Sep 11 00:26:46.109015 kubelet[2726]: I0911 00:26:46.108291 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-75478f78c8-td8h7" podStartSLOduration=1.887953087 podStartE2EDuration="7.108230764s" podCreationTimestamp="2025-09-11 00:26:39 +0000 UTC" firstStartedPulling="2025-09-11 00:26:39.779162284 +0000 UTC m=+37.968208935" lastFinishedPulling="2025-09-11 00:26:44.99943996 +0000 UTC m=+43.188486612" observedRunningTime="2025-09-11 00:26:46.099075804 +0000 UTC m=+44.288122455" watchObservedRunningTime="2025-09-11 00:26:46.108230764 +0000 UTC m=+44.297277415" Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:45.994 [INFO][4619] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--gjlvv-eth0 goldmane-7988f88666- calico-system a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e 809 0 2025-09-11 00:26:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-gjlvv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali74f84d6e7c8 [] [] }} ContainerID="0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" Namespace="calico-system" Pod="goldmane-7988f88666-gjlvv" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gjlvv-" Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:45.994 [INFO][4619] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" Namespace="calico-system" Pod="goldmane-7988f88666-gjlvv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gjlvv-eth0" Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.042 [INFO][4692] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" HandleID="k8s-pod-network.0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" Workload="localhost-k8s-goldmane--7988f88666--gjlvv-eth0" Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.043 [INFO][4692] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" HandleID="k8s-pod-network.0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" Workload="localhost-k8s-goldmane--7988f88666--gjlvv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e6f00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-gjlvv", "timestamp":"2025-09-11 00:26:46.042795981 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.043 [INFO][4692] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.043 [INFO][4692] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.043 [INFO][4692] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.050 [INFO][4692] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" host="localhost" Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.054 [INFO][4692] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.059 [INFO][4692] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.063 [INFO][4692] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.068 [INFO][4692] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.068 [INFO][4692] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" host="localhost" Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.070 [INFO][4692] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.075 [INFO][4692] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" host="localhost" Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.087 [INFO][4692] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" host="localhost" Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.087 [INFO][4692] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" host="localhost" Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.087 [INFO][4692] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:26:46.114023 containerd[1558]: 2025-09-11 00:26:46.087 [INFO][4692] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" HandleID="k8s-pod-network.0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" Workload="localhost-k8s-goldmane--7988f88666--gjlvv-eth0" Sep 11 00:26:46.114923 containerd[1558]: 2025-09-11 00:26:46.091 [INFO][4619] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" Namespace="calico-system" Pod="goldmane-7988f88666-gjlvv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gjlvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--gjlvv-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-gjlvv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali74f84d6e7c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:46.114923 containerd[1558]: 2025-09-11 00:26:46.091 [INFO][4619] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" Namespace="calico-system" Pod="goldmane-7988f88666-gjlvv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gjlvv-eth0" Sep 11 00:26:46.114923 containerd[1558]: 2025-09-11 00:26:46.091 [INFO][4619] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74f84d6e7c8 ContainerID="0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" Namespace="calico-system" Pod="goldmane-7988f88666-gjlvv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gjlvv-eth0" Sep 11 00:26:46.114923 containerd[1558]: 2025-09-11 00:26:46.098 [INFO][4619] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" Namespace="calico-system" Pod="goldmane-7988f88666-gjlvv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gjlvv-eth0" Sep 11 00:26:46.114923 containerd[1558]: 2025-09-11 00:26:46.098 [INFO][4619] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" Namespace="calico-system" Pod="goldmane-7988f88666-gjlvv" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gjlvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--gjlvv-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b", Pod:"goldmane-7988f88666-gjlvv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali74f84d6e7c8", MAC:"e6:da:b1:83:c6:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:46.114923 containerd[1558]: 2025-09-11 00:26:46.107 [INFO][4619] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" Namespace="calico-system" Pod="goldmane-7988f88666-gjlvv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--gjlvv-eth0" Sep 11 00:26:46.306895 systemd-networkd[1464]: califc6dbcc8ef8: Link UP Sep 11 00:26:46.309763 
systemd-networkd[1464]: califc6dbcc8ef8: Gained carrier Sep 11 00:26:46.330175 containerd[1558]: time="2025-09-11T00:26:46.330084394Z" level=info msg="connecting to shim 0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b" address="unix:///run/containerd/s/d290f5f5a72f62bcb3c6eb82ec1119307833409647d8cf043c31763fa7d4da37" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.016 [INFO][4632] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--7kwsx-eth0 coredns-7c65d6cfc9- kube-system 84a8df4f-888e-427e-a9f1-99af2a5ac3e2 804 0 2025-09-11 00:26:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-7kwsx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc6dbcc8ef8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7kwsx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7kwsx-" Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.018 [INFO][4632] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7kwsx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7kwsx-eth0" Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.071 [INFO][4705] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" HandleID="k8s-pod-network.f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" Workload="localhost-k8s-coredns--7c65d6cfc9--7kwsx-eth0" Sep 11 
00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.072 [INFO][4705] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" HandleID="k8s-pod-network.f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" Workload="localhost-k8s-coredns--7c65d6cfc9--7kwsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00051a7c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-7kwsx", "timestamp":"2025-09-11 00:26:46.071681007 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.072 [INFO][4705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.087 [INFO][4705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.088 [INFO][4705] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.272 [INFO][4705] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" host="localhost" Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.277 [INFO][4705] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.282 [INFO][4705] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.284 [INFO][4705] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.286 [INFO][4705] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.286 [INFO][4705] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" host="localhost" Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.288 [INFO][4705] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29 Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.292 [INFO][4705] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" host="localhost" Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.298 [INFO][4705] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" host="localhost" Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.298 [INFO][4705] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" host="localhost" Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.298 [INFO][4705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:26:46.332157 containerd[1558]: 2025-09-11 00:26:46.298 [INFO][4705] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" HandleID="k8s-pod-network.f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" Workload="localhost-k8s-coredns--7c65d6cfc9--7kwsx-eth0" Sep 11 00:26:46.333062 containerd[1558]: 2025-09-11 00:26:46.301 [INFO][4632] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7kwsx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7kwsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7kwsx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"84a8df4f-888e-427e-a9f1-99af2a5ac3e2", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-7kwsx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc6dbcc8ef8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:46.333062 containerd[1558]: 2025-09-11 00:26:46.301 [INFO][4632] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7kwsx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7kwsx-eth0" Sep 11 00:26:46.333062 containerd[1558]: 2025-09-11 00:26:46.301 [INFO][4632] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc6dbcc8ef8 ContainerID="f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7kwsx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7kwsx-eth0" Sep 11 00:26:46.333062 containerd[1558]: 2025-09-11 00:26:46.307 [INFO][4632] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7kwsx" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7kwsx-eth0" Sep 11 00:26:46.333062 containerd[1558]: 2025-09-11 00:26:46.309 [INFO][4632] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7kwsx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7kwsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7kwsx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"84a8df4f-888e-427e-a9f1-99af2a5ac3e2", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29", Pod:"coredns-7c65d6cfc9-7kwsx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc6dbcc8ef8", MAC:"76:2c:0e:09:ea:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:46.333062 containerd[1558]: 2025-09-11 00:26:46.325 [INFO][4632] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7kwsx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7kwsx-eth0" Sep 11 00:26:46.368418 containerd[1558]: time="2025-09-11T00:26:46.368352348Z" level=info msg="connecting to shim f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29" address="unix:///run/containerd/s/3e019aa09707fe7d2c1d885188ca79432e32e076e3449f2953bbcea7cc779eb6" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:26:46.380853 systemd[1]: Started cri-containerd-0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b.scope - libcontainer container 0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b. Sep 11 00:26:46.406602 systemd[1]: Started cri-containerd-f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29.scope - libcontainer container f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29. 
Sep 11 00:26:46.411941 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:26:46.425404 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:26:46.428084 systemd-networkd[1464]: cali19c9d6af40c: Link UP Sep 11 00:26:46.428412 systemd-networkd[1464]: cali19c9d6af40c: Gained carrier Sep 11 00:26:46.462755 containerd[1558]: time="2025-09-11T00:26:46.462698064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gjlvv,Uid:a3e3f494-7c5a-4ad7-9bfc-e9b4b133bf7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b\"" Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.014 [INFO][4641] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fj6sx-eth0 csi-node-driver- calico-system 65b0dc7b-51bf-4c45-8124-dc6f83a69633 700 0 2025-09-11 00:26:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fj6sx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali19c9d6af40c [] [] }} ContainerID="be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" Namespace="calico-system" Pod="csi-node-driver-fj6sx" WorkloadEndpoint="localhost-k8s-csi--node--driver--fj6sx-" Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.014 [INFO][4641] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" Namespace="calico-system" Pod="csi-node-driver-fj6sx" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--fj6sx-eth0" Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.072 [INFO][4703] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" HandleID="k8s-pod-network.be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" Workload="localhost-k8s-csi--node--driver--fj6sx-eth0" Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.073 [INFO][4703] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" HandleID="k8s-pod-network.be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" Workload="localhost-k8s-csi--node--driver--fj6sx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f540), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fj6sx", "timestamp":"2025-09-11 00:26:46.072856304 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.073 [INFO][4703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.298 [INFO][4703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.299 [INFO][4703] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.355 [INFO][4703] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" host="localhost" Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.379 [INFO][4703] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.386 [INFO][4703] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.390 [INFO][4703] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.396 [INFO][4703] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.396 [INFO][4703] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" host="localhost" Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.399 [INFO][4703] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1 Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.406 [INFO][4703] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" host="localhost" Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.415 [INFO][4703] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" host="localhost" Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.415 [INFO][4703] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" host="localhost" Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.418 [INFO][4703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:26:46.464867 containerd[1558]: 2025-09-11 00:26:46.418 [INFO][4703] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" HandleID="k8s-pod-network.be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" Workload="localhost-k8s-csi--node--driver--fj6sx-eth0" Sep 11 00:26:46.465598 containerd[1558]: 2025-09-11 00:26:46.423 [INFO][4641] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" Namespace="calico-system" Pod="csi-node-driver-fj6sx" WorkloadEndpoint="localhost-k8s-csi--node--driver--fj6sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fj6sx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"65b0dc7b-51bf-4c45-8124-dc6f83a69633", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fj6sx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali19c9d6af40c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:46.465598 containerd[1558]: 2025-09-11 00:26:46.424 [INFO][4641] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" Namespace="calico-system" Pod="csi-node-driver-fj6sx" WorkloadEndpoint="localhost-k8s-csi--node--driver--fj6sx-eth0" Sep 11 00:26:46.465598 containerd[1558]: 2025-09-11 00:26:46.424 [INFO][4641] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19c9d6af40c ContainerID="be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" Namespace="calico-system" Pod="csi-node-driver-fj6sx" WorkloadEndpoint="localhost-k8s-csi--node--driver--fj6sx-eth0" Sep 11 00:26:46.465598 containerd[1558]: 2025-09-11 00:26:46.429 [INFO][4641] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" Namespace="calico-system" Pod="csi-node-driver-fj6sx" WorkloadEndpoint="localhost-k8s-csi--node--driver--fj6sx-eth0" Sep 11 00:26:46.465598 containerd[1558]: 2025-09-11 00:26:46.429 [INFO][4641] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" 
Namespace="calico-system" Pod="csi-node-driver-fj6sx" WorkloadEndpoint="localhost-k8s-csi--node--driver--fj6sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fj6sx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"65b0dc7b-51bf-4c45-8124-dc6f83a69633", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1", Pod:"csi-node-driver-fj6sx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali19c9d6af40c", MAC:"6e:06:48:a7:3b:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:46.465598 containerd[1558]: 2025-09-11 00:26:46.451 [INFO][4641] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" Namespace="calico-system" Pod="csi-node-driver-fj6sx" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--fj6sx-eth0" Sep 11 00:26:46.472344 containerd[1558]: time="2025-09-11T00:26:46.472299663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7kwsx,Uid:84a8df4f-888e-427e-a9f1-99af2a5ac3e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29\"" Sep 11 00:26:46.473437 kubelet[2726]: E0911 00:26:46.473394 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:46.476324 containerd[1558]: time="2025-09-11T00:26:46.476026701Z" level=info msg="CreateContainer within sandbox \"f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:26:46.500705 containerd[1558]: time="2025-09-11T00:26:46.500657518Z" level=info msg="Container 3a315c552d82babeadf53b25fdb171dc7436f1a41e339cd057151d60b94b251d: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:26:46.509695 containerd[1558]: time="2025-09-11T00:26:46.509636197Z" level=info msg="connecting to shim be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1" address="unix:///run/containerd/s/641f82053878d2754b00fd6d5b5d63f0c11ad445c85ba993895ed69b65d0d081" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:26:46.527545 containerd[1558]: time="2025-09-11T00:26:46.527419738Z" level=info msg="CreateContainer within sandbox \"f5377dd3212214e6556f4e61eedb47aa0ad48d9a914ce85abe555e8d24c7ee29\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a315c552d82babeadf53b25fdb171dc7436f1a41e339cd057151d60b94b251d\"" Sep 11 00:26:46.529567 containerd[1558]: time="2025-09-11T00:26:46.529533196Z" level=info msg="StartContainer for \"3a315c552d82babeadf53b25fdb171dc7436f1a41e339cd057151d60b94b251d\"" Sep 11 00:26:46.531098 containerd[1558]: 
time="2025-09-11T00:26:46.531072607Z" level=info msg="connecting to shim 3a315c552d82babeadf53b25fdb171dc7436f1a41e339cd057151d60b94b251d" address="unix:///run/containerd/s/3e019aa09707fe7d2c1d885188ca79432e32e076e3449f2953bbcea7cc779eb6" protocol=ttrpc version=3 Sep 11 00:26:46.547236 systemd-networkd[1464]: calif8a1f380f53: Link UP Sep 11 00:26:46.550513 systemd-networkd[1464]: calif8a1f380f53: Gained carrier Sep 11 00:26:46.553701 systemd[1]: Started cri-containerd-be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1.scope - libcontainer container be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1. Sep 11 00:26:46.585773 systemd[1]: Started cri-containerd-3a315c552d82babeadf53b25fdb171dc7436f1a41e339cd057151d60b94b251d.scope - libcontainer container 3a315c552d82babeadf53b25fdb171dc7436f1a41e339cd057151d60b94b251d. Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.022 [INFO][4659] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f9847cbbd--zvw28-eth0 calico-apiserver-5f9847cbbd- calico-apiserver 5d175666-b65d-40a4-967c-8028874965d4 814 0 2025-09-11 00:26:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f9847cbbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f9847cbbd-zvw28 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif8a1f380f53 [] [] }} ContainerID="87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-zvw28" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--zvw28-" Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.023 [INFO][4659] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-zvw28" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--zvw28-eth0" Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.084 [INFO][4715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" HandleID="k8s-pod-network.87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" Workload="localhost-k8s-calico--apiserver--5f9847cbbd--zvw28-eth0" Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.084 [INFO][4715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" HandleID="k8s-pod-network.87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" Workload="localhost-k8s-calico--apiserver--5f9847cbbd--zvw28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ec70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f9847cbbd-zvw28", "timestamp":"2025-09-11 00:26:46.084540502 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.085 [INFO][4715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.415 [INFO][4715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.415 [INFO][4715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.455 [INFO][4715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" host="localhost" Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.479 [INFO][4715] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.486 [INFO][4715] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.489 [INFO][4715] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.493 [INFO][4715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.493 [INFO][4715] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" host="localhost" Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.496 [INFO][4715] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7 Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.502 [INFO][4715] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" host="localhost" Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.516 [INFO][4715] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" host="localhost" Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.516 [INFO][4715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" host="localhost" Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.516 [INFO][4715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:26:46.595562 containerd[1558]: 2025-09-11 00:26:46.518 [INFO][4715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" HandleID="k8s-pod-network.87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" Workload="localhost-k8s-calico--apiserver--5f9847cbbd--zvw28-eth0" Sep 11 00:26:46.596315 containerd[1558]: 2025-09-11 00:26:46.532 [INFO][4659] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-zvw28" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--zvw28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f9847cbbd--zvw28-eth0", GenerateName:"calico-apiserver-5f9847cbbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d175666-b65d-40a4-967c-8028874965d4", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f9847cbbd", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f9847cbbd-zvw28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8a1f380f53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:46.596315 containerd[1558]: 2025-09-11 00:26:46.533 [INFO][4659] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-zvw28" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--zvw28-eth0" Sep 11 00:26:46.596315 containerd[1558]: 2025-09-11 00:26:46.533 [INFO][4659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8a1f380f53 ContainerID="87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-zvw28" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--zvw28-eth0" Sep 11 00:26:46.596315 containerd[1558]: 2025-09-11 00:26:46.552 [INFO][4659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-zvw28" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--zvw28-eth0" Sep 11 00:26:46.596315 containerd[1558]: 2025-09-11 00:26:46.553 [INFO][4659] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-zvw28" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--zvw28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f9847cbbd--zvw28-eth0", GenerateName:"calico-apiserver-5f9847cbbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d175666-b65d-40a4-967c-8028874965d4", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f9847cbbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7", Pod:"calico-apiserver-5f9847cbbd-zvw28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8a1f380f53", MAC:"fe:0d:6c:ff:6e:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:26:46.596315 containerd[1558]: 2025-09-11 00:26:46.580 [INFO][4659] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" Namespace="calico-apiserver" Pod="calico-apiserver-5f9847cbbd-zvw28" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f9847cbbd--zvw28-eth0" Sep 11 00:26:46.604010 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:26:46.655943 systemd-networkd[1464]: vxlan.calico: Gained IPv6LL Sep 11 00:26:46.927895 containerd[1558]: time="2025-09-11T00:26:46.927755683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fj6sx,Uid:65b0dc7b-51bf-4c45-8124-dc6f83a69633,Namespace:calico-system,Attempt:0,} returns sandbox id \"be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1\"" Sep 11 00:26:46.929460 containerd[1558]: time="2025-09-11T00:26:46.929383670Z" level=info msg="StartContainer for \"3a315c552d82babeadf53b25fdb171dc7436f1a41e339cd057151d60b94b251d\" returns successfully" Sep 11 00:26:46.942495 containerd[1558]: time="2025-09-11T00:26:46.942440245Z" level=info msg="connecting to shim 87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7" address="unix:///run/containerd/s/2cb5107e738ec2fc9533f3b2c9ebfaf21b0ba1fbe1fde5f99575bc0355571caa" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:26:46.974810 systemd[1]: Started cri-containerd-87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7.scope - libcontainer container 87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7. 
Sep 11 00:26:46.998468 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:26:47.088448 kubelet[2726]: E0911 00:26:47.088370 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:47.175456 kubelet[2726]: I0911 00:26:47.175380 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7kwsx" podStartSLOduration=41.175245638 podStartE2EDuration="41.175245638s" podCreationTimestamp="2025-09-11 00:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:26:47.173538623 +0000 UTC m=+45.362585274" watchObservedRunningTime="2025-09-11 00:26:47.175245638 +0000 UTC m=+45.364292289" Sep 11 00:26:47.176252 containerd[1558]: time="2025-09-11T00:26:47.175759543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f9847cbbd-zvw28,Uid:5d175666-b65d-40a4-967c-8028874965d4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7\"" Sep 11 00:26:47.504273 containerd[1558]: time="2025-09-11T00:26:47.504226154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:47.505237 containerd[1558]: time="2025-09-11T00:26:47.505199292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 11 00:26:47.506420 containerd[1558]: time="2025-09-11T00:26:47.506382252Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:47.508867 
containerd[1558]: time="2025-09-11T00:26:47.508824578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:47.509391 containerd[1558]: time="2025-09-11T00:26:47.509336589Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.509596125s" Sep 11 00:26:47.509391 containerd[1558]: time="2025-09-11T00:26:47.509380842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 11 00:26:47.510390 containerd[1558]: time="2025-09-11T00:26:47.510359039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:26:47.522588 containerd[1558]: time="2025-09-11T00:26:47.522550609Z" level=info msg="CreateContainer within sandbox \"ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 11 00:26:47.531078 containerd[1558]: time="2025-09-11T00:26:47.531028115Z" level=info msg="Container ad2a3eb3f1344e6b3e61786a8257fa28709cc7ce57bd5bc2207dcf0f5d41d3bd: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:26:47.539918 containerd[1558]: time="2025-09-11T00:26:47.539868894Z" level=info msg="CreateContainer within sandbox \"ea916502ef0f1e53885c6ddcd712d07a5f73b5e698a7e56c256649ef09f705ec\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ad2a3eb3f1344e6b3e61786a8257fa28709cc7ce57bd5bc2207dcf0f5d41d3bd\"" Sep 11 
00:26:47.540491 containerd[1558]: time="2025-09-11T00:26:47.540460655Z" level=info msg="StartContainer for \"ad2a3eb3f1344e6b3e61786a8257fa28709cc7ce57bd5bc2207dcf0f5d41d3bd\"" Sep 11 00:26:47.541613 containerd[1558]: time="2025-09-11T00:26:47.541571150Z" level=info msg="connecting to shim ad2a3eb3f1344e6b3e61786a8257fa28709cc7ce57bd5bc2207dcf0f5d41d3bd" address="unix:///run/containerd/s/e75b5e0eb0e8b278ee538611080c860e0885c7b207ee080cce3e752fcaf5cc22" protocol=ttrpc version=3 Sep 11 00:26:47.561726 systemd[1]: Started cri-containerd-ad2a3eb3f1344e6b3e61786a8257fa28709cc7ce57bd5bc2207dcf0f5d41d3bd.scope - libcontainer container ad2a3eb3f1344e6b3e61786a8257fa28709cc7ce57bd5bc2207dcf0f5d41d3bd. Sep 11 00:26:47.616853 containerd[1558]: time="2025-09-11T00:26:47.616809381Z" level=info msg="StartContainer for \"ad2a3eb3f1344e6b3e61786a8257fa28709cc7ce57bd5bc2207dcf0f5d41d3bd\" returns successfully" Sep 11 00:26:47.680573 systemd-networkd[1464]: cali74f84d6e7c8: Gained IPv6LL Sep 11 00:26:47.743624 systemd-networkd[1464]: califc6dbcc8ef8: Gained IPv6LL Sep 11 00:26:47.744617 systemd-networkd[1464]: calif8a1f380f53: Gained IPv6LL Sep 11 00:26:48.095585 kubelet[2726]: E0911 00:26:48.095541 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:26:48.255729 systemd-networkd[1464]: cali19c9d6af40c: Gained IPv6LL Sep 11 00:26:48.419395 systemd[1]: Started sshd@9-10.0.0.132:22-10.0.0.1:44890.service - OpenSSH per-connection server daemon (10.0.0.1:44890). Sep 11 00:26:48.494931 sshd[5042]: Accepted publickey for core from 10.0.0.1 port 44890 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:26:48.497104 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:26:48.502657 systemd-logind[1535]: New session 9 of user core. 
Sep 11 00:26:48.507626 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 11 00:26:48.658903 sshd[5045]: Connection closed by 10.0.0.1 port 44890 Sep 11 00:26:48.659337 sshd-session[5042]: pam_unix(sshd:session): session closed for user core Sep 11 00:26:48.665608 systemd[1]: sshd@9-10.0.0.132:22-10.0.0.1:44890.service: Deactivated successfully. Sep 11 00:26:48.668025 systemd[1]: session-9.scope: Deactivated successfully. Sep 11 00:26:48.669703 systemd-logind[1535]: Session 9 logged out. Waiting for processes to exit. Sep 11 00:26:48.671267 systemd-logind[1535]: Removed session 9. Sep 11 00:26:49.244783 containerd[1558]: time="2025-09-11T00:26:49.244724421Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad2a3eb3f1344e6b3e61786a8257fa28709cc7ce57bd5bc2207dcf0f5d41d3bd\" id:\"1073794ceb148a525032323ff8fb178cd9e4a220b85e85aaa50e7ca4cb01a7ce\" pid:5075 exited_at:{seconds:1757550409 nanos:244386808}" Sep 11 00:26:49.264393 kubelet[2726]: I0911 00:26:49.264314 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-795668b449-pdwm4" podStartSLOduration=25.924464538 podStartE2EDuration="30.264288218s" podCreationTimestamp="2025-09-11 00:26:19 +0000 UTC" firstStartedPulling="2025-09-11 00:26:43.17030161 +0000 UTC m=+41.359348271" lastFinishedPulling="2025-09-11 00:26:47.5101253 +0000 UTC m=+45.699171951" observedRunningTime="2025-09-11 00:26:48.108687057 +0000 UTC m=+46.297733718" watchObservedRunningTime="2025-09-11 00:26:49.264288218 +0000 UTC m=+47.453334879" Sep 11 00:26:51.028406 containerd[1558]: time="2025-09-11T00:26:51.028231801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f09815846babb4f355890889ed74a54fdba5f8c588e26f5244c2617aaef7ef4\" id:\"f2ea8f80859f8b57922219e6bda9ff93703977a19499d4d506adc1e367089f92\" pid:5101 exited_at:{seconds:1757550411 nanos:26613623}" Sep 11 00:26:51.172119 containerd[1558]: time="2025-09-11T00:26:51.172055613Z" 
level=info msg="TaskExit event in podsandbox handler container_id:\"4f09815846babb4f355890889ed74a54fdba5f8c588e26f5244c2617aaef7ef4\" id:\"2d45d6f8e7fd93fcf3a2d88f9f57e303953206e876f9ed357e6cdb5166d4d661\" pid:5132 exited_at:{seconds:1757550411 nanos:171627720}" Sep 11 00:26:51.617572 containerd[1558]: time="2025-09-11T00:26:51.617495080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:51.620282 containerd[1558]: time="2025-09-11T00:26:51.620171664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 11 00:26:51.641461 containerd[1558]: time="2025-09-11T00:26:51.641374194Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:52.127140 containerd[1558]: time="2025-09-11T00:26:52.127063142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:52.127840 containerd[1558]: time="2025-09-11T00:26:52.127685861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.617297507s" Sep 11 00:26:52.127840 containerd[1558]: time="2025-09-11T00:26:52.127761202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 11 00:26:52.136900 containerd[1558]: 
time="2025-09-11T00:26:52.136738673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 11 00:26:52.144463 containerd[1558]: time="2025-09-11T00:26:52.144322117Z" level=info msg="CreateContainer within sandbox \"76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:26:52.817120 containerd[1558]: time="2025-09-11T00:26:52.817043668Z" level=info msg="Container 956d3de45115ab8216cc025f509865f3169eda2537db9fac4d33e9e869bb976e: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:26:52.827737 containerd[1558]: time="2025-09-11T00:26:52.827674152Z" level=info msg="CreateContainer within sandbox \"76b83c281605f9ea6a94686b3daf998d6e799c352143c81bae2e29bf9acd24da\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"956d3de45115ab8216cc025f509865f3169eda2537db9fac4d33e9e869bb976e\"" Sep 11 00:26:52.834751 containerd[1558]: time="2025-09-11T00:26:52.828344039Z" level=info msg="StartContainer for \"956d3de45115ab8216cc025f509865f3169eda2537db9fac4d33e9e869bb976e\"" Sep 11 00:26:52.836323 containerd[1558]: time="2025-09-11T00:26:52.836278953Z" level=info msg="connecting to shim 956d3de45115ab8216cc025f509865f3169eda2537db9fac4d33e9e869bb976e" address="unix:///run/containerd/s/01ed98dd0bc1174c42a2ff020b76e6481378d03c42004058703eac2a28c462cc" protocol=ttrpc version=3 Sep 11 00:26:52.872592 systemd[1]: Started cri-containerd-956d3de45115ab8216cc025f509865f3169eda2537db9fac4d33e9e869bb976e.scope - libcontainer container 956d3de45115ab8216cc025f509865f3169eda2537db9fac4d33e9e869bb976e. 
Sep 11 00:26:52.997713 containerd[1558]: time="2025-09-11T00:26:52.997660449Z" level=info msg="StartContainer for \"956d3de45115ab8216cc025f509865f3169eda2537db9fac4d33e9e869bb976e\" returns successfully" Sep 11 00:26:53.368791 kubelet[2726]: I0911 00:26:53.368619 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f9847cbbd-g2bf9" podStartSLOduration=29.385252833 podStartE2EDuration="37.368580644s" podCreationTimestamp="2025-09-11 00:26:16 +0000 UTC" firstStartedPulling="2025-09-11 00:26:44.152113906 +0000 UTC m=+42.341160558" lastFinishedPulling="2025-09-11 00:26:52.135441708 +0000 UTC m=+50.324488369" observedRunningTime="2025-09-11 00:26:53.3683973 +0000 UTC m=+51.557443951" watchObservedRunningTime="2025-09-11 00:26:53.368580644 +0000 UTC m=+51.557627295" Sep 11 00:26:53.676039 systemd[1]: Started sshd@10-10.0.0.132:22-10.0.0.1:50020.service - OpenSSH per-connection server daemon (10.0.0.1:50020). Sep 11 00:26:53.741189 sshd[5190]: Accepted publickey for core from 10.0.0.1 port 50020 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:26:53.743106 sshd-session[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:26:53.747924 systemd-logind[1535]: New session 10 of user core. Sep 11 00:26:53.758585 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 11 00:26:54.020828 sshd[5194]: Connection closed by 10.0.0.1 port 50020 Sep 11 00:26:54.021179 sshd-session[5190]: pam_unix(sshd:session): session closed for user core Sep 11 00:26:54.032613 systemd[1]: sshd@10-10.0.0.132:22-10.0.0.1:50020.service: Deactivated successfully. Sep 11 00:26:54.034818 systemd[1]: session-10.scope: Deactivated successfully. Sep 11 00:26:54.035643 systemd-logind[1535]: Session 10 logged out. Waiting for processes to exit. 
Sep 11 00:26:54.040490 systemd[1]: Started sshd@11-10.0.0.132:22-10.0.0.1:50028.service - OpenSSH per-connection server daemon (10.0.0.1:50028). Sep 11 00:26:54.042546 systemd-logind[1535]: Removed session 10. Sep 11 00:26:54.090975 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 50028 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:26:54.093034 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:26:54.100768 systemd-logind[1535]: New session 11 of user core. Sep 11 00:26:54.103767 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 11 00:26:54.118199 kubelet[2726]: I0911 00:26:54.118160 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:26:54.328562 sshd[5212]: Connection closed by 10.0.0.1 port 50028 Sep 11 00:26:54.329626 sshd-session[5209]: pam_unix(sshd:session): session closed for user core Sep 11 00:26:54.340166 systemd[1]: sshd@11-10.0.0.132:22-10.0.0.1:50028.service: Deactivated successfully. Sep 11 00:26:54.344492 systemd[1]: session-11.scope: Deactivated successfully. Sep 11 00:26:54.347572 systemd-logind[1535]: Session 11 logged out. Waiting for processes to exit. Sep 11 00:26:54.354754 systemd[1]: Started sshd@12-10.0.0.132:22-10.0.0.1:50030.service - OpenSSH per-connection server daemon (10.0.0.1:50030). Sep 11 00:26:54.355542 systemd-logind[1535]: Removed session 11. Sep 11 00:26:54.405173 sshd[5224]: Accepted publickey for core from 10.0.0.1 port 50030 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:26:54.406498 sshd-session[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:26:54.411015 systemd-logind[1535]: New session 12 of user core. Sep 11 00:26:54.417558 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 11 00:26:54.537604 sshd[5227]: Connection closed by 10.0.0.1 port 50030 Sep 11 00:26:54.537966 sshd-session[5224]: pam_unix(sshd:session): session closed for user core Sep 11 00:26:54.542952 systemd[1]: sshd@12-10.0.0.132:22-10.0.0.1:50030.service: Deactivated successfully. Sep 11 00:26:54.544971 systemd[1]: session-12.scope: Deactivated successfully. Sep 11 00:26:54.545808 systemd-logind[1535]: Session 12 logged out. Waiting for processes to exit. Sep 11 00:26:54.546940 systemd-logind[1535]: Removed session 12. Sep 11 00:26:55.393157 kubelet[2726]: I0911 00:26:55.393096 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:26:55.851978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76715430.mount: Deactivated successfully. Sep 11 00:26:56.866886 containerd[1558]: time="2025-09-11T00:26:56.866815813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:56.919330 containerd[1558]: time="2025-09-11T00:26:56.919247291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 11 00:26:56.973021 containerd[1558]: time="2025-09-11T00:26:56.972961347Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:57.045554 containerd[1558]: time="2025-09-11T00:26:57.045506832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:57.046310 containerd[1558]: time="2025-09-11T00:26:57.046266347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag 
\"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.909468894s" Sep 11 00:26:57.046310 containerd[1558]: time="2025-09-11T00:26:57.046303807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 11 00:26:57.058264 containerd[1558]: time="2025-09-11T00:26:57.058146823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 11 00:26:57.090186 containerd[1558]: time="2025-09-11T00:26:57.090129754Z" level=info msg="CreateContainer within sandbox \"0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 11 00:26:57.342478 containerd[1558]: time="2025-09-11T00:26:57.342414918Z" level=info msg="Container 14cb4f849b465183c01af57d9db85b9a502a39b52c5bdb37362118dc808c801e: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:26:57.351599 containerd[1558]: time="2025-09-11T00:26:57.351554800Z" level=info msg="CreateContainer within sandbox \"0cc3be49e000bc1b904f22b7accac2700f25c1350678ba55748d8fcb1853f60b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"14cb4f849b465183c01af57d9db85b9a502a39b52c5bdb37362118dc808c801e\"" Sep 11 00:26:57.353027 containerd[1558]: time="2025-09-11T00:26:57.353001064Z" level=info msg="StartContainer for \"14cb4f849b465183c01af57d9db85b9a502a39b52c5bdb37362118dc808c801e\"" Sep 11 00:26:57.354130 containerd[1558]: time="2025-09-11T00:26:57.354075390Z" level=info msg="connecting to shim 14cb4f849b465183c01af57d9db85b9a502a39b52c5bdb37362118dc808c801e" address="unix:///run/containerd/s/d290f5f5a72f62bcb3c6eb82ec1119307833409647d8cf043c31763fa7d4da37" protocol=ttrpc version=3 Sep 11 00:26:57.389566 systemd[1]: Started 
cri-containerd-14cb4f849b465183c01af57d9db85b9a502a39b52c5bdb37362118dc808c801e.scope - libcontainer container 14cb4f849b465183c01af57d9db85b9a502a39b52c5bdb37362118dc808c801e. Sep 11 00:26:57.439294 containerd[1558]: time="2025-09-11T00:26:57.439250505Z" level=info msg="StartContainer for \"14cb4f849b465183c01af57d9db85b9a502a39b52c5bdb37362118dc808c801e\" returns successfully" Sep 11 00:26:58.145130 kubelet[2726]: I0911 00:26:58.144860 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-gjlvv" podStartSLOduration=29.552590525 podStartE2EDuration="40.144839446s" podCreationTimestamp="2025-09-11 00:26:18 +0000 UTC" firstStartedPulling="2025-09-11 00:26:46.465636902 +0000 UTC m=+44.654683543" lastFinishedPulling="2025-09-11 00:26:57.057885813 +0000 UTC m=+55.246932464" observedRunningTime="2025-09-11 00:26:58.144269597 +0000 UTC m=+56.333316258" watchObservedRunningTime="2025-09-11 00:26:58.144839446 +0000 UTC m=+56.333886097" Sep 11 00:26:58.212404 containerd[1558]: time="2025-09-11T00:26:58.212358122Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14cb4f849b465183c01af57d9db85b9a502a39b52c5bdb37362118dc808c801e\" id:\"e49be1d73730543fb237103d97fa04baf8aa8d6dd9d1588743d5af1f7afc433b\" pid:5309 exit_status:1 exited_at:{seconds:1757550418 nanos:211923707}" Sep 11 00:26:58.674820 containerd[1558]: time="2025-09-11T00:26:58.674754566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:58.675512 containerd[1558]: time="2025-09-11T00:26:58.675484825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 11 00:26:58.676749 containerd[1558]: time="2025-09-11T00:26:58.676719283Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 11 00:26:58.678951 containerd[1558]: time="2025-09-11T00:26:58.678920995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:58.679677 containerd[1558]: time="2025-09-11T00:26:58.679622791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.621436424s" Sep 11 00:26:58.679725 containerd[1558]: time="2025-09-11T00:26:58.679678866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 11 00:26:58.680955 containerd[1558]: time="2025-09-11T00:26:58.680653225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:26:58.681960 containerd[1558]: time="2025-09-11T00:26:58.681926013Z" level=info msg="CreateContainer within sandbox \"be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 11 00:26:58.694039 containerd[1558]: time="2025-09-11T00:26:58.693993640Z" level=info msg="Container a330c8cc757ca3d614d37ab3ed147ffbf2704f515fc00655f92db895cf2d62da: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:26:58.704754 containerd[1558]: time="2025-09-11T00:26:58.704655577Z" level=info msg="CreateContainer within sandbox \"be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a330c8cc757ca3d614d37ab3ed147ffbf2704f515fc00655f92db895cf2d62da\"" Sep 11 00:26:58.705373 containerd[1558]: 
time="2025-09-11T00:26:58.705332318Z" level=info msg="StartContainer for \"a330c8cc757ca3d614d37ab3ed147ffbf2704f515fc00655f92db895cf2d62da\"" Sep 11 00:26:58.707255 containerd[1558]: time="2025-09-11T00:26:58.707216173Z" level=info msg="connecting to shim a330c8cc757ca3d614d37ab3ed147ffbf2704f515fc00655f92db895cf2d62da" address="unix:///run/containerd/s/641f82053878d2754b00fd6d5b5d63f0c11ad445c85ba993895ed69b65d0d081" protocol=ttrpc version=3 Sep 11 00:26:58.748666 systemd[1]: Started cri-containerd-a330c8cc757ca3d614d37ab3ed147ffbf2704f515fc00655f92db895cf2d62da.scope - libcontainer container a330c8cc757ca3d614d37ab3ed147ffbf2704f515fc00655f92db895cf2d62da. Sep 11 00:26:58.800033 containerd[1558]: time="2025-09-11T00:26:58.799987918Z" level=info msg="StartContainer for \"a330c8cc757ca3d614d37ab3ed147ffbf2704f515fc00655f92db895cf2d62da\" returns successfully" Sep 11 00:26:59.162590 containerd[1558]: time="2025-09-11T00:26:59.162519887Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:26:59.166206 containerd[1558]: time="2025-09-11T00:26:59.166165989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 11 00:26:59.168215 containerd[1558]: time="2025-09-11T00:26:59.168173948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 487.489104ms" Sep 11 00:26:59.168263 containerd[1558]: time="2025-09-11T00:26:59.168227598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 11 
00:26:59.170722 containerd[1558]: time="2025-09-11T00:26:59.170676975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 11 00:26:59.172304 containerd[1558]: time="2025-09-11T00:26:59.172275875Z" level=info msg="CreateContainer within sandbox \"87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:26:59.188982 containerd[1558]: time="2025-09-11T00:26:59.188913256Z" level=info msg="Container 24b836cdfae21b813fc1fc481c3a6b325cfd8613b0d526c487aad95a97ff706b: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:26:59.195071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618609839.mount: Deactivated successfully. Sep 11 00:26:59.199851 containerd[1558]: time="2025-09-11T00:26:59.199810805Z" level=info msg="CreateContainer within sandbox \"87d22b7b3166003a85c3b43cc07ed768087c1eeb7ed6f6f1d48164d7e006c7b7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"24b836cdfae21b813fc1fc481c3a6b325cfd8613b0d526c487aad95a97ff706b\"" Sep 11 00:26:59.201805 containerd[1558]: time="2025-09-11T00:26:59.201733574Z" level=info msg="StartContainer for \"24b836cdfae21b813fc1fc481c3a6b325cfd8613b0d526c487aad95a97ff706b\"" Sep 11 00:26:59.203216 containerd[1558]: time="2025-09-11T00:26:59.203167935Z" level=info msg="connecting to shim 24b836cdfae21b813fc1fc481c3a6b325cfd8613b0d526c487aad95a97ff706b" address="unix:///run/containerd/s/2cb5107e738ec2fc9533f3b2c9ebfaf21b0ba1fbe1fde5f99575bc0355571caa" protocol=ttrpc version=3 Sep 11 00:26:59.230614 systemd[1]: Started cri-containerd-24b836cdfae21b813fc1fc481c3a6b325cfd8613b0d526c487aad95a97ff706b.scope - libcontainer container 24b836cdfae21b813fc1fc481c3a6b325cfd8613b0d526c487aad95a97ff706b. 
Sep 11 00:26:59.234622 containerd[1558]: time="2025-09-11T00:26:59.234564071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14cb4f849b465183c01af57d9db85b9a502a39b52c5bdb37362118dc808c801e\" id:\"5cdf9ad0b6546c7508361b0f7002402a255b3b394bb58a0893e15207e4fb53ef\" pid:5368 exit_status:1 exited_at:{seconds:1757550419 nanos:234238651}" Sep 11 00:26:59.286923 containerd[1558]: time="2025-09-11T00:26:59.286882760Z" level=info msg="StartContainer for \"24b836cdfae21b813fc1fc481c3a6b325cfd8613b0d526c487aad95a97ff706b\" returns successfully" Sep 11 00:26:59.550465 systemd[1]: Started sshd@13-10.0.0.132:22-10.0.0.1:50046.service - OpenSSH per-connection server daemon (10.0.0.1:50046). Sep 11 00:26:59.712031 sshd[5418]: Accepted publickey for core from 10.0.0.1 port 50046 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:26:59.714126 sshd-session[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:26:59.719097 systemd-logind[1535]: New session 13 of user core. Sep 11 00:26:59.725625 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 11 00:26:59.941124 sshd[5421]: Connection closed by 10.0.0.1 port 50046 Sep 11 00:26:59.942679 sshd-session[5418]: pam_unix(sshd:session): session closed for user core Sep 11 00:26:59.947872 systemd[1]: sshd@13-10.0.0.132:22-10.0.0.1:50046.service: Deactivated successfully. Sep 11 00:26:59.950548 systemd[1]: session-13.scope: Deactivated successfully. Sep 11 00:26:59.952106 systemd-logind[1535]: Session 13 logged out. Waiting for processes to exit. Sep 11 00:26:59.953362 systemd-logind[1535]: Removed session 13. 
Sep 11 00:27:01.141226 kubelet[2726]: I0911 00:27:01.141171 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:27:01.754396 containerd[1558]: time="2025-09-11T00:27:01.754333260Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad2a3eb3f1344e6b3e61786a8257fa28709cc7ce57bd5bc2207dcf0f5d41d3bd\" id:\"f102f85ea95c62352ea09bf1df4fb79b4d6333a940c4febf73e7a390d02b95ea\" pid:5451 exited_at:{seconds:1757550421 nanos:754107844}" Sep 11 00:27:02.084821 containerd[1558]: time="2025-09-11T00:27:02.084686714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:02.085929 containerd[1558]: time="2025-09-11T00:27:02.085875974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 11 00:27:02.087019 containerd[1558]: time="2025-09-11T00:27:02.086981644Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:02.089129 containerd[1558]: time="2025-09-11T00:27:02.089083540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:27:02.089698 containerd[1558]: time="2025-09-11T00:27:02.089646299Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.918915241s" Sep 11 00:27:02.089769 
containerd[1558]: time="2025-09-11T00:27:02.089701976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 11 00:27:02.098506 containerd[1558]: time="2025-09-11T00:27:02.098415524Z" level=info msg="CreateContainer within sandbox \"be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 11 00:27:02.107458 containerd[1558]: time="2025-09-11T00:27:02.106505555Z" level=info msg="Container 52868c9f554f973dc9ed7616318e446e64d6dd7a76d4892b8109836ede6e936c: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:27:02.116044 containerd[1558]: time="2025-09-11T00:27:02.115996486Z" level=info msg="CreateContainer within sandbox \"be6bd28b872623f3cdc8c6595ad2f1652f45d58ed6af7f57b110a66333a3e9b1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"52868c9f554f973dc9ed7616318e446e64d6dd7a76d4892b8109836ede6e936c\"" Sep 11 00:27:02.116455 containerd[1558]: time="2025-09-11T00:27:02.116401230Z" level=info msg="StartContainer for \"52868c9f554f973dc9ed7616318e446e64d6dd7a76d4892b8109836ede6e936c\"" Sep 11 00:27:02.117703 containerd[1558]: time="2025-09-11T00:27:02.117676036Z" level=info msg="connecting to shim 52868c9f554f973dc9ed7616318e446e64d6dd7a76d4892b8109836ede6e936c" address="unix:///run/containerd/s/641f82053878d2754b00fd6d5b5d63f0c11ad445c85ba993895ed69b65d0d081" protocol=ttrpc version=3 Sep 11 00:27:02.138740 systemd[1]: Started cri-containerd-52868c9f554f973dc9ed7616318e446e64d6dd7a76d4892b8109836ede6e936c.scope - libcontainer container 52868c9f554f973dc9ed7616318e446e64d6dd7a76d4892b8109836ede6e936c. 
Sep 11 00:27:02.291574 containerd[1558]: time="2025-09-11T00:27:02.291509228Z" level=info msg="StartContainer for \"52868c9f554f973dc9ed7616318e446e64d6dd7a76d4892b8109836ede6e936c\" returns successfully"
Sep 11 00:27:02.981513 kubelet[2726]: I0911 00:27:02.981459 2726 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 11 00:27:02.981513 kubelet[2726]: I0911 00:27:02.981512 2726 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 11 00:27:03.171552 kubelet[2726]: I0911 00:27:03.171464 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fj6sx" podStartSLOduration=29.010321049 podStartE2EDuration="44.17144274s" podCreationTimestamp="2025-09-11 00:26:19 +0000 UTC" firstStartedPulling="2025-09-11 00:26:46.929387026 +0000 UTC m=+45.118433677" lastFinishedPulling="2025-09-11 00:27:02.090508717 +0000 UTC m=+60.279555368" observedRunningTime="2025-09-11 00:27:03.169721743 +0000 UTC m=+61.358768404" watchObservedRunningTime="2025-09-11 00:27:03.17144274 +0000 UTC m=+61.360489391"
Sep 11 00:27:03.171891 kubelet[2726]: I0911 00:27:03.171732 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f9847cbbd-zvw28" podStartSLOduration=35.179582462 podStartE2EDuration="47.171726698s" podCreationTimestamp="2025-09-11 00:26:16 +0000 UTC" firstStartedPulling="2025-09-11 00:26:47.177991043 +0000 UTC m=+45.367037694" lastFinishedPulling="2025-09-11 00:26:59.170135279 +0000 UTC m=+57.359181930" observedRunningTime="2025-09-11 00:27:00.405109017 +0000 UTC m=+58.594155678" watchObservedRunningTime="2025-09-11 00:27:03.171726698 +0000 UTC m=+61.360773359"
Sep 11 00:27:04.958052 systemd[1]: Started sshd@14-10.0.0.132:22-10.0.0.1:60304.service - OpenSSH per-connection server daemon (10.0.0.1:60304).
Sep 11 00:27:05.020312 sshd[5501]: Accepted publickey for core from 10.0.0.1 port 60304 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:27:05.022200 sshd-session[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:27:05.026355 systemd-logind[1535]: New session 14 of user core.
Sep 11 00:27:05.034546 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 11 00:27:05.260551 sshd[5504]: Connection closed by 10.0.0.1 port 60304
Sep 11 00:27:05.261292 sshd-session[5501]: pam_unix(sshd:session): session closed for user core
Sep 11 00:27:05.269941 systemd[1]: sshd@14-10.0.0.132:22-10.0.0.1:60304.service: Deactivated successfully.
Sep 11 00:27:05.272482 systemd[1]: session-14.scope: Deactivated successfully.
Sep 11 00:27:05.273627 systemd-logind[1535]: Session 14 logged out. Waiting for processes to exit.
Sep 11 00:27:05.275533 systemd-logind[1535]: Removed session 14.
Sep 11 00:27:08.822829 kubelet[2726]: I0911 00:27:08.822742 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:27:10.273524 systemd[1]: Started sshd@15-10.0.0.132:22-10.0.0.1:33112.service - OpenSSH per-connection server daemon (10.0.0.1:33112).
Sep 11 00:27:10.345289 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 33112 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:27:10.347094 sshd-session[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:27:10.352316 systemd-logind[1535]: New session 15 of user core.
Sep 11 00:27:10.360587 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 11 00:27:10.528388 sshd[5534]: Connection closed by 10.0.0.1 port 33112
Sep 11 00:27:10.528714 sshd-session[5531]: pam_unix(sshd:session): session closed for user core
Sep 11 00:27:10.534667 systemd[1]: sshd@15-10.0.0.132:22-10.0.0.1:33112.service: Deactivated successfully.
Sep 11 00:27:10.536902 systemd[1]: session-15.scope: Deactivated successfully.
Sep 11 00:27:10.537847 systemd-logind[1535]: Session 15 logged out. Waiting for processes to exit.
Sep 11 00:27:10.539334 systemd-logind[1535]: Removed session 15.
Sep 11 00:27:11.378814 containerd[1558]: time="2025-09-11T00:27:11.378636596Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14cb4f849b465183c01af57d9db85b9a502a39b52c5bdb37362118dc808c801e\" id:\"f033d756a8f324f1126279513e1e709731c0f4888403a90f0db607b950b32b3e\" pid:5558 exited_at:{seconds:1757550431 nanos:378118431}"
Sep 11 00:27:13.673622 containerd[1558]: time="2025-09-11T00:27:13.673546522Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad2a3eb3f1344e6b3e61786a8257fa28709cc7ce57bd5bc2207dcf0f5d41d3bd\" id:\"e08241319fc7055596d3bd25fb26dee569b4b877374d81b0070cb2772fb71133\" pid:5583 exited_at:{seconds:1757550433 nanos:673352960}"
Sep 11 00:27:15.551639 systemd[1]: Started sshd@16-10.0.0.132:22-10.0.0.1:33128.service - OpenSSH per-connection server daemon (10.0.0.1:33128).
Sep 11 00:27:15.629996 sshd[5595]: Accepted publickey for core from 10.0.0.1 port 33128 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:27:15.632053 sshd-session[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:27:15.637246 systemd-logind[1535]: New session 16 of user core.
Sep 11 00:27:15.647661 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 11 00:27:15.778161 sshd[5598]: Connection closed by 10.0.0.1 port 33128
Sep 11 00:27:15.793912 sshd-session[5595]: pam_unix(sshd:session): session closed for user core
Sep 11 00:27:15.798125 systemd[1]: sshd@16-10.0.0.132:22-10.0.0.1:33128.service: Deactivated successfully.
Sep 11 00:27:15.800318 systemd[1]: session-16.scope: Deactivated successfully.
Sep 11 00:27:15.801834 systemd-logind[1535]: Session 16 logged out. Waiting for processes to exit.
Sep 11 00:27:15.803325 systemd-logind[1535]: Removed session 16.
Sep 11 00:27:18.916416 kubelet[2726]: E0911 00:27:18.916361 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:27:19.916470 kubelet[2726]: E0911 00:27:19.916098 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:27:20.795900 systemd[1]: Started sshd@17-10.0.0.132:22-10.0.0.1:44484.service - OpenSSH per-connection server daemon (10.0.0.1:44484).
Sep 11 00:27:20.816659 containerd[1558]: time="2025-09-11T00:27:20.816216414Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f09815846babb4f355890889ed74a54fdba5f8c588e26f5244c2617aaef7ef4\" id:\"207e4631c6794d6e1c2e2431e60d0efc1856c5b4a2e9388806067cc9320dec79\" pid:5623 exited_at:{seconds:1757550440 nanos:815533248}"
Sep 11 00:27:20.887904 sshd[5636]: Accepted publickey for core from 10.0.0.1 port 44484 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:27:20.890186 sshd-session[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:27:20.898813 systemd-logind[1535]: New session 17 of user core.
Sep 11 00:27:20.905651 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 11 00:27:21.097977 sshd[5639]: Connection closed by 10.0.0.1 port 44484
Sep 11 00:27:21.098823 sshd-session[5636]: pam_unix(sshd:session): session closed for user core
Sep 11 00:27:21.108401 systemd[1]: sshd@17-10.0.0.132:22-10.0.0.1:44484.service: Deactivated successfully.
Sep 11 00:27:21.110526 systemd[1]: session-17.scope: Deactivated successfully.
Sep 11 00:27:21.111350 systemd-logind[1535]: Session 17 logged out. Waiting for processes to exit.
Sep 11 00:27:21.114246 systemd[1]: Started sshd@18-10.0.0.132:22-10.0.0.1:44492.service - OpenSSH per-connection server daemon (10.0.0.1:44492).
Sep 11 00:27:21.114932 systemd-logind[1535]: Removed session 17.
Sep 11 00:27:21.174448 sshd[5652]: Accepted publickey for core from 10.0.0.1 port 44492 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:27:21.176310 sshd-session[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:27:21.181129 systemd-logind[1535]: New session 18 of user core.
Sep 11 00:27:21.190614 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 11 00:27:21.430112 sshd[5655]: Connection closed by 10.0.0.1 port 44492
Sep 11 00:27:21.430615 sshd-session[5652]: pam_unix(sshd:session): session closed for user core
Sep 11 00:27:21.443969 systemd[1]: sshd@18-10.0.0.132:22-10.0.0.1:44492.service: Deactivated successfully.
Sep 11 00:27:21.446497 systemd[1]: session-18.scope: Deactivated successfully.
Sep 11 00:27:21.447295 systemd-logind[1535]: Session 18 logged out. Waiting for processes to exit.
Sep 11 00:27:21.451489 systemd[1]: Started sshd@19-10.0.0.132:22-10.0.0.1:44504.service - OpenSSH per-connection server daemon (10.0.0.1:44504).
Sep 11 00:27:21.453180 systemd-logind[1535]: Removed session 18.
Sep 11 00:27:21.514328 sshd[5666]: Accepted publickey for core from 10.0.0.1 port 44504 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:27:21.516253 sshd-session[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:27:21.522014 systemd-logind[1535]: New session 19 of user core.
Sep 11 00:27:21.536743 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 11 00:27:23.117090 containerd[1558]: time="2025-09-11T00:27:23.117046025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14cb4f849b465183c01af57d9db85b9a502a39b52c5bdb37362118dc808c801e\" id:\"02c4b853570e6518e789ae3f362125780b0095f609a5d42c8c9813bf0d2d3629\" pid:5693 exited_at:{seconds:1757550443 nanos:116718290}"
Sep 11 00:27:23.215560 sshd[5669]: Connection closed by 10.0.0.1 port 44504
Sep 11 00:27:23.217859 sshd-session[5666]: pam_unix(sshd:session): session closed for user core
Sep 11 00:27:23.235663 systemd[1]: Started sshd@20-10.0.0.132:22-10.0.0.1:44520.service - OpenSSH per-connection server daemon (10.0.0.1:44520).
Sep 11 00:27:23.236798 systemd[1]: sshd@19-10.0.0.132:22-10.0.0.1:44504.service: Deactivated successfully.
Sep 11 00:27:23.241822 systemd[1]: session-19.scope: Deactivated successfully.
Sep 11 00:27:23.242223 systemd[1]: session-19.scope: Consumed 646ms CPU time, 71.7M memory peak.
Sep 11 00:27:23.244355 systemd-logind[1535]: Session 19 logged out. Waiting for processes to exit.
Sep 11 00:27:23.245844 systemd-logind[1535]: Removed session 19.
Sep 11 00:27:23.293616 sshd[5711]: Accepted publickey for core from 10.0.0.1 port 44520 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:27:23.295420 sshd-session[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:27:23.300869 systemd-logind[1535]: New session 20 of user core.
Sep 11 00:27:23.310589 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 11 00:27:23.628058 sshd[5717]: Connection closed by 10.0.0.1 port 44520
Sep 11 00:27:23.629267 sshd-session[5711]: pam_unix(sshd:session): session closed for user core
Sep 11 00:27:23.640342 systemd[1]: sshd@20-10.0.0.132:22-10.0.0.1:44520.service: Deactivated successfully.
Sep 11 00:27:23.642829 systemd[1]: session-20.scope: Deactivated successfully.
Sep 11 00:27:23.643682 systemd-logind[1535]: Session 20 logged out. Waiting for processes to exit.
Sep 11 00:27:23.647827 systemd[1]: Started sshd@21-10.0.0.132:22-10.0.0.1:44524.service - OpenSSH per-connection server daemon (10.0.0.1:44524).
Sep 11 00:27:23.649359 systemd-logind[1535]: Removed session 20.
Sep 11 00:27:23.703731 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 44524 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:27:23.705491 sshd-session[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:27:23.710094 systemd-logind[1535]: New session 21 of user core.
Sep 11 00:27:23.720591 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 11 00:27:23.837702 sshd[5732]: Connection closed by 10.0.0.1 port 44524
Sep 11 00:27:23.838059 sshd-session[5729]: pam_unix(sshd:session): session closed for user core
Sep 11 00:27:23.842876 systemd[1]: sshd@21-10.0.0.132:22-10.0.0.1:44524.service: Deactivated successfully.
Sep 11 00:27:23.844954 systemd[1]: session-21.scope: Deactivated successfully.
Sep 11 00:27:23.845720 systemd-logind[1535]: Session 21 logged out. Waiting for processes to exit.
Sep 11 00:27:23.846771 systemd-logind[1535]: Removed session 21.
Sep 11 00:27:28.854567 systemd[1]: Started sshd@22-10.0.0.132:22-10.0.0.1:44530.service - OpenSSH per-connection server daemon (10.0.0.1:44530).
Sep 11 00:27:28.937753 sshd[5753]: Accepted publickey for core from 10.0.0.1 port 44530 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:27:28.939938 sshd-session[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:27:28.947388 systemd-logind[1535]: New session 22 of user core.
Sep 11 00:27:28.954579 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 11 00:27:29.144308 sshd[5758]: Connection closed by 10.0.0.1 port 44530
Sep 11 00:27:29.146696 sshd-session[5753]: pam_unix(sshd:session): session closed for user core
Sep 11 00:27:29.153903 systemd-logind[1535]: Session 22 logged out. Waiting for processes to exit.
Sep 11 00:27:29.155101 systemd[1]: sshd@22-10.0.0.132:22-10.0.0.1:44530.service: Deactivated successfully.
Sep 11 00:27:29.159799 systemd[1]: session-22.scope: Deactivated successfully.
Sep 11 00:27:29.166610 systemd-logind[1535]: Removed session 22.
Sep 11 00:27:31.918403 kubelet[2726]: E0911 00:27:31.917859 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:27:34.162729 systemd[1]: Started sshd@23-10.0.0.132:22-10.0.0.1:42936.service - OpenSSH per-connection server daemon (10.0.0.1:42936).
Sep 11 00:27:34.222943 sshd[5773]: Accepted publickey for core from 10.0.0.1 port 42936 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:27:34.224709 sshd-session[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:27:34.230222 systemd-logind[1535]: New session 23 of user core.
Sep 11 00:27:34.237605 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 11 00:27:34.417362 sshd[5776]: Connection closed by 10.0.0.1 port 42936
Sep 11 00:27:34.417671 sshd-session[5773]: pam_unix(sshd:session): session closed for user core
Sep 11 00:27:34.423586 systemd[1]: sshd@23-10.0.0.132:22-10.0.0.1:42936.service: Deactivated successfully.
Sep 11 00:27:34.425804 systemd[1]: session-23.scope: Deactivated successfully.
Sep 11 00:27:34.426749 systemd-logind[1535]: Session 23 logged out. Waiting for processes to exit.
Sep 11 00:27:34.428085 systemd-logind[1535]: Removed session 23.
Sep 11 00:27:39.432457 systemd[1]: Started sshd@24-10.0.0.132:22-10.0.0.1:42952.service - OpenSSH per-connection server daemon (10.0.0.1:42952).
Sep 11 00:27:39.512218 sshd[5792]: Accepted publickey for core from 10.0.0.1 port 42952 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:27:39.514346 sshd-session[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:27:39.520734 systemd-logind[1535]: New session 24 of user core.
Sep 11 00:27:39.525661 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 11 00:27:39.693542 sshd[5795]: Connection closed by 10.0.0.1 port 42952
Sep 11 00:27:39.693783 sshd-session[5792]: pam_unix(sshd:session): session closed for user core
Sep 11 00:27:39.699059 systemd[1]: sshd@24-10.0.0.132:22-10.0.0.1:42952.service: Deactivated successfully.
Sep 11 00:27:39.701287 systemd[1]: session-24.scope: Deactivated successfully.
Sep 11 00:27:39.702021 systemd-logind[1535]: Session 24 logged out. Waiting for processes to exit.
Sep 11 00:27:39.703325 systemd-logind[1535]: Removed session 24.
Sep 11 00:27:41.386077 containerd[1558]: time="2025-09-11T00:27:41.385917646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14cb4f849b465183c01af57d9db85b9a502a39b52c5bdb37362118dc808c801e\" id:\"829284d3f0f66df7725cd8d2061cc52df8735cc0ad34e5d9972c4b07f4f37e8a\" pid:5820 exited_at:{seconds:1757550461 nanos:385530292}"
Sep 11 00:27:42.916202 kubelet[2726]: E0911 00:27:42.916146 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:27:43.676365 containerd[1558]: time="2025-09-11T00:27:43.676319396Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad2a3eb3f1344e6b3e61786a8257fa28709cc7ce57bd5bc2207dcf0f5d41d3bd\" id:\"1f360503b4ec301525eb7439c2102eec8b82110666a89969d8ca3c7523b7bedb\" pid:5845 exited_at:{seconds:1757550463 nanos:675787477}"
Sep 11 00:27:44.713667 systemd[1]: Started sshd@25-10.0.0.132:22-10.0.0.1:56528.service - OpenSSH per-connection server daemon (10.0.0.1:56528).
Sep 11 00:27:44.778525 sshd[5856]: Accepted publickey for core from 10.0.0.1 port 56528 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:27:44.781019 sshd-session[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:27:44.786092 systemd-logind[1535]: New session 25 of user core.
Sep 11 00:27:44.791694 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 11 00:27:44.989350 sshd[5859]: Connection closed by 10.0.0.1 port 56528
Sep 11 00:27:44.989732 sshd-session[5856]: pam_unix(sshd:session): session closed for user core
Sep 11 00:27:44.995005 systemd[1]: sshd@25-10.0.0.132:22-10.0.0.1:56528.service: Deactivated successfully.
Sep 11 00:27:44.997031 systemd[1]: session-25.scope: Deactivated successfully.
Sep 11 00:27:44.997953 systemd-logind[1535]: Session 25 logged out. Waiting for processes to exit.
Sep 11 00:27:44.999247 systemd-logind[1535]: Removed session 25.