Sep 4 16:22:40.828295 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 14:31:01 -00 2025
Sep 4 16:22:40.828330 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=39929ed91cc8dec12f10b74359379a21a9960032f4b779521fabb4147461485b
Sep 4 16:22:40.828340 kernel: BIOS-provided physical RAM map:
Sep 4 16:22:40.828348 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 16:22:40.828354 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 4 16:22:40.828361 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 4 16:22:40.828369 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 4 16:22:40.828376 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 4 16:22:40.828394 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 4 16:22:40.828401 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 4 16:22:40.828408 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 4 16:22:40.828415 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 4 16:22:40.828422 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 4 16:22:40.828429 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 4 16:22:40.828440 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 4 16:22:40.828447 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 4 16:22:40.828454 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 4 16:22:40.828462 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 4 16:22:40.828469 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 4 16:22:40.828507 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 4 16:22:40.828514 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 4 16:22:40.828521 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 4 16:22:40.828532 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 4 16:22:40.828539 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 16:22:40.828546 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 4 16:22:40.828553 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 16:22:40.828569 kernel: NX (Execute Disable) protection: active
Sep 4 16:22:40.828576 kernel: APIC: Static calls initialized
Sep 4 16:22:40.828584 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 4 16:22:40.828591 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 4 16:22:40.828599 kernel: extended physical RAM map:
Sep 4 16:22:40.828606 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 16:22:40.828613 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 4 16:22:40.828623 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 4 16:22:40.828630 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 4 16:22:40.828637 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 4 16:22:40.828645 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 4 16:22:40.828652 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 4 16:22:40.828659 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 4 16:22:40.828666 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 4 16:22:40.828679 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 4 16:22:40.828687 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 4 16:22:40.828694 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 4 16:22:40.828702 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 4 16:22:40.828709 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 4 16:22:40.828717 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 4 16:22:40.828725 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 4 16:22:40.828734 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 4 16:22:40.828742 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 4 16:22:40.828749 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 4 16:22:40.828757 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 4 16:22:40.828764 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 4 16:22:40.828772 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 4 16:22:40.828780 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 4 16:22:40.828787 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 4 16:22:40.828795 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 16:22:40.828802 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 4 16:22:40.828812 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 16:22:40.828822 kernel: efi: EFI v2.7 by EDK II
Sep 4 16:22:40.828830 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 4 16:22:40.828838 kernel: random: crng init done
Sep 4 16:22:40.828845 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 4 16:22:40.828853 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 4 16:22:40.828860 kernel: secureboot: Secure boot disabled
Sep 4 16:22:40.828868 kernel: SMBIOS 2.8 present.
Sep 4 16:22:40.828876 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 4 16:22:40.828883 kernel: DMI: Memory slots populated: 1/1
Sep 4 16:22:40.828891 kernel: Hypervisor detected: KVM
Sep 4 16:22:40.828901 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 16:22:40.828909 kernel: kvm-clock: using sched offset of 4380592091 cycles
Sep 4 16:22:40.828917 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 16:22:40.828925 kernel: tsc: Detected 2794.748 MHz processor
Sep 4 16:22:40.828934 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 16:22:40.828942 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 16:22:40.828949 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 4 16:22:40.828957 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 4 16:22:40.828968 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 16:22:40.828976 kernel: Using GB pages for direct mapping
Sep 4 16:22:40.828984 kernel: ACPI: Early table checksum verification disabled
Sep 4 16:22:40.828992 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 4 16:22:40.829000 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 4 16:22:40.829008 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 16:22:40.829016 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 16:22:40.829023 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 4 16:22:40.829033 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 16:22:40.829041 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 16:22:40.829049 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 16:22:40.829057 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 16:22:40.829065 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 4 16:22:40.829073 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 4 16:22:40.829081 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 4 16:22:40.829091 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 4 16:22:40.829099 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 4 16:22:40.829107 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 4 16:22:40.829115 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 4 16:22:40.829122 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 4 16:22:40.829130 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 4 16:22:40.829138 kernel: No NUMA configuration found
Sep 4 16:22:40.829148 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 4 16:22:40.829156 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 4 16:22:40.829164 kernel: Zone ranges:
Sep 4 16:22:40.829172 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 16:22:40.829179 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 4 16:22:40.829187 kernel: Normal empty
Sep 4 16:22:40.829195 kernel: Device empty
Sep 4 16:22:40.829203 kernel: Movable zone start for each node
Sep 4 16:22:40.829213 kernel: Early memory node ranges
Sep 4 16:22:40.829221 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 4 16:22:40.829228 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 4 16:22:40.829236 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 4 16:22:40.829244 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 4 16:22:40.829252 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 4 16:22:40.829260 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 4 16:22:40.829269 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 4 16:22:40.829277 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 4 16:22:40.829285 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 4 16:22:40.829293 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 16:22:40.829303 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 4 16:22:40.829337 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 4 16:22:40.829348 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 16:22:40.829356 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 4 16:22:40.829364 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 4 16:22:40.829372 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 4 16:22:40.829389 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 4 16:22:40.829397 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 4 16:22:40.829405 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 16:22:40.829413 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 16:22:40.829423 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 16:22:40.829432 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 16:22:40.829440 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 16:22:40.829448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 16:22:40.829456 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 16:22:40.829464 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 16:22:40.829472 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 16:22:40.829482 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 16:22:40.829491 kernel: TSC deadline timer available
Sep 4 16:22:40.829499 kernel: CPU topo: Max. logical packages: 1
Sep 4 16:22:40.829507 kernel: CPU topo: Max. logical dies: 1
Sep 4 16:22:40.829515 kernel: CPU topo: Max. dies per package: 1
Sep 4 16:22:40.829523 kernel: CPU topo: Max. threads per core: 1
Sep 4 16:22:40.829531 kernel: CPU topo: Num. cores per package: 4
Sep 4 16:22:40.829541 kernel: CPU topo: Num. threads per package: 4
Sep 4 16:22:40.829549 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 4 16:22:40.829557 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 16:22:40.829565 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 4 16:22:40.829573 kernel: kvm-guest: setup PV sched yield
Sep 4 16:22:40.829581 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 4 16:22:40.829589 kernel: Booting paravirtualized kernel on KVM
Sep 4 16:22:40.829600 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 16:22:40.829608 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 4 16:22:40.829617 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 4 16:22:40.829625 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 4 16:22:40.829633 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 4 16:22:40.829641 kernel: kvm-guest: PV spinlocks enabled
Sep 4 16:22:40.829649 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 16:22:40.829661 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=39929ed91cc8dec12f10b74359379a21a9960032f4b779521fabb4147461485b
Sep 4 16:22:40.829669 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 16:22:40.829678 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 16:22:40.829695 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 16:22:40.829711 kernel: Fallback order for Node 0: 0
Sep 4 16:22:40.829728 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 4 16:22:40.829744 kernel: Policy zone: DMA32
Sep 4 16:22:40.829756 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 16:22:40.829764 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 16:22:40.829773 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 4 16:22:40.829781 kernel: ftrace: allocated 157 pages with 5 groups
Sep 4 16:22:40.829789 kernel: Dynamic Preempt: voluntary
Sep 4 16:22:40.829798 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 16:22:40.829807 kernel: rcu: RCU event tracing is enabled.
Sep 4 16:22:40.829817 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 16:22:40.829826 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 16:22:40.829834 kernel: Rude variant of Tasks RCU enabled.
Sep 4 16:22:40.829842 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 16:22:40.829850 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 16:22:40.829861 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 16:22:40.829869 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 16:22:40.829880 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 16:22:40.829888 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 16:22:40.829897 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 4 16:22:40.829905 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 16:22:40.829913 kernel: Console: colour dummy device 80x25
Sep 4 16:22:40.829921 kernel: printk: legacy console [ttyS0] enabled
Sep 4 16:22:40.829929 kernel: ACPI: Core revision 20240827
Sep 4 16:22:40.829937 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 16:22:40.829948 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 16:22:40.829956 kernel: x2apic enabled
Sep 4 16:22:40.829964 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 16:22:40.829972 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 4 16:22:40.829981 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 4 16:22:40.829989 kernel: kvm-guest: setup PV IPIs
Sep 4 16:22:40.829997 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 16:22:40.830007 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 4 16:22:40.830016 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 4 16:22:40.830024 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 16:22:40.830032 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 16:22:40.830040 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 16:22:40.830048 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 16:22:40.830056 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 16:22:40.830067 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 16:22:40.830075 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 16:22:40.830083 kernel: active return thunk: retbleed_return_thunk
Sep 4 16:22:40.830091 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 16:22:40.830099 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 16:22:40.830108 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 16:22:40.830116 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 4 16:22:40.830127 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 4 16:22:40.830135 kernel: active return thunk: srso_return_thunk
Sep 4 16:22:40.830143 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 4 16:22:40.830151 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 16:22:40.830160 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 16:22:40.830168 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 16:22:40.830178 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 16:22:40.830186 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 16:22:40.830194 kernel: Freeing SMP alternatives memory: 32K
Sep 4 16:22:40.830202 kernel: pid_max: default: 32768 minimum: 301
Sep 4 16:22:40.830210 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 4 16:22:40.830219 kernel: landlock: Up and running.
Sep 4 16:22:40.830227 kernel: SELinux: Initializing.
Sep 4 16:22:40.830236 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 16:22:40.830245 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 16:22:40.830253 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 16:22:40.830261 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 16:22:40.830269 kernel: ... version: 0
Sep 4 16:22:40.830277 kernel: ... bit width: 48
Sep 4 16:22:40.830285 kernel: ... generic registers: 6
Sep 4 16:22:40.830294 kernel: ... value mask: 0000ffffffffffff
Sep 4 16:22:40.830304 kernel: ... max period: 00007fffffffffff
Sep 4 16:22:40.830312 kernel: ... fixed-purpose events: 0
Sep 4 16:22:40.830331 kernel: ... event mask: 000000000000003f
Sep 4 16:22:40.830348 kernel: signal: max sigframe size: 1776
Sep 4 16:22:40.830357 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 16:22:40.830365 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 16:22:40.830377 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 4 16:22:40.830396 kernel: smp: Bringing up secondary CPUs ...
Sep 4 16:22:40.830404 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 16:22:40.830412 kernel: .... node #0, CPUs: #1 #2 #3
Sep 4 16:22:40.830420 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 16:22:40.830428 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 4 16:22:40.830437 kernel: Memory: 2422676K/2565800K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54288K init, 2680K bss, 137196K reserved, 0K cma-reserved)
Sep 4 16:22:40.830445 kernel: devtmpfs: initialized
Sep 4 16:22:40.830456 kernel: x86/mm: Memory block size: 128MB
Sep 4 16:22:40.830464 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 4 16:22:40.830472 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 4 16:22:40.830480 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 4 16:22:40.830489 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 4 16:22:40.830497 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 4 16:22:40.830505 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 4 16:22:40.830515 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 16:22:40.830524 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 16:22:40.830532 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 16:22:40.830540 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 16:22:40.830548 kernel: audit: initializing netlink subsys (disabled)
Sep 4 16:22:40.830557 kernel: audit: type=2000 audit(1757002959.061:1): state=initialized audit_enabled=0 res=1
Sep 4 16:22:40.830565 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 16:22:40.830575 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 16:22:40.830583 kernel: cpuidle: using governor menu
Sep 4 16:22:40.830592 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 16:22:40.830600 kernel: dca service started, version 1.12.1
Sep 4 16:22:40.830608 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 4 16:22:40.830616 kernel: PCI: Using configuration type 1 for base access
Sep 4 16:22:40.830624 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 16:22:40.830634 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 16:22:40.830643 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 16:22:40.830651 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 16:22:40.830659 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 16:22:40.830667 kernel: ACPI: Added _OSI(Module Device)
Sep 4 16:22:40.830683 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 16:22:40.830698 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 16:22:40.830721 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 16:22:40.830736 kernel: ACPI: Interpreter enabled
Sep 4 16:22:40.830744 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 16:22:40.830752 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 16:22:40.830761 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 16:22:40.830769 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 16:22:40.830777 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 4 16:22:40.830787 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 16:22:40.831110 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 16:22:40.831283 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 4 16:22:40.831504 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 4 16:22:40.831516 kernel: PCI host bridge to bus 0000:00
Sep 4 16:22:40.831690 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 16:22:40.831844 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 16:22:40.831992 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 16:22:40.832142 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 4 16:22:40.832351 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 4 16:22:40.832523 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 4 16:22:40.832676 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 16:22:40.832870 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 4 16:22:40.833047 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 4 16:22:40.833232 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 4 16:22:40.833427 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 4 16:22:40.833593 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 4 16:22:40.833756 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 16:22:40.833934 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 4 16:22:40.834095 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 4 16:22:40.834256 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 4 16:22:40.834439 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 4 16:22:40.834620 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 4 16:22:40.834786 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 4 16:22:40.834947 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 4 16:22:40.835108 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 4 16:22:40.835286 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 4 16:22:40.835471 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 4 16:22:40.835635 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 4 16:22:40.835796 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 4 16:22:40.835960 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 4 16:22:40.836136 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 4 16:22:40.836296 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 4 16:22:40.836493 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 4 16:22:40.836657 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 4 16:22:40.836816 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 4 16:22:40.836993 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 4 16:22:40.837152 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 4 16:22:40.837164 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 16:22:40.837176 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 16:22:40.837185 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 16:22:40.837194 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 16:22:40.837202 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 4 16:22:40.837210 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 4 16:22:40.837219 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 4 16:22:40.837227 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 4 16:22:40.837238 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 4 16:22:40.837246 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 4 16:22:40.837254 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 4 16:22:40.837262 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 4 16:22:40.837271 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 4 16:22:40.837279 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 4 16:22:40.837287 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 4 16:22:40.837298 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 4 16:22:40.837306 kernel: iommu: Default domain type: Translated
Sep 4 16:22:40.837315 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 16:22:40.837347 kernel: efivars: Registered efivars operations
Sep 4 16:22:40.837356 kernel: PCI: Using ACPI for IRQ routing
Sep 4 16:22:40.837364 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 16:22:40.837373 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 4 16:22:40.837388 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 4 16:22:40.837399 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 4 16:22:40.837407 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 4 16:22:40.837415 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 4 16:22:40.837423 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 4 16:22:40.837432 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 4 16:22:40.837440 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 4 16:22:40.837605 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 4 16:22:40.837764 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 4 16:22:40.837926 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 16:22:40.837938 kernel: vgaarb: loaded
Sep 4 16:22:40.837947 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 16:22:40.837955 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 16:22:40.837964 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 16:22:40.837975 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 16:22:40.837984 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 16:22:40.837993 kernel: pnp: PnP ACPI init
Sep 4 16:22:40.838220 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 4 16:22:40.838236 kernel: pnp: PnP ACPI: found 6 devices
Sep 4 16:22:40.838245 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 16:22:40.838256 kernel: NET: Registered PF_INET protocol family
Sep 4 16:22:40.838265 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 16:22:40.838273 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 16:22:40.838282 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 16:22:40.838291 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 16:22:40.838299 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 16:22:40.838308 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 16:22:40.838332 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 16:22:40.838341 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 16:22:40.838350 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 16:22:40.838358 kernel: NET: Registered PF_XDP protocol family
Sep 4 16:22:40.838528 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 4 16:22:40.838690 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 4 16:22:40.838839 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 16:22:40.838991 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 16:22:40.839139 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 16:22:40.839286 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 4 16:22:40.839455 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 4 16:22:40.839604 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 4 16:22:40.839615 kernel: PCI: CLS 0 bytes, default 64
Sep 4 16:22:40.839628 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 4 16:22:40.839637 kernel: Initialise system trusted keyrings
Sep 4 16:22:40.839648 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 16:22:40.839657 kernel: Key type asymmetric registered
Sep 4 16:22:40.839665 kernel: Asymmetric key parser 'x509' registered
Sep 4 16:22:40.839676 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 16:22:40.839685 kernel: io scheduler mq-deadline registered
Sep 4 16:22:40.839694 kernel: io scheduler kyber registered
Sep 4 16:22:40.839702 kernel: io scheduler bfq registered
Sep 4 16:22:40.839711 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 16:22:40.839720 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 4 16:22:40.839729 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 4 16:22:40.839740 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 4 16:22:40.839749 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 16:22:40.839757 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 16:22:40.839766 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 16:22:40.839775 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 16:22:40.839783 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 16:22:40.839951 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 4 16:22:40.839967 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 16:22:40.840119 kernel: rtc_cmos 00:04: registered as rtc0
Sep 4 16:22:40.840272 kernel: rtc_cmos 00:04: setting system clock to 2025-09-04T16:22:40 UTC (1757002960)
Sep 4 16:22:40.840449 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 4 16:22:40.840461 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 16:22:40.840470 kernel: efifb: probing for efifb
Sep 4 16:22:40.840482 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 4 16:22:40.840491 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 4 16:22:40.840500 kernel: efifb: scrolling: redraw
Sep 4 16:22:40.840508 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 16:22:40.840517 kernel: Console: switching to colour frame buffer device 160x50
Sep 4 16:22:40.840525 kernel: fb0: EFI VGA frame buffer device
Sep 4 16:22:40.840534 kernel: pstore: Using crash dump compression: deflate
Sep 4 16:22:40.840543 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 16:22:40.840553 kernel: NET: Registered PF_INET6 protocol family
Sep 4 16:22:40.840562 kernel: Segment Routing with IPv6
Sep 4 16:22:40.840571 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 16:22:40.840581 kernel: NET: Registered PF_PACKET protocol family
Sep 4 16:22:40.840590 kernel: Key type dns_resolver registered
Sep 4 16:22:40.840599 kernel: IPI shorthand broadcast: enabled
Sep 4 16:22:40.840607 kernel: sched_clock: Marking stable (3224002634, 151523493)->(3392463319, -16937192)
Sep 4 16:22:40.840618 kernel: registered taskstats version 1
Sep 4 16:22:40.840627 kernel: Loading compiled-in X.509 certificates
Sep 4 16:22:40.840636 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 250d2bafae7fa56c92cf187a0b8b7b2cdd349fc7'
Sep 4 16:22:40.840645 kernel: Demotion targets for Node 0: null
Sep 4 16:22:40.840654 kernel: Key type .fscrypt registered
Sep 4 16:22:40.840662 kernel: Key type fscrypt-provisioning registered
Sep 4 16:22:40.840671 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 16:22:40.840682 kernel: ima: Allocated hash algorithm: sha1
Sep 4 16:22:40.840690 kernel: ima: No architecture policies found
Sep 4 16:22:40.840698 kernel: clk: Disabling unused clocks
Sep 4 16:22:40.840707 kernel: Warning: unable to open an initial console.
Sep 4 16:22:40.840716 kernel: Freeing unused kernel image (initmem) memory: 54288K
Sep 4 16:22:40.840724 kernel: Write protecting the kernel read-only data: 24576k
Sep 4 16:22:40.840733 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 4 16:22:40.840744 kernel: Run /init as init process
Sep 4 16:22:40.840753 kernel: with arguments:
Sep 4 16:22:40.840762 kernel: /init
Sep 4 16:22:40.840771 kernel: with environment:
Sep 4 16:22:40.840779 kernel: HOME=/
Sep 4 16:22:40.840788 kernel: TERM=linux
Sep 4 16:22:40.840796 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 16:22:40.840808 systemd[1]: Successfully made /usr/ read-only.
Sep 4 16:22:40.840820 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 16:22:40.840830 systemd[1]: Detected virtualization kvm.
Sep 4 16:22:40.840839 systemd[1]: Detected architecture x86-64.
Sep 4 16:22:40.840848 systemd[1]: Running in initrd.
Sep 4 16:22:40.840857 systemd[1]: No hostname configured, using default hostname.
Sep 4 16:22:40.840869 systemd[1]: Hostname set to .
Sep 4 16:22:40.840878 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Sep 4 16:22:40.840886 systemd[1]: Queued start job for default target initrd.target.
Sep 4 16:22:40.840896 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 16:22:40.840905 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 16:22:40.840915 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 16:22:40.840927 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 16:22:40.840936 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 16:22:40.840947 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 16:22:40.840957 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 16:22:40.840967 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 16:22:40.840976 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 16:22:40.840987 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 16:22:40.840997 systemd[1]: Reached target paths.target - Path Units.
Sep 4 16:22:40.841006 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 16:22:40.841015 systemd[1]: Reached target swap.target - Swaps.
Sep 4 16:22:40.841024 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 16:22:40.841033 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 16:22:40.841042 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 16:22:40.841054 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 16:22:40.841063 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 16:22:40.841072 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 16:22:40.841081 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 16:22:40.841091 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 16:22:40.841100 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 16:22:40.841112 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 16:22:40.841121 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 16:22:40.841130 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 16:22:40.841142 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 4 16:22:40.841151 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 16:22:40.841161 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 16:22:40.841170 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 16:22:40.841181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 16:22:40.841191 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 16:22:40.841200 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 16:22:40.841210 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 16:22:40.841221 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 16:22:40.841230 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 16:22:40.841260 systemd-journald[220]: Collecting audit messages is disabled.
Sep 4 16:22:40.841284 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 16:22:40.841294 systemd-journald[220]: Journal started
Sep 4 16:22:40.841313 systemd-journald[220]: Runtime Journal (/run/log/journal/c410afbeb05b4b0a99453cc6c9403341) is 6M, max 48.5M, 42.4M free.
Sep 4 16:22:40.831120 systemd-modules-load[223]: Inserted module 'overlay'
Sep 4 16:22:40.855392 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 16:22:40.858917 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 16:22:40.861474 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 16:22:40.861969 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 16:22:40.865525 kernel: Bridge firewalling registered
Sep 4 16:22:40.863984 systemd-modules-load[223]: Inserted module 'br_netfilter'
Sep 4 16:22:40.864890 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 16:22:40.869234 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 16:22:40.871994 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 16:22:40.881094 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 16:22:40.888696 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 4 16:22:40.893938 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 16:22:40.898155 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 16:22:40.899861 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 16:22:40.911499 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 16:22:40.914018 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 16:22:40.940119 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=39929ed91cc8dec12f10b74359379a21a9960032f4b779521fabb4147461485b
Sep 4 16:22:40.946293 systemd-resolved[260]: Positive Trust Anchors:
Sep 4 16:22:40.946307 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 16:22:40.946311 systemd-resolved[260]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Sep 4 16:22:40.946354 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 16:22:40.948801 systemd-resolved[260]: Defaulting to hostname 'linux'.
Sep 4 16:22:40.949843 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 16:22:40.956812 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 16:22:41.051385 kernel: SCSI subsystem initialized
Sep 4 16:22:41.060357 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 16:22:41.070353 kernel: iscsi: registered transport (tcp)
Sep 4 16:22:41.091416 kernel: iscsi: registered transport (qla4xxx)
Sep 4 16:22:41.091458 kernel: QLogic iSCSI HBA Driver
Sep 4 16:22:41.112267 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 16:22:41.128138 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 16:22:41.132149 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 16:22:41.182688 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 16:22:41.185173 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 16:22:41.243353 kernel: raid6: avx2x4 gen() 30531 MB/s
Sep 4 16:22:41.260345 kernel: raid6: avx2x2 gen() 31040 MB/s
Sep 4 16:22:41.277387 kernel: raid6: avx2x1 gen() 25895 MB/s
Sep 4 16:22:41.277404 kernel: raid6: using algorithm avx2x2 gen() 31040 MB/s
Sep 4 16:22:41.295393 kernel: raid6: .... xor() 19962 MB/s, rmw enabled
Sep 4 16:22:41.295419 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 16:22:41.315348 kernel: xor: automatically using best checksumming function avx
Sep 4 16:22:41.478392 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 16:22:41.487573 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 16:22:41.490228 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 16:22:41.522971 systemd-udevd[474]: Using default interface naming scheme 'v257'.
Sep 4 16:22:41.529455 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 16:22:41.530293 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 16:22:41.554352 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Sep 4 16:22:41.584743 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 16:22:41.588422 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 16:22:41.660974 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 16:22:41.664456 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 16:22:41.706345 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 4 16:22:41.717806 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 16:22:41.722360 kernel: libata version 3.00 loaded.
Sep 4 16:22:41.726365 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 4 16:22:41.726431 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 16:22:41.729378 kernel: GPT:9289727 != 19775487
Sep 4 16:22:41.729406 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 16:22:41.729419 kernel: GPT:9289727 != 19775487
Sep 4 16:22:41.729435 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 16:22:41.729445 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 16:22:41.734351 kernel: ahci 0000:00:1f.2: version 3.0
Sep 4 16:22:41.736372 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 16:22:41.741520 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 4 16:22:41.741551 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 4 16:22:41.742109 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 4 16:22:41.742339 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 4 16:22:41.745341 kernel: scsi host0: ahci
Sep 4 16:22:41.746570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 16:22:41.746983 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 16:22:41.750339 kernel: scsi host1: ahci
Sep 4 16:22:41.751521 kernel: scsi host2: ahci
Sep 4 16:22:41.751454 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 16:22:41.756333 kernel: AES CTR mode by8 optimization enabled
Sep 4 16:22:41.756381 kernel: scsi host3: ahci
Sep 4 16:22:41.755084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 16:22:41.761343 kernel: scsi host4: ahci
Sep 4 16:22:41.761872 kernel: scsi host5: ahci
Sep 4 16:22:41.764545 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 lpm-pol 1
Sep 4 16:22:41.764586 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 lpm-pol 1
Sep 4 16:22:41.764604 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 lpm-pol 1
Sep 4 16:22:41.764615 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 lpm-pol 1
Sep 4 16:22:41.764626 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 lpm-pol 1
Sep 4 16:22:41.764636 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 lpm-pol 1
Sep 4 16:22:41.763439 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 16:22:41.763544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 16:22:41.811619 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 16:22:41.831824 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 16:22:41.838416 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 16:22:41.838490 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 16:22:41.848822 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 16:22:41.849704 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 16:22:41.853336 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 16:22:41.885390 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 16:22:42.078378 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 4 16:22:42.078476 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 4 16:22:42.079413 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 4 16:22:42.079502 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 4 16:22:42.080354 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 4 16:22:42.081364 kernel: ata3.00: LPM support broken, forcing max_power
Sep 4 16:22:42.082554 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 4 16:22:42.082569 kernel: ata3.00: applying bridge limits
Sep 4 16:22:42.083364 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 4 16:22:42.083385 kernel: ata3.00: LPM support broken, forcing max_power
Sep 4 16:22:42.134476 kernel: ata3.00: configured for UDMA/100
Sep 4 16:22:42.135380 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 4 16:22:42.194353 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 4 16:22:42.194630 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 16:22:42.220649 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 4 16:22:42.297258 disk-uuid[623]: Primary Header is updated.
Sep 4 16:22:42.297258 disk-uuid[623]: Secondary Entries is updated.
Sep 4 16:22:42.297258 disk-uuid[623]: Secondary Header is updated.
Sep 4 16:22:42.302375 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 16:22:42.307353 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 16:22:42.516260 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 16:22:42.518823 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 16:22:42.520264 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 16:22:42.523165 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 16:22:42.526758 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 16:22:42.561730 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 16:22:43.329363 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 16:22:43.330080 disk-uuid[641]: The operation has completed successfully.
Sep 4 16:22:43.362917 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 16:22:43.363036 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 16:22:43.400789 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 16:22:43.431532 sh[668]: Success
Sep 4 16:22:43.449350 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 16:22:43.449381 kernel: device-mapper: uevent: version 1.0.3
Sep 4 16:22:43.449393 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 4 16:22:43.459361 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 4 16:22:43.488756 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 16:22:43.491617 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 16:22:43.514414 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 16:22:43.520217 kernel: BTRFS: device fsid ac7b5b49-8d71-4968-afd7-5e4410595bf4 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (680)
Sep 4 16:22:43.520244 kernel: BTRFS info (device dm-0): first mount of filesystem ac7b5b49-8d71-4968-afd7-5e4410595bf4
Sep 4 16:22:43.520261 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 16:22:43.525626 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 16:22:43.525645 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 4 16:22:43.526959 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 16:22:43.529273 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 16:22:43.531462 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 16:22:43.532530 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 16:22:43.535992 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 16:22:43.563258 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711)
Sep 4 16:22:43.563363 kernel: BTRFS info (device vda6): first mount of filesystem c498a12e-1387-4e64-bf04-402560df6433
Sep 4 16:22:43.563381 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 16:22:43.567613 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 16:22:43.567711 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 16:22:43.573384 kernel: BTRFS info (device vda6): last unmount of filesystem c498a12e-1387-4e64-bf04-402560df6433
Sep 4 16:22:43.575280 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 16:22:43.578838 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 16:22:43.672634 ignition[754]: Ignition 2.22.0
Sep 4 16:22:43.672647 ignition[754]: Stage: fetch-offline
Sep 4 16:22:43.672678 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Sep 4 16:22:43.672686 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:22:43.672767 ignition[754]: parsed url from cmdline: ""
Sep 4 16:22:43.672771 ignition[754]: no config URL provided
Sep 4 16:22:43.672776 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 16:22:43.672784 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Sep 4 16:22:43.672805 ignition[754]: op(1): [started] loading QEMU firmware config module
Sep 4 16:22:43.672810 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 16:22:43.682183 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 16:22:43.683377 ignition[754]: op(1): [finished] loading QEMU firmware config module
Sep 4 16:22:43.686468 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 16:22:43.723278 ignition[754]: parsing config with SHA512: 28704429e3ee63742aaf3b1523c5f9254fe2edd757b4433e5aece743640ca319302274b74e6ef7298dfd0b23a736ae05a6454debac8a3135a240ce198c2e1813
Sep 4 16:22:43.728398 unknown[754]: fetched base config from "system"
Sep 4 16:22:43.728409 unknown[754]: fetched user config from "qemu"
Sep 4 16:22:43.730150 ignition[754]: fetch-offline: fetch-offline passed
Sep 4 16:22:43.730932 ignition[754]: Ignition finished successfully
Sep 4 16:22:43.733842 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 16:22:43.737240 systemd-networkd[858]: lo: Link UP
Sep 4 16:22:43.737254 systemd-networkd[858]: lo: Gained carrier
Sep 4 16:22:43.739200 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 16:22:43.739519 systemd-networkd[858]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 16:22:43.739525 systemd-networkd[858]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 16:22:43.740610 systemd-networkd[858]: eth0: Link UP
Sep 4 16:22:43.740784 systemd[1]: Reached target network.target - Network.
Sep 4 16:22:43.740855 systemd-networkd[858]: eth0: Gained carrier
Sep 4 16:22:43.740866 systemd-networkd[858]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Sep 4 16:22:43.741747 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 16:22:43.742612 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 16:22:43.759377 systemd-networkd[858]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 16:22:43.779453 ignition[863]: Ignition 2.22.0
Sep 4 16:22:43.779465 ignition[863]: Stage: kargs
Sep 4 16:22:43.779588 ignition[863]: no configs at "/usr/lib/ignition/base.d"
Sep 4 16:22:43.779598 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:22:43.780284 ignition[863]: kargs: kargs passed
Sep 4 16:22:43.780343 ignition[863]: Ignition finished successfully
Sep 4 16:22:43.784659 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 16:22:43.785775 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 16:22:43.830069 ignition[872]: Ignition 2.22.0
Sep 4 16:22:43.830089 ignition[872]: Stage: disks
Sep 4 16:22:43.830215 ignition[872]: no configs at "/usr/lib/ignition/base.d"
Sep 4 16:22:43.830225 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:22:43.830898 ignition[872]: disks: disks passed
Sep 4 16:22:43.833909 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 16:22:43.830939 ignition[872]: Ignition finished successfully
Sep 4 16:22:43.834435 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 16:22:43.834728 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 16:22:43.835028 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 16:22:43.835512 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 16:22:43.835829 systemd[1]: Reached target basic.target - Basic System.
Sep 4 16:22:43.837133 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 16:22:43.864665 systemd-fsck[882]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 4 16:22:44.071471 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 16:22:44.072611 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 16:22:44.182369 kernel: EXT4-fs (vda9): mounted filesystem 5b9a7850-c07f-470b-a91c-362c3904243c r/w with ordered data mode. Quota mode: none.
Sep 4 16:22:44.183269 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 16:22:44.183961 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 16:22:44.186273 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 16:22:44.189202 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 16:22:44.190635 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 16:22:44.190673 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 16:22:44.190700 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 16:22:44.203613 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 16:22:44.206392 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 16:22:44.209874 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (890)
Sep 4 16:22:44.212354 kernel: BTRFS info (device vda6): first mount of filesystem c498a12e-1387-4e64-bf04-402560df6433
Sep 4 16:22:44.212378 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 16:22:44.215351 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 16:22:44.215427 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 16:22:44.217478 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 16:22:44.257064 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 16:22:44.262509 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory
Sep 4 16:22:44.267549 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 16:22:44.271538 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 16:22:44.359814 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 16:22:44.360964 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 16:22:44.363952 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 16:22:44.388354 kernel: BTRFS info (device vda6): last unmount of filesystem c498a12e-1387-4e64-bf04-402560df6433
Sep 4 16:22:44.400855 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 16:22:44.418566 ignition[1004]: INFO : Ignition 2.22.0
Sep 4 16:22:44.418566 ignition[1004]: INFO : Stage: mount
Sep 4 16:22:44.420736 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 16:22:44.420736 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 16:22:44.420736 ignition[1004]: INFO : mount: mount passed
Sep 4 16:22:44.420736 ignition[1004]: INFO : Ignition finished successfully
Sep 4 16:22:44.422628 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 16:22:44.425492 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 16:22:44.520000 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 16:22:44.522162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 16:22:44.555346 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1016)
Sep 4 16:22:44.557417 kernel: BTRFS info (device vda6): first mount of filesystem c498a12e-1387-4e64-bf04-402560df6433
Sep 4 16:22:44.557432 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 16:22:44.560428 kernel: BTRFS info (device vda6): turning on async discard
Sep 4 16:22:44.560500 kernel: BTRFS info (device vda6): enabling free space tree
Sep 4 16:22:44.562591 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 16:22:44.607072 ignition[1033]: INFO : Ignition 2.22.0 Sep 4 16:22:44.607072 ignition[1033]: INFO : Stage: files Sep 4 16:22:44.609310 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 16:22:44.609310 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 16:22:44.609310 ignition[1033]: DEBUG : files: compiled without relabeling support, skipping Sep 4 16:22:44.609310 ignition[1033]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 16:22:44.609310 ignition[1033]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 16:22:44.615959 ignition[1033]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 16:22:44.615959 ignition[1033]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 16:22:44.615959 ignition[1033]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 16:22:44.615959 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 4 16:22:44.615959 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 4 16:22:44.612459 unknown[1033]: wrote ssh authorized keys file for user: core Sep 4 16:22:44.669307 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 16:22:44.863098 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 4 16:22:44.863098 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 16:22:44.867010 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 
16:22:44.867010 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 16:22:44.867010 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 16:22:44.867010 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 16:22:44.867010 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 16:22:44.867010 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 16:22:44.867010 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 16:22:44.879173 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 16:22:44.879173 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 16:22:44.879173 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 16:22:44.879173 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 16:22:44.879173 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 16:22:44.879173 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 4 16:22:45.503507 systemd-networkd[858]: eth0: Gained IPv6LL Sep 4 16:22:45.660072 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 16:22:45.955854 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 16:22:45.955854 ignition[1033]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 16:22:45.959649 ignition[1033]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 16:22:45.963031 ignition[1033]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 16:22:45.963031 ignition[1033]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 16:22:45.963031 ignition[1033]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 4 16:22:45.967751 ignition[1033]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 16:22:45.967751 ignition[1033]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 16:22:45.967751 ignition[1033]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 4 16:22:45.967751 ignition[1033]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 4 16:22:45.982974 ignition[1033]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 16:22:46.930649 ignition[1033]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 16:22:46.932874 
ignition[1033]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 4 16:22:46.932874 ignition[1033]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 4 16:22:46.932874 ignition[1033]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 16:22:46.932874 ignition[1033]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 16:22:46.932874 ignition[1033]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 16:22:46.932874 ignition[1033]: INFO : files: files passed Sep 4 16:22:46.932874 ignition[1033]: INFO : Ignition finished successfully Sep 4 16:22:46.945026 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 16:22:46.947621 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 16:22:46.949883 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 16:22:46.970799 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 16:22:46.970947 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 16:22:46.974987 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory Sep 4 16:22:46.978237 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 16:22:46.978237 initrd-setup-root-after-ignition[1064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 16:22:46.981866 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 16:22:46.986098 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Sep 4 16:22:46.987600 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 16:22:46.990526 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 16:22:47.034524 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 16:22:47.034684 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 16:22:47.036932 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 16:22:47.037943 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 16:22:47.040766 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 16:22:47.043059 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 16:22:47.066808 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 16:22:47.069224 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 16:22:47.099429 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 16:22:47.099587 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 16:22:47.102760 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 16:22:47.103842 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 16:22:47.103959 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 16:22:47.107378 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 16:22:47.108452 systemd[1]: Stopped target basic.target - Basic System. Sep 4 16:22:47.108759 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 16:22:47.109076 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 16:22:47.109565 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Sep 4 16:22:47.109883 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 4 16:22:47.110208 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 16:22:47.110689 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 16:22:47.111024 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 16:22:47.111355 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 16:22:47.111807 systemd[1]: Stopped target swap.target - Swaps. Sep 4 16:22:47.112097 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 16:22:47.112199 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 16:22:47.128185 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 16:22:47.128697 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 16:22:47.128972 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 16:22:47.129102 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 16:22:47.134399 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 16:22:47.134512 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 16:22:47.138466 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 16:22:47.138578 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 16:22:47.139524 systemd[1]: Stopped target paths.target - Path Units. Sep 4 16:22:47.139754 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 16:22:47.146402 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 16:22:47.149392 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 16:22:47.149523 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 4 16:22:47.153005 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 16:22:47.153128 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 16:22:47.154031 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 16:22:47.154135 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 16:22:47.156042 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 16:22:47.156210 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 16:22:47.158042 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 16:22:47.158182 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 16:22:47.162853 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 16:22:47.165625 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 16:22:47.167229 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 16:22:47.167366 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 16:22:47.167862 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 16:22:47.167957 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 16:22:47.168758 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 16:22:47.168896 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 16:22:47.173311 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 16:22:47.198596 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 16:22:47.221486 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 4 16:22:47.234130 ignition[1088]: INFO : Ignition 2.22.0 Sep 4 16:22:47.234130 ignition[1088]: INFO : Stage: umount Sep 4 16:22:47.235817 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 16:22:47.235817 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 16:22:47.235817 ignition[1088]: INFO : umount: umount passed Sep 4 16:22:47.235817 ignition[1088]: INFO : Ignition finished successfully Sep 4 16:22:47.238332 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 16:22:47.238457 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 16:22:47.238891 systemd[1]: Stopped target network.target - Network. Sep 4 16:22:47.241755 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 16:22:47.241811 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 16:22:47.242763 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 16:22:47.242809 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 16:22:47.243072 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 16:22:47.243118 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 16:22:47.243564 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 16:22:47.243609 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 16:22:47.243960 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 16:22:47.244260 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 16:22:47.259222 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 16:22:47.259385 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 16:22:47.263216 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 16:22:47.263354 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
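The three Ignition stage transcripts above (mount, files, umount) follow a fixed pattern: each stage announces itself with `Stage: <name>` and, on success, prints `<name>: <name> passed`. A minimal Python sketch for summarizing stage results from such a journal dump — the regexes are assumptions inferred from the lines shown here, not a documented Ignition log format:

```python
import re

# Each Ignition stage prints "Stage: <name>" at start and
# "<name>: <name> passed" on success; collect stage -> result.
STAGE_RE = re.compile(r"ignition\[(\d+)\]: INFO : Stage: (\w+)")
RESULT_RE = re.compile(r"ignition\[\d+\]: INFO : (\w+): \w+ (passed|failed)")

def ignition_summary(lines):
    """Return {stage: result} for every Ignition stage seen in the log."""
    summary = {}
    for line in lines:
        m = STAGE_RE.search(line)
        if m:
            summary[m.group(2)] = None  # stage announced, result pending
        m = RESULT_RE.search(line)
        if m:
            summary[m.group(1)] = m.group(2)
    return summary

# Sample lines taken verbatim from the log above
sample = [
    'ignition[1004]: INFO : Stage: mount',
    'ignition[1004]: INFO : mount: mount passed',
    'ignition[1033]: INFO : Stage: files',
    'ignition[1033]: INFO : files: files passed',
    'ignition[1088]: INFO : Stage: umount',
    'ignition[1088]: INFO : umount: umount passed',
]
print(ignition_summary(sample))
# → {'mount': 'passed', 'files': 'passed', 'umount': 'passed'}
```

In practice one would feed this the output of `journalctl -o short` filtered to the ignition unit; the sketch only covers the INFO-level stage lines visible in this boot.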
Sep 4 16:22:47.268419 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 4 16:22:47.268573 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 16:22:47.268625 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 16:22:47.271485 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 16:22:47.272312 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 16:22:47.272375 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 16:22:47.272853 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 16:22:47.272896 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 16:22:47.273138 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 16:22:47.273178 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 16:22:47.273798 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 16:22:47.307657 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 16:22:47.307927 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 16:22:47.312052 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 16:22:47.312160 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 16:22:47.315140 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 16:22:47.315202 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 16:22:47.315597 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 16:22:47.315657 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 16:22:47.316277 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 16:22:47.316357 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Sep 4 16:22:47.317041 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 16:22:47.317098 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 16:22:47.333102 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 16:22:47.335396 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 4 16:22:47.335485 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 16:22:47.336798 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 16:22:47.336867 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 16:22:47.338958 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 16:22:47.339018 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 16:22:47.341485 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 16:22:47.341545 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 16:22:47.343736 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 16:22:47.343784 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 16:22:47.347021 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 16:22:47.347146 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 16:22:47.352495 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 16:22:47.352606 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 16:22:47.663506 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 16:22:47.663660 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 16:22:47.665806 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Sep 4 16:22:47.666427 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 16:22:47.666493 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 16:22:47.669248 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 16:22:47.690707 systemd[1]: Switching root. Sep 4 16:22:47.725464 systemd-journald[220]: Journal stopped Sep 4 16:22:50.084897 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Sep 4 16:22:50.084998 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 16:22:50.085023 kernel: SELinux: policy capability open_perms=1 Sep 4 16:22:50.085043 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 16:22:50.085055 kernel: SELinux: policy capability always_check_network=0 Sep 4 16:22:50.085067 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 16:22:50.085080 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 16:22:50.085098 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 16:22:50.085111 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 16:22:50.085131 kernel: SELinux: policy capability userspace_initial_context=0 Sep 4 16:22:50.085146 kernel: audit: type=1403 audit(1757002969.010:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 16:22:50.085191 systemd[1]: Successfully loaded SELinux policy in 65.187ms. Sep 4 16:22:50.085223 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.407ms. Sep 4 16:22:50.085237 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 16:22:50.085257 systemd[1]: Detected virtualization kvm. Sep 4 16:22:50.085270 systemd[1]: Detected architecture x86-64. 
Sep 4 16:22:50.085283 systemd[1]: Detected first boot. Sep 4 16:22:50.085296 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Sep 4 16:22:50.085309 zram_generator::config[1133]: No configuration found. Sep 4 16:22:50.085335 kernel: Guest personality initialized and is inactive Sep 4 16:22:50.085347 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 4 16:22:50.085370 kernel: Initialized host personality Sep 4 16:22:50.085407 kernel: NET: Registered PF_VSOCK protocol family Sep 4 16:22:50.085421 systemd[1]: Populated /etc with preset unit settings. Sep 4 16:22:50.085434 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 16:22:50.085447 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 16:22:50.085465 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 16:22:50.085480 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 16:22:50.085498 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 16:22:50.085516 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 16:22:50.085529 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 16:22:50.085545 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 16:22:50.085558 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 16:22:50.085572 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 16:22:50.085589 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 16:22:50.085603 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 16:22:50.085617 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 4 16:22:50.085630 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 16:22:50.085645 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 16:22:50.085659 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 16:22:50.085672 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 16:22:50.085693 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 16:22:50.085707 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 16:22:50.085720 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 16:22:50.085733 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 16:22:50.085746 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 16:22:50.085760 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 16:22:50.085772 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 16:22:50.085790 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 16:22:50.085803 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 16:22:50.085816 systemd[1]: Reached target slices.target - Slice Units. Sep 4 16:22:50.085829 systemd[1]: Reached target swap.target - Swaps. Sep 4 16:22:50.085842 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 16:22:50.085855 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 16:22:50.085868 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 4 16:22:50.085886 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 4 16:22:50.085900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 16:22:50.085912 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 16:22:50.085925 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 16:22:50.085939 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 16:22:50.085952 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 16:22:50.085965 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 16:22:50.085983 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 16:22:50.085996 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 16:22:50.086009 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 16:22:50.086022 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 16:22:50.086036 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 16:22:50.086049 systemd[1]: Reached target machines.target - Containers. Sep 4 16:22:50.086062 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 16:22:50.086080 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 16:22:50.086094 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 16:22:50.086107 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 16:22:50.086128 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 16:22:50.086141 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 4 16:22:50.086154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 16:22:50.086167 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 16:22:50.086187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 16:22:50.086200 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 16:22:50.086214 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 16:22:50.086228 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 16:22:50.086240 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 16:22:50.086253 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 16:22:50.086270 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 16:22:50.086283 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 16:22:50.086297 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 16:22:50.086310 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 16:22:50.086365 systemd-journald[1197]: Collecting audit messages is disabled. Sep 4 16:22:50.086390 systemd-journald[1197]: Journal started Sep 4 16:22:50.086415 systemd-journald[1197]: Runtime Journal (/run/log/journal/c410afbeb05b4b0a99453cc6c9403341) is 6M, max 48.5M, 42.4M free. Sep 4 16:22:49.760957 systemd[1]: Queued start job for default target multi-user.target. Sep 4 16:22:49.789380 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 16:22:49.790044 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 4 16:22:50.089438 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 16:22:50.094822 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 4 16:22:50.096549 kernel: fuse: init (API version 7.41) Sep 4 16:22:50.096643 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 16:22:50.100344 kernel: loop: module loaded Sep 4 16:22:50.101727 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 16:22:50.101760 systemd[1]: Stopped verity-setup.service. Sep 4 16:22:50.102595 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 16:22:50.109700 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 16:22:50.111194 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 16:22:50.112439 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 16:22:50.113764 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 16:22:50.115007 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 16:22:50.116214 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 16:22:50.117512 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 16:22:50.118890 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 16:22:50.120513 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 16:22:50.120734 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 16:22:50.122330 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 16:22:50.122545 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 16:22:50.124134 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 4 16:22:50.124354 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 16:22:50.126101 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 16:22:50.126341 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 16:22:50.127835 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 16:22:50.128046 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 16:22:50.129560 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 16:22:50.138767 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 16:22:50.141756 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 16:22:50.143039 kernel: ACPI: bus type drm_connector registered Sep 4 16:22:50.143529 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 16:22:50.143768 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 16:22:50.145285 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 4 16:22:50.159831 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 16:22:50.161442 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Sep 4 16:22:50.163880 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 16:22:50.165992 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 16:22:50.167136 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 16:22:50.167164 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 16:22:50.169030 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Sep 4 16:22:50.172813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 16:22:50.178137 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 16:22:50.181313 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 16:22:50.182485 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 16:22:50.184083 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 16:22:50.185353 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 16:22:50.186440 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 16:22:50.188900 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 16:22:50.192027 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 16:22:50.198698 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 16:22:50.203457 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 16:22:50.205582 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 16:22:50.208186 systemd-journald[1197]: Time spent on flushing to /var/log/journal/c410afbeb05b4b0a99453cc6c9403341 is 14.341ms for 1066 entries. Sep 4 16:22:50.208186 systemd-journald[1197]: System Journal (/var/log/journal/c410afbeb05b4b0a99453cc6c9403341) is 8M, max 195.6M, 187.6M free. Sep 4 16:22:50.899970 systemd-journald[1197]: Received client request to flush runtime journal. 
Sep 4 16:22:50.900902 kernel: loop0: detected capacity change from 0 to 128016 Sep 4 16:22:50.901022 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 16:22:50.901056 kernel: loop1: detected capacity change from 0 to 111000 Sep 4 16:22:50.901087 kernel: loop2: detected capacity change from 0 to 224512 Sep 4 16:22:50.901130 kernel: loop3: detected capacity change from 0 to 128016 Sep 4 16:22:50.901162 kernel: loop4: detected capacity change from 0 to 111000 Sep 4 16:22:50.901191 kernel: loop5: detected capacity change from 0 to 224512 Sep 4 16:22:50.901220 zram_generator::config[1290]: No configuration found. Sep 4 16:22:50.222023 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Sep 4 16:22:50.222036 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Sep 4 16:22:50.223907 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 16:22:50.227799 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 16:22:50.398527 (sd-merge)[1259]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Sep 4 16:22:50.405002 (sd-merge)[1259]: Merged extensions into '/usr'. Sep 4 16:22:50.409840 systemd[1]: Reload requested from client PID 1230 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 16:22:50.409855 systemd[1]: Reloading... Sep 4 16:22:50.768478 systemd[1]: Reloading finished in 357 ms. Sep 4 16:22:50.800596 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 16:22:50.808757 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 16:22:50.814476 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 16:22:50.824498 systemd[1]: Starting ensure-sysext.service... Sep 4 16:22:50.826486 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Sep 4 16:22:50.897084 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)... Sep 4 16:22:50.897107 systemd[1]: Reloading... Sep 4 16:22:50.959354 zram_generator::config[1361]: No configuration found. Sep 4 16:22:51.136549 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 16:22:51.136870 systemd[1]: Reloading finished in 239 ms. Sep 4 16:22:51.164119 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 16:22:51.165839 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 16:22:51.167417 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 16:22:51.185961 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 4 16:22:51.196974 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 16:22:51.200572 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 16:22:51.200941 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 16:22:51.202413 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 16:22:51.205168 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 16:22:51.216344 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 16:22:51.217736 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 16:22:51.217952 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 4 16:22:51.218067 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 16:22:51.220144 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 16:22:51.220469 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 16:22:51.222503 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 16:22:51.222790 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 16:22:51.224870 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 16:22:51.225255 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 16:22:51.231999 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 16:22:51.232315 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 16:22:51.234469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 16:22:51.237463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 16:22:51.242698 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 16:22:51.244020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 16:22:51.244160 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 16:22:51.244274 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 16:22:51.246998 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 4 16:22:51.247542 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 16:22:51.249739 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 16:22:51.250010 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 16:22:51.256404 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 16:22:51.256880 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 16:22:51.263846 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 16:22:51.264246 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 16:22:51.266261 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 16:22:51.268943 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 16:22:51.274564 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 16:22:51.276290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 16:22:51.276387 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 16:22:51.276536 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 16:22:51.277502 systemd[1]: Finished ensure-sysext.service. Sep 4 16:22:51.278910 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 16:22:51.279337 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 16:22:51.281073 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 4 16:22:51.281449 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 16:22:51.284727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 16:22:51.285007 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 16:22:51.289262 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 16:22:51.289390 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 16:22:51.391936 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 16:22:51.396893 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 16:22:51.400005 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 16:22:51.402577 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 16:22:51.405044 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 16:22:51.419423 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 16:22:51.444278 systemd-tmpfiles[1426]: ACLs are not supported, ignoring. Sep 4 16:22:51.444301 systemd-tmpfiles[1426]: ACLs are not supported, ignoring. Sep 4 16:22:51.446571 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 4 16:22:51.446607 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 4 16:22:51.446862 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 16:22:51.447085 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Sep 4 16:22:51.447916 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 16:22:51.448159 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. Sep 4 16:22:51.448231 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. Sep 4 16:22:51.449708 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 16:22:51.453504 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 16:22:51.454726 systemd-tmpfiles[1427]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 16:22:51.454736 systemd-tmpfiles[1427]: Skipping /boot Sep 4 16:22:51.465444 systemd-tmpfiles[1427]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 16:22:51.465455 systemd-tmpfiles[1427]: Skipping /boot Sep 4 16:22:51.473781 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 16:22:51.505124 systemd-udevd[1432]: Using default interface naming scheme 'v257'. Sep 4 16:22:51.579518 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 16:22:51.581220 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 16:22:51.582393 systemd-resolved[1424]: Positive Trust Anchors: Sep 4 16:22:51.582407 systemd-resolved[1424]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 16:22:51.582411 systemd-resolved[1424]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Sep 4 16:22:51.582442 systemd-resolved[1424]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 16:22:51.586226 systemd-resolved[1424]: Defaulting to hostname 'linux'. Sep 4 16:22:51.587560 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 16:22:51.612493 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 16:22:51.615559 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 16:22:51.618063 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 16:22:51.624099 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 16:22:51.626617 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 16:22:51.633460 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 16:22:51.638660 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 16:22:51.641182 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 16:22:51.657577 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 16:22:51.689276 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 16:22:51.728961 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 16:22:51.751143 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 16:22:51.767395 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 16:22:51.834351 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 4 16:22:51.881372 kernel: ACPI: button: Power Button [PWRF] Sep 4 16:22:51.885952 systemd-networkd[1478]: lo: Link UP Sep 4 16:22:51.886250 systemd-networkd[1478]: lo: Gained carrier Sep 4 16:22:51.889169 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 16:22:51.889739 systemd-networkd[1478]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Sep 4 16:22:51.890348 systemd-networkd[1478]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 16:22:51.897406 systemd-networkd[1478]: eth0: Link UP Sep 4 16:22:51.962947 systemd-networkd[1478]: eth0: Gained carrier Sep 4 16:22:51.963176 systemd-networkd[1478]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Sep 4 16:22:51.964354 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 4 16:22:51.971012 systemd[1]: Reached target network.target - Network. Sep 4 16:22:51.978886 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 4 16:22:51.982914 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 4 16:22:51.974484 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 4 16:22:51.984758 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 16:22:51.987636 systemd-networkd[1478]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 16:22:51.988672 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. 
Sep 4 16:22:51.989993 augenrules[1520]: No rules Sep 4 16:22:53.701855 systemd-resolved[1424]: Clock change detected. Flushing caches. Sep 4 16:22:53.702286 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 16:22:53.702575 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 16:22:53.702999 systemd-timesyncd[1425]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 16:22:53.703133 systemd-timesyncd[1425]: Initial clock synchronization to Thu 2025-09-04 16:22:53.701808 UTC. Sep 4 16:22:53.725141 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 16:22:53.727632 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 16:22:53.741396 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 16:22:53.766335 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 16:22:53.772599 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 16:22:53.795115 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 16:22:53.848197 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 16:22:53.848577 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 16:22:53.910166 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 4 16:22:53.951366 kernel: kvm_amd: TSC scaling supported Sep 4 16:22:53.951484 kernel: kvm_amd: Nested Virtualization enabled Sep 4 16:22:53.951516 kernel: kvm_amd: Nested Paging enabled Sep 4 16:22:53.951529 kernel: kvm_amd: LBR virtualization supported Sep 4 16:22:53.951542 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 16:22:53.951558 kernel: kvm_amd: Virtual GIF supported Sep 4 16:22:53.968041 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 16:22:53.985903 kernel: EDAC MC: Ver: 3.0.0 Sep 4 16:22:54.154779 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 16:22:54.344753 ldconfig[1456]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 16:22:54.816212 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 16:22:54.819169 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 16:22:54.844510 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 16:22:54.845894 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 16:22:54.847191 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 16:22:54.848633 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 16:22:54.850063 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 4 16:22:54.851594 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 16:22:54.852958 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 16:22:54.854405 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Sep 4 16:22:54.855838 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 16:22:54.855874 systemd[1]: Reached target paths.target - Path Units. Sep 4 16:22:54.856956 systemd[1]: Reached target timers.target - Timer Units. Sep 4 16:22:54.858987 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 16:22:54.861663 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 16:22:54.865039 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 16:22:54.866451 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 16:22:54.867698 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 16:22:54.871646 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 16:22:54.873184 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 16:22:54.875009 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 16:22:54.876831 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 16:22:54.877797 systemd[1]: Reached target basic.target - Basic System. Sep 4 16:22:54.878848 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 16:22:54.878893 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 16:22:54.879970 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 16:22:54.882176 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 16:22:54.885029 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 16:22:54.887621 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Sep 4 16:22:54.900127 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 16:22:54.901337 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 16:22:54.902089 jq[1558]: false Sep 4 16:22:54.902747 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 4 16:22:54.906043 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 16:22:54.908319 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 16:22:54.912229 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 16:22:54.914533 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing passwd entry cache Sep 4 16:22:54.914543 oslogin_cache_refresh[1560]: Refreshing passwd entry cache Sep 4 16:22:54.915088 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 16:22:54.920464 extend-filesystems[1559]: Found /dev/vda6 Sep 4 16:22:54.921495 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 16:22:54.922956 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 16:22:54.925813 extend-filesystems[1559]: Found /dev/vda9 Sep 4 16:22:54.928610 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting users, quitting Sep 4 16:22:54.928610 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 16:22:54.928610 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing group entry cache Sep 4 16:22:54.926920 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 4 16:22:54.925855 oslogin_cache_refresh[1560]: Failure getting users, quitting Sep 4 16:22:54.927661 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 16:22:54.925887 oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 16:22:54.925935 oslogin_cache_refresh[1560]: Refreshing group entry cache Sep 4 16:22:54.930464 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 16:22:54.936907 extend-filesystems[1559]: Checking size of /dev/vda9 Sep 4 16:22:54.934071 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 16:22:54.937926 oslogin_cache_refresh[1560]: Failure getting groups, quitting Sep 4 16:22:54.941238 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting groups, quitting Sep 4 16:22:54.941238 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 16:22:54.935740 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 16:22:54.937940 oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 16:22:54.936013 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 16:22:54.936340 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 16:22:54.936588 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 16:22:54.943115 jq[1576]: true Sep 4 16:22:54.941401 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 4 16:22:54.941703 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 4 16:22:54.944638 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 16:22:54.947186 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 4 16:22:54.956687 update_engine[1575]: I20250904 16:22:54.956616 1575 main.cc:92] Flatcar Update Engine starting Sep 4 16:22:55.075858 extend-filesystems[1559]: Resized partition /dev/vda9 Sep 4 16:22:55.076183 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 16:22:55.078509 jq[1591]: true Sep 4 16:22:55.210974 extend-filesystems[1622]: resize2fs 1.47.2 (1-Jan-2025) Sep 4 16:22:55.229994 systemd-logind[1574]: Watching system buttons on /dev/input/event2 (Power Button) Sep 4 16:22:55.230031 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 16:22:55.232353 tar[1583]: linux-amd64/LICENSE Sep 4 16:22:55.232353 tar[1583]: linux-amd64/helm Sep 4 16:22:55.235312 systemd-logind[1574]: New seat seat0. Sep 4 16:22:55.245000 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 16:22:55.248241 dbus-daemon[1556]: [system] SELinux support is enabled Sep 4 16:22:55.248742 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 16:22:55.256016 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 16:22:55.256116 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 16:22:55.257387 update_engine[1575]: I20250904 16:22:55.257223 1575 update_check_scheduler.cc:74] Next update check in 6m13s Sep 4 16:22:55.257513 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 16:22:55.257535 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 4 16:22:55.259941 systemd[1]: Started update-engine.service - Update Engine. Sep 4 16:22:55.263938 dbus-daemon[1556]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 16:22:55.264204 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 16:22:55.378059 systemd-networkd[1478]: eth0: Gained IPv6LL Sep 4 16:22:55.381354 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 16:22:55.383241 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 16:22:55.386416 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 16:22:55.454611 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 16:22:55.754993 locksmithd[1624]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 16:22:55.798161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:22:55.802388 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 16:22:55.960970 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 16:22:55.977466 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 16:22:55.977609 tar[1583]: linux-amd64/README.md Sep 4 16:22:55.977758 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 16:22:55.980945 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 16:22:56.000138 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 16:22:56.019427 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 16:22:56.043064 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 16:22:56.053490 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Sep 4 16:22:56.055567 systemd[1]: Started sshd@0-10.0.0.77:22-10.0.0.1:45604.service - OpenSSH per-connection server daemon (10.0.0.1:45604). Sep 4 16:22:56.074528 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 16:22:56.074841 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 16:22:56.077815 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 16:22:56.095052 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 16:22:56.096942 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 16:22:56.102123 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 16:22:56.106218 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 16:22:56.602519 containerd[1592]: time="2025-09-04T16:22:56Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 4 16:22:56.602901 sshd[1659]: Connection closed by authenticating user core 10.0.0.1 port 45604 [preauth] Sep 4 16:22:56.117073 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 16:22:56.442008 systemd[1]: sshd@0-10.0.0.77:22-10.0.0.1:45604.service: Deactivated successfully. Sep 4 16:22:56.603783 containerd[1592]: time="2025-09-04T16:22:56.603711768Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 4 16:22:56.604708 extend-filesystems[1622]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 16:22:56.604708 extend-filesystems[1622]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 16:22:56.604708 extend-filesystems[1622]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 16:22:56.613676 extend-filesystems[1559]: Resized filesystem in /dev/vda9 Sep 4 16:22:56.607581 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Sep 4 16:22:56.615855 bash[1621]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 16:22:56.607938 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 16:22:56.618276 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 16:22:56.620476 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 16:22:56.623443 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 16:22:56.629557 containerd[1592]: time="2025-09-04T16:22:56.629477428Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.869µs"
Sep 4 16:22:56.629557 containerd[1592]: time="2025-09-04T16:22:56.629541598Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 4 16:22:56.629661 containerd[1592]: time="2025-09-04T16:22:56.629587063Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 4 16:22:56.629929 containerd[1592]: time="2025-09-04T16:22:56.629892967Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 4 16:22:56.629929 containerd[1592]: time="2025-09-04T16:22:56.629921921Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 4 16:22:56.629979 containerd[1592]: time="2025-09-04T16:22:56.629965152Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 16:22:56.630481 containerd[1592]: time="2025-09-04T16:22:56.630442718Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 16:22:56.630481 containerd[1592]: time="2025-09-04T16:22:56.630469588Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 16:22:56.630874 containerd[1592]: time="2025-09-04T16:22:56.630827519Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 16:22:56.630914 containerd[1592]: time="2025-09-04T16:22:56.630856544Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 16:22:56.630914 containerd[1592]: time="2025-09-04T16:22:56.630897621Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 16:22:56.630914 containerd[1592]: time="2025-09-04T16:22:56.630910926Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 4 16:22:56.631092 containerd[1592]: time="2025-09-04T16:22:56.631056319Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 4 16:22:56.631439 containerd[1592]: time="2025-09-04T16:22:56.631403750Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 16:22:56.631475 containerd[1592]: time="2025-09-04T16:22:56.631454034Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 16:22:56.631475 containerd[1592]: time="2025-09-04T16:22:56.631469013Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 4 16:22:56.631546 containerd[1592]: time="2025-09-04T16:22:56.631516431Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups
Sep 4 16:22:56.631958 containerd[1592]: time="2025-09-04T16:22:56.631929406Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 4 16:22:56.632056 containerd[1592]: time="2025-09-04T16:22:56.632033110Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 16:22:56.638407 containerd[1592]: time="2025-09-04T16:22:56.638341110Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 4 16:22:56.638491 containerd[1592]: time="2025-09-04T16:22:56.638465243Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 4 16:22:56.638491 containerd[1592]: time="2025-09-04T16:22:56.638483988Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 4 16:22:56.638557 containerd[1592]: time="2025-09-04T16:22:56.638498876Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 4 16:22:56.638557 containerd[1592]: time="2025-09-04T16:22:56.638514165Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 4 16:22:56.638557 containerd[1592]: time="2025-09-04T16:22:56.638533922Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 4 16:22:56.638557 containerd[1592]: time="2025-09-04T16:22:56.638549972Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 4 16:22:56.638624 containerd[1592]: time="2025-09-04T16:22:56.638564499Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 4 16:22:56.638624 containerd[1592]: time="2025-09-04T16:22:56.638578976Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 4 16:22:56.638624 containerd[1592]: time="2025-09-04T16:22:56.638589276Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 4 16:22:56.638624 containerd[1592]: time="2025-09-04T16:22:56.638598693Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 4 16:22:56.638624 containerd[1592]: time="2025-09-04T16:22:56.638612699Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 4 16:22:56.638876 containerd[1592]: time="2025-09-04T16:22:56.638832492Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 4 16:22:56.638900 containerd[1592]: time="2025-09-04T16:22:56.638881984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 4 16:22:56.638900 containerd[1592]: time="2025-09-04T16:22:56.638897734Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 4 16:22:56.638948 containerd[1592]: time="2025-09-04T16:22:56.638911871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 4 16:22:56.638948 containerd[1592]: time="2025-09-04T16:22:56.638924945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 4 16:22:56.638948 containerd[1592]: time="2025-09-04T16:22:56.638935565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 4 16:22:56.638948 containerd[1592]: time="2025-09-04T16:22:56.638947838Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 4 16:22:56.639036 containerd[1592]: time="2025-09-04T16:22:56.638959069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 4 16:22:56.639036 containerd[1592]: time="2025-09-04T16:22:56.638973035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 4 16:22:56.639036 containerd[1592]: time="2025-09-04T16:22:56.638991820Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 4 16:22:56.639036 containerd[1592]: time="2025-09-04T16:22:56.639003633Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 4 16:22:56.639112 containerd[1592]: time="2025-09-04T16:22:56.639092078Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 4 16:22:56.639112 containerd[1592]: time="2025-09-04T16:22:56.639106796Z" level=info msg="Start snapshots syncer"
Sep 4 16:22:56.639148 containerd[1592]: time="2025-09-04T16:22:56.639136602Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 4 16:22:56.639449 containerd[1592]: time="2025-09-04T16:22:56.639397421Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 4 16:22:56.639571 containerd[1592]: time="2025-09-04T16:22:56.639460499Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 4 16:22:56.642187 containerd[1592]: time="2025-09-04T16:22:56.642151165Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 4 16:22:56.642343 containerd[1592]: time="2025-09-04T16:22:56.642310955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 4 16:22:56.642374 containerd[1592]: time="2025-09-04T16:22:56.642347744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 4 16:22:56.642374 containerd[1592]: time="2025-09-04T16:22:56.642361450Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 4 16:22:56.642374 containerd[1592]: time="2025-09-04T16:22:56.642371148Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 4 16:22:56.642426 containerd[1592]: time="2025-09-04T16:22:56.642385004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 4 16:22:56.642426 containerd[1592]: time="2025-09-04T16:22:56.642397067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 4 16:22:56.642426 containerd[1592]: time="2025-09-04T16:22:56.642408468Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 4 16:22:56.642493 containerd[1592]: time="2025-09-04T16:22:56.642431772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 4 16:22:56.642493 containerd[1592]: time="2025-09-04T16:22:56.642443544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 4 16:22:56.642493 containerd[1592]: time="2025-09-04T16:22:56.642454174Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 4 16:22:56.642557 containerd[1592]: time="2025-09-04T16:22:56.642498537Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 16:22:56.642557 containerd[1592]: time="2025-09-04T16:22:56.642512172Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 4 16:22:56.642557 containerd[1592]: time="2025-09-04T16:22:56.642530156Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 16:22:56.642557 containerd[1592]: time="2025-09-04T16:22:56.642541928Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 4 16:22:56.642557 containerd[1592]: time="2025-09-04T16:22:56.642549773Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 4 16:22:56.642557 containerd[1592]: time="2025-09-04T16:22:56.642559261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 4 16:22:56.642668 containerd[1592]: time="2025-09-04T16:22:56.642573487Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 4 16:22:56.642668 containerd[1592]: time="2025-09-04T16:22:56.642596601Z" level=info msg="runtime interface created"
Sep 4 16:22:56.642668 containerd[1592]: time="2025-09-04T16:22:56.642601951Z" level=info msg="created NRI interface"
Sep 4 16:22:56.642668 containerd[1592]: time="2025-09-04T16:22:56.642610377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 4 16:22:56.642668 containerd[1592]: time="2025-09-04T16:22:56.642621978Z" level=info msg="Connect containerd service"
Sep 4 16:22:56.642668 containerd[1592]: time="2025-09-04T16:22:56.642644601Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 16:22:56.643594 containerd[1592]: time="2025-09-04T16:22:56.643562713Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 16:22:56.871457 containerd[1592]: time="2025-09-04T16:22:56.871331450Z" level=info msg="Start subscribing containerd event"
Sep 4 16:22:56.871579 containerd[1592]: time="2025-09-04T16:22:56.871427811Z" level=info msg="Start recovering state"
Sep 4 16:22:56.871602 containerd[1592]: time="2025-09-04T16:22:56.871571801Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 16:22:56.871641 containerd[1592]: time="2025-09-04T16:22:56.871607648Z" level=info msg="Start event monitor"
Sep 4 16:22:56.871641 containerd[1592]: time="2025-09-04T16:22:56.871630942Z" level=info msg="Start cni network conf syncer for default"
Sep 4 16:22:56.871678 containerd[1592]: time="2025-09-04T16:22:56.871636132Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 16:22:56.871697 containerd[1592]: time="2025-09-04T16:22:56.871654676Z" level=info msg="Start streaming server"
Sep 4 16:22:56.871716 containerd[1592]: time="2025-09-04T16:22:56.871705371Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 4 16:22:56.871735 containerd[1592]: time="2025-09-04T16:22:56.871714819Z" level=info msg="runtime interface starting up..."
Sep 4 16:22:56.871735 containerd[1592]: time="2025-09-04T16:22:56.871722814Z" level=info msg="starting plugins..."
Sep 4 16:22:56.871777 containerd[1592]: time="2025-09-04T16:22:56.871747751Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 4 16:22:56.872160 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 16:22:56.872331 containerd[1592]: time="2025-09-04T16:22:56.872313432Z" level=info msg="containerd successfully booted in 0.568614s"
Sep 4 16:22:57.916692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 16:22:57.918288 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 16:22:57.919505 systemd[1]: Startup finished in 3.283s (kernel) + 8.363s (initrd) + 7.290s (userspace) = 18.937s.
Sep 4 16:22:57.941234 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 16:22:58.546655 kubelet[1704]: E0904 16:22:58.546577 1704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 16:22:58.550459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 16:22:58.550670 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 16:22:58.551056 systemd[1]: kubelet.service: Consumed 1.789s CPU time, 265.9M memory peak.
Sep 4 16:23:06.452330 systemd[1]: Started sshd@1-10.0.0.77:22-10.0.0.1:41024.service - OpenSSH per-connection server daemon (10.0.0.1:41024).
Sep 4 16:23:06.503581 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 41024 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:23:06.505464 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:23:06.511968 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 16:23:06.513036 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 16:23:06.518936 systemd-logind[1574]: New session 1 of user core.
Sep 4 16:23:06.534051 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 16:23:06.537018 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 16:23:06.559011 (systemd)[1722]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 16:23:06.561528 systemd-logind[1574]: New session c1 of user core.
Sep 4 16:23:06.695448 systemd[1722]: Queued start job for default target default.target.
Sep 4 16:23:06.712171 systemd[1722]: Created slice app.slice - User Application Slice.
Sep 4 16:23:06.712198 systemd[1722]: Reached target paths.target - Paths.
Sep 4 16:23:06.712249 systemd[1722]: Reached target timers.target - Timers.
Sep 4 16:23:06.713728 systemd[1722]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 16:23:06.724046 systemd[1722]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 16:23:06.724169 systemd[1722]: Reached target sockets.target - Sockets.
Sep 4 16:23:06.724208 systemd[1722]: Reached target basic.target - Basic System.
Sep 4 16:23:06.724256 systemd[1722]: Reached target default.target - Main User Target.
Sep 4 16:23:06.724288 systemd[1722]: Startup finished in 156ms.
Sep 4 16:23:06.724582 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 16:23:06.726300 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 16:23:06.788704 systemd[1]: Started sshd@2-10.0.0.77:22-10.0.0.1:41038.service - OpenSSH per-connection server daemon (10.0.0.1:41038).
Sep 4 16:23:06.850680 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 41038 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:23:06.852113 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:23:06.856320 systemd-logind[1574]: New session 2 of user core.
Sep 4 16:23:06.866989 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 16:23:06.920106 sshd[1736]: Connection closed by 10.0.0.1 port 41038
Sep 4 16:23:06.920539 sshd-session[1733]: pam_unix(sshd:session): session closed for user core
Sep 4 16:23:06.929453 systemd[1]: sshd@2-10.0.0.77:22-10.0.0.1:41038.service: Deactivated successfully.
Sep 4 16:23:06.931208 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 16:23:06.931918 systemd-logind[1574]: Session 2 logged out. Waiting for processes to exit.
Sep 4 16:23:06.934485 systemd[1]: Started sshd@3-10.0.0.77:22-10.0.0.1:41046.service - OpenSSH per-connection server daemon (10.0.0.1:41046).
Sep 4 16:23:06.935253 systemd-logind[1574]: Removed session 2.
Sep 4 16:23:06.987241 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 41046 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:23:06.988700 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:23:06.993413 systemd-logind[1574]: New session 3 of user core.
Sep 4 16:23:07.007019 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 16:23:07.056827 sshd[1745]: Connection closed by 10.0.0.1 port 41046
Sep 4 16:23:07.057213 sshd-session[1742]: pam_unix(sshd:session): session closed for user core
Sep 4 16:23:07.076545 systemd[1]: sshd@3-10.0.0.77:22-10.0.0.1:41046.service: Deactivated successfully.
Sep 4 16:23:07.078335 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 16:23:07.079044 systemd-logind[1574]: Session 3 logged out. Waiting for processes to exit.
Sep 4 16:23:07.081698 systemd[1]: Started sshd@4-10.0.0.77:22-10.0.0.1:41060.service - OpenSSH per-connection server daemon (10.0.0.1:41060).
Sep 4 16:23:07.082323 systemd-logind[1574]: Removed session 3.
Sep 4 16:23:07.142489 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 41060 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:23:07.143979 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:23:07.148473 systemd-logind[1574]: New session 4 of user core.
Sep 4 16:23:07.157992 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 16:23:07.211487 sshd[1754]: Connection closed by 10.0.0.1 port 41060
Sep 4 16:23:07.211913 sshd-session[1751]: pam_unix(sshd:session): session closed for user core
Sep 4 16:23:07.224531 systemd[1]: sshd@4-10.0.0.77:22-10.0.0.1:41060.service: Deactivated successfully.
Sep 4 16:23:07.226398 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 16:23:07.227125 systemd-logind[1574]: Session 4 logged out. Waiting for processes to exit.
Sep 4 16:23:07.229776 systemd[1]: Started sshd@5-10.0.0.77:22-10.0.0.1:41072.service - OpenSSH per-connection server daemon (10.0.0.1:41072).
Sep 4 16:23:07.230375 systemd-logind[1574]: Removed session 4.
Sep 4 16:23:07.285414 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 41072 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:23:07.286736 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:23:07.291086 systemd-logind[1574]: New session 5 of user core.
Sep 4 16:23:07.301000 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 16:23:07.359472 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 16:23:07.359770 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 16:23:07.376824 sudo[1764]: pam_unix(sudo:session): session closed for user root
Sep 4 16:23:07.378680 sshd[1763]: Connection closed by 10.0.0.1 port 41072
Sep 4 16:23:07.379068 sshd-session[1760]: pam_unix(sshd:session): session closed for user core
Sep 4 16:23:07.395727 systemd[1]: sshd@5-10.0.0.77:22-10.0.0.1:41072.service: Deactivated successfully.
Sep 4 16:23:07.397650 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 16:23:07.398430 systemd-logind[1574]: Session 5 logged out. Waiting for processes to exit.
Sep 4 16:23:07.401424 systemd[1]: Started sshd@6-10.0.0.77:22-10.0.0.1:41074.service - OpenSSH per-connection server daemon (10.0.0.1:41074).
Sep 4 16:23:07.402056 systemd-logind[1574]: Removed session 5.
Sep 4 16:23:07.457892 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 41074 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:23:07.459569 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:23:07.463921 systemd-logind[1574]: New session 6 of user core.
Sep 4 16:23:07.481991 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 16:23:07.536247 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 16:23:07.536558 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 16:23:07.542986 sudo[1775]: pam_unix(sudo:session): session closed for user root
Sep 4 16:23:07.550616 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 4 16:23:07.550932 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 16:23:07.560982 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 16:23:07.605716 augenrules[1797]: No rules
Sep 4 16:23:07.607449 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 16:23:07.607739 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 16:23:07.608941 sudo[1774]: pam_unix(sudo:session): session closed for user root
Sep 4 16:23:07.610651 sshd[1773]: Connection closed by 10.0.0.1 port 41074
Sep 4 16:23:07.611161 sshd-session[1770]: pam_unix(sshd:session): session closed for user core
Sep 4 16:23:07.620485 systemd[1]: sshd@6-10.0.0.77:22-10.0.0.1:41074.service: Deactivated successfully.
Sep 4 16:23:07.622272 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 16:23:07.622951 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit.
Sep 4 16:23:07.625464 systemd[1]: Started sshd@7-10.0.0.77:22-10.0.0.1:41076.service - OpenSSH per-connection server daemon (10.0.0.1:41076).
Sep 4 16:23:07.626033 systemd-logind[1574]: Removed session 6.
Sep 4 16:23:07.678934 sshd[1807]: Accepted publickey for core from 10.0.0.1 port 41076 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:23:07.680160 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:23:07.684534 systemd-logind[1574]: New session 7 of user core.
Sep 4 16:23:07.699968 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 16:23:07.752945 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 16:23:07.753244 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 16:23:08.046815 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 16:23:08.061135 (dockerd)[1833]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 16:23:08.276120 dockerd[1833]: time="2025-09-04T16:23:08.276047694Z" level=info msg="Starting up"
Sep 4 16:23:08.276743 dockerd[1833]: time="2025-09-04T16:23:08.276709325Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 4 16:23:08.289421 dockerd[1833]: time="2025-09-04T16:23:08.289384916Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 4 16:23:08.340922 dockerd[1833]: time="2025-09-04T16:23:08.340794047Z" level=info msg="Loading containers: start."
Sep 4 16:23:08.350884 kernel: Initializing XFRM netlink socket
Sep 4 16:23:08.557562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 16:23:08.559344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 16:23:08.589121 systemd-networkd[1478]: docker0: Link UP
Sep 4 16:23:08.594200 dockerd[1833]: time="2025-09-04T16:23:08.594104484Z" level=info msg="Loading containers: done."
Sep 4 16:23:08.607437 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1265661858-merged.mount: Deactivated successfully.
Sep 4 16:23:08.767813 dockerd[1833]: time="2025-09-04T16:23:08.767735254Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 16:23:08.768028 dockerd[1833]: time="2025-09-04T16:23:08.767837976Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 4 16:23:08.768028 dockerd[1833]: time="2025-09-04T16:23:08.767991254Z" level=info msg="Initializing buildkit"
Sep 4 16:23:08.838915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 16:23:08.842783 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 16:23:08.960789 kubelet[2026]: E0904 16:23:08.960671 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 16:23:08.966822 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 16:23:08.967026 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 16:23:08.967402 systemd[1]: kubelet.service: Consumed 281ms CPU time, 111.1M memory peak.
Sep 4 16:23:08.987221 dockerd[1833]: time="2025-09-04T16:23:08.987187866Z" level=info msg="Completed buildkit initialization"
Sep 4 16:23:08.993031 dockerd[1833]: time="2025-09-04T16:23:08.992996569Z" level=info msg="Daemon has completed initialization"
Sep 4 16:23:08.993101 dockerd[1833]: time="2025-09-04T16:23:08.993065859Z" level=info msg="API listen on /run/docker.sock"
Sep 4 16:23:08.993274 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 16:23:09.763536 containerd[1592]: time="2025-09-04T16:23:09.763486974Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 4 16:23:10.385731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount472807415.mount: Deactivated successfully.
Sep 4 16:23:11.587836 containerd[1592]: time="2025-09-04T16:23:11.587777199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:11.588771 containerd[1592]: time="2025-09-04T16:23:11.588723223Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687"
Sep 4 16:23:11.589637 containerd[1592]: time="2025-09-04T16:23:11.589591992Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:11.593397 containerd[1592]: time="2025-09-04T16:23:11.593366471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:11.594922 containerd[1592]: time="2025-09-04T16:23:11.594847939Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 1.831316723s"
Sep 4 16:23:11.594983 containerd[1592]: time="2025-09-04T16:23:11.594919554Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\""
Sep 4 16:23:11.598409 containerd[1592]: time="2025-09-04T16:23:11.598252955Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 4 16:23:13.075001 containerd[1592]: time="2025-09-04T16:23:13.074938798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:13.075656 containerd[1592]: time="2025-09-04T16:23:13.075605608Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128"
Sep 4 16:23:13.076884 containerd[1592]: time="2025-09-04T16:23:13.076824755Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:13.079164 containerd[1592]: time="2025-09-04T16:23:13.079126792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:13.079941 containerd[1592]: time="2025-09-04T16:23:13.079905142Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.48160608s"
Sep 4 16:23:13.079977 containerd[1592]: time="2025-09-04T16:23:13.079940027Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\""
Sep 4 16:23:13.080488 containerd[1592]: time="2025-09-04T16:23:13.080456095Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 4 16:23:14.900361 containerd[1592]: time="2025-09-04T16:23:14.900294101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:14.901159 containerd[1592]: time="2025-09-04T16:23:14.901095714Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036"
Sep 4 16:23:14.902468 containerd[1592]: time="2025-09-04T16:23:14.902422372Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:14.904728 containerd[1592]: time="2025-09-04T16:23:14.904696437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:14.905596 containerd[1592]: time="2025-09-04T16:23:14.905560247Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.825071661s"
Sep 4 16:23:14.905655 containerd[1592]: time="2025-09-04T16:23:14.905597306Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\""
Sep 4 16:23:14.906090 containerd[1592]: time="2025-09-04T16:23:14.906071235Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 4 16:23:16.249927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount575020513.mount: Deactivated successfully.
Sep 4 16:23:16.636670 containerd[1592]: time="2025-09-04T16:23:16.636608580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:16.637399 containerd[1592]: time="2025-09-04T16:23:16.637367944Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170"
Sep 4 16:23:16.638469 containerd[1592]: time="2025-09-04T16:23:16.638437440Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:16.640584 containerd[1592]: time="2025-09-04T16:23:16.640533681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:16.641049 containerd[1592]: time="2025-09-04T16:23:16.641018530Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.734919003s"
Sep 4 16:23:16.641088 containerd[1592]: time="2025-09-04T16:23:16.641055830Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\""
Sep 4 16:23:16.641581 containerd[1592]: time="2025-09-04T16:23:16.641528206Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 4 16:23:17.173829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3237219512.mount: Deactivated successfully.
Sep 4 16:23:18.432689 containerd[1592]: time="2025-09-04T16:23:18.432586800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:18.433797 containerd[1592]: time="2025-09-04T16:23:18.433550738Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 4 16:23:18.437960 containerd[1592]: time="2025-09-04T16:23:18.437816608Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:18.441013 containerd[1592]: time="2025-09-04T16:23:18.440944865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:18.442183 containerd[1592]: time="2025-09-04T16:23:18.442120139Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.800547749s"
Sep 4 16:23:18.442251 containerd[1592]: time="2025-09-04T16:23:18.442185111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 4 16:23:18.442894 containerd[1592]: time="2025-09-04T16:23:18.442709394Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 16:23:18.959984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037829552.mount: Deactivated successfully.
Sep 4 16:23:18.965487 containerd[1592]: time="2025-09-04T16:23:18.965437536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 16:23:18.966160 containerd[1592]: time="2025-09-04T16:23:18.966100499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 4 16:23:18.967486 containerd[1592]: time="2025-09-04T16:23:18.967448888Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 16:23:18.969482 containerd[1592]: time="2025-09-04T16:23:18.969442015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 16:23:18.969883 containerd[1592]: time="2025-09-04T16:23:18.969835183Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 527.096694ms"
Sep 4 16:23:18.969883 containerd[1592]: time="2025-09-04T16:23:18.969862574Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 4 16:23:18.970346 containerd[1592]: time="2025-09-04T16:23:18.970317668Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 4 16:23:19.014797 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 16:23:19.016881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 16:23:19.228300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 16:23:19.251225 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 16:23:19.307819 kubelet[2204]: E0904 16:23:19.307740 2204 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 16:23:19.312233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 16:23:19.312432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 16:23:19.312825 systemd[1]: kubelet.service: Consumed 232ms CPU time, 110.5M memory peak.
Sep 4 16:23:19.848074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount942947202.mount: Deactivated successfully.
Sep 4 16:23:22.050148 containerd[1592]: time="2025-09-04T16:23:22.050060084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:22.051347 containerd[1592]: time="2025-09-04T16:23:22.051282777Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 4 16:23:22.055223 containerd[1592]: time="2025-09-04T16:23:22.055160599Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:22.057733 containerd[1592]: time="2025-09-04T16:23:22.057698088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:22.059154 containerd[1592]: time="2025-09-04T16:23:22.059124623Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.088780796s"
Sep 4 16:23:22.059154 containerd[1592]: time="2025-09-04T16:23:22.059151975Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 4 16:23:24.212309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 16:23:24.212559 systemd[1]: kubelet.service: Consumed 232ms CPU time, 110.5M memory peak.
Sep 4 16:23:24.215938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 16:23:24.244901 systemd[1]: Reload requested from client PID 2297 ('systemctl') (unit session-7.scope)...
Sep 4 16:23:24.244916 systemd[1]: Reloading...
Sep 4 16:23:24.327893 zram_generator::config[2339]: No configuration found.
Sep 4 16:23:24.667845 systemd[1]: Reloading finished in 422 ms.
Sep 4 16:23:24.737491 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 16:23:24.737590 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 16:23:24.737916 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 16:23:24.737958 systemd[1]: kubelet.service: Consumed 156ms CPU time, 98.3M memory peak.
Sep 4 16:23:24.739518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 16:23:24.991662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 16:23:25.004242 (kubelet)[2388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 16:23:25.046656 kubelet[2388]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 16:23:25.046656 kubelet[2388]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 16:23:25.046656 kubelet[2388]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 16:23:25.046656 kubelet[2388]: I0904 16:23:25.046084 2388 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 16:23:25.501212 kubelet[2388]: I0904 16:23:25.501160 2388 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 4 16:23:25.501212 kubelet[2388]: I0904 16:23:25.501189 2388 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 16:23:25.501482 kubelet[2388]: I0904 16:23:25.501458 2388 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 4 16:23:25.522185 kubelet[2388]: E0904 16:23:25.522142 2388 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:23:25.525383 kubelet[2388]: I0904 16:23:25.525334 2388 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 16:23:25.533917 kubelet[2388]: I0904 16:23:25.533103 2388 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 4 16:23:25.538154 kubelet[2388]: I0904 16:23:25.538118 2388 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 16:23:25.538496 kubelet[2388]: I0904 16:23:25.538455 2388 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 16:23:25.538689 kubelet[2388]: I0904 16:23:25.538487 2388 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 16:23:25.538800 kubelet[2388]: I0904 16:23:25.538715 2388 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 16:23:25.538800 kubelet[2388]: I0904 16:23:25.538725 2388 container_manager_linux.go:304] "Creating device plugin manager"
Sep 4 16:23:25.538938 kubelet[2388]: I0904 16:23:25.538919 2388 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 16:23:25.541466 kubelet[2388]: I0904 16:23:25.541437 2388 kubelet.go:446] "Attempting to sync node with API server"
Sep 4 16:23:25.541498 kubelet[2388]: I0904 16:23:25.541470 2388 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 16:23:25.541528 kubelet[2388]: I0904 16:23:25.541503 2388 kubelet.go:352] "Adding apiserver pod source"
Sep 4 16:23:25.541528 kubelet[2388]: I0904 16:23:25.541521 2388 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 16:23:25.544136 kubelet[2388]: I0904 16:23:25.544102 2388 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 4 16:23:25.544497 kubelet[2388]: I0904 16:23:25.544471 2388 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 16:23:25.546449 kubelet[2388]: W0904 16:23:25.545488 2388 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 16:23:25.546449 kubelet[2388]: W0904 16:23:25.546110 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Sep 4 16:23:25.546449 kubelet[2388]: E0904 16:23:25.546170 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:23:25.546449 kubelet[2388]: W0904 16:23:25.546368 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Sep 4 16:23:25.546449 kubelet[2388]: E0904 16:23:25.546411 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:23:25.547784 kubelet[2388]: I0904 16:23:25.547755 2388 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 16:23:25.547825 kubelet[2388]: I0904 16:23:25.547799 2388 server.go:1287] "Started kubelet"
Sep 4 16:23:25.550892 kubelet[2388]: I0904 16:23:25.549086 2388 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 16:23:25.550892 kubelet[2388]: I0904 16:23:25.549306 2388 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 16:23:25.550892 kubelet[2388]: I0904 16:23:25.549432 2388 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 16:23:25.550892 kubelet[2388]: I0904 16:23:25.549488 2388 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 16:23:25.550892 kubelet[2388]: I0904 16:23:25.550399 2388 server.go:479] "Adding debug handlers to kubelet server"
Sep 4 16:23:25.551484 kubelet[2388]: E0904 16:23:25.551469 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 16:23:25.551614 kubelet[2388]: I0904 16:23:25.551603 2388 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 16:23:25.551977 kubelet[2388]: I0904 16:23:25.551965 2388 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 16:23:25.552059 kubelet[2388]: I0904 16:23:25.551605 2388 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 16:23:25.552388 kubelet[2388]: I0904 16:23:25.552376 2388 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 16:23:25.552886 kubelet[2388]: E0904 16:23:25.551929 2388 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186220ecaef804c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 16:23:25.547775168 +0000 UTC m=+0.539506388,LastTimestamp:2025-09-04 16:23:25.547775168 +0000 UTC m=+0.539506388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 4 16:23:25.552886 kubelet[2388]: W0904 16:23:25.552837 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Sep 4 16:23:25.553003 kubelet[2388]: E0904 16:23:25.552915 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:23:25.553223 kubelet[2388]: I0904 16:23:25.553206 2388 factory.go:221] Registration of the systemd container factory successfully
Sep 4 16:23:25.553334 kubelet[2388]: I0904 16:23:25.553319 2388 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 16:23:25.553466 kubelet[2388]: E0904 16:23:25.553322 2388 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 16:23:25.553466 kubelet[2388]: E0904 16:23:25.553202 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="200ms"
Sep 4 16:23:25.554491 kubelet[2388]: I0904 16:23:25.554467 2388 factory.go:221] Registration of the containerd container factory successfully
Sep 4 16:23:25.569758 kubelet[2388]: I0904 16:23:25.569508 2388 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 16:23:25.569758 kubelet[2388]: I0904 16:23:25.569525 2388 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 16:23:25.569758 kubelet[2388]: I0904 16:23:25.569544 2388 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 16:23:25.571715 kubelet[2388]: I0904 16:23:25.571663 2388 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 16:23:25.572710 kubelet[2388]: I0904 16:23:25.572474 2388 policy_none.go:49] "None policy: Start"
Sep 4 16:23:25.572710 kubelet[2388]: I0904 16:23:25.572500 2388 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 16:23:25.572710 kubelet[2388]: I0904 16:23:25.572519 2388 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 16:23:25.573236 kubelet[2388]: I0904 16:23:25.573202 2388 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 16:23:25.573674 kubelet[2388]: I0904 16:23:25.573263 2388 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 4 16:23:25.573674 kubelet[2388]: I0904 16:23:25.573426 2388 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 16:23:25.573674 kubelet[2388]: I0904 16:23:25.573435 2388 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 4 16:23:25.573674 kubelet[2388]: E0904 16:23:25.573487 2388 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 16:23:25.576564 kubelet[2388]: W0904 16:23:25.576520 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Sep 4 16:23:25.576622 kubelet[2388]: E0904 16:23:25.576573 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Sep 4 16:23:25.579993 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 16:23:25.592647 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 16:23:25.595938 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 16:23:25.614916 kubelet[2388]: I0904 16:23:25.614893 2388 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 16:23:25.615149 kubelet[2388]: I0904 16:23:25.615131 2388 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 16:23:25.615181 kubelet[2388]: I0904 16:23:25.615147 2388 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 16:23:25.615600 kubelet[2388]: I0904 16:23:25.615385 2388 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 16:23:25.616251 kubelet[2388]: E0904 16:23:25.616231 2388 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 4 16:23:25.616371 kubelet[2388]: E0904 16:23:25.616289 2388 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 4 16:23:25.682268 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice.
Sep 4 16:23:25.696303 kubelet[2388]: E0904 16:23:25.696267 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 16:23:25.699230 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice.
Sep 4 16:23:25.707045 kubelet[2388]: E0904 16:23:25.707008 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 16:23:25.709629 systemd[1]: Created slice kubepods-burstable-pod763247a17418e1c1dfcac4529a6f872b.slice - libcontainer container kubepods-burstable-pod763247a17418e1c1dfcac4529a6f872b.slice.
Sep 4 16:23:25.711371 kubelet[2388]: E0904 16:23:25.711335 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 16:23:25.716128 kubelet[2388]: I0904 16:23:25.716091 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 16:23:25.716467 kubelet[2388]: E0904 16:23:25.716436 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Sep 4 16:23:25.755152 kubelet[2388]: I0904 16:23:25.753897 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/763247a17418e1c1dfcac4529a6f872b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"763247a17418e1c1dfcac4529a6f872b\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 16:23:25.755152 kubelet[2388]: I0904 16:23:25.753927 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/763247a17418e1c1dfcac4529a6f872b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"763247a17418e1c1dfcac4529a6f872b\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 16:23:25.755152 kubelet[2388]: I0904 16:23:25.753948 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 16:23:25.755152 kubelet[2388]: I0904 16:23:25.753964 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 16:23:25.755152 kubelet[2388]: I0904 16:23:25.753983 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 16:23:25.755287 kubelet[2388]: I0904 16:23:25.754000 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 16:23:25.755287 kubelet[2388]: I0904 16:23:25.754015 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 16:23:25.755287 kubelet[2388]: I0904 16:23:25.754032 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/763247a17418e1c1dfcac4529a6f872b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"763247a17418e1c1dfcac4529a6f872b\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 16:23:25.755287 kubelet[2388]: I0904 16:23:25.754047 2388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 16:23:25.755287 kubelet[2388]: E0904 16:23:25.754155 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="400ms"
Sep 4 16:23:25.917597 kubelet[2388]: I0904 16:23:25.917552 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 16:23:25.917997 kubelet[2388]: E0904 16:23:25.917963 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Sep 4 16:23:25.997812 kubelet[2388]: E0904 16:23:25.997773 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:23:25.998445 containerd[1592]: time="2025-09-04T16:23:25.998401784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}"
Sep 4 16:23:26.008072 kubelet[2388]: E0904 16:23:26.007629 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:23:26.008281 containerd[1592]: time="2025-09-04T16:23:26.008044322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}"
Sep 4 16:23:26.012686 kubelet[2388]: E0904 16:23:26.012644 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:23:26.013091 containerd[1592]: time="2025-09-04T16:23:26.013040248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:763247a17418e1c1dfcac4529a6f872b,Namespace:kube-system,Attempt:0,}"
Sep 4 16:23:26.033367 containerd[1592]: time="2025-09-04T16:23:26.033093466Z" level=info msg="connecting to shim f960789aacbd7040b925348d5f9689f4e9366b1051b46b66239583a82a05f6e9" address="unix:///run/containerd/s/16b977457f5142b4f2068e4ab7b36a929766f4dd19f771c0ed7e7b8c6805683f" namespace=k8s.io protocol=ttrpc version=3
Sep 4 16:23:26.049146 containerd[1592]: time="2025-09-04T16:23:26.049086178Z" level=info msg="connecting to shim 4204dff150652d2941f8294e43d4ea38497c6f02d2b3ec9dba1811d00fc15b33" address="unix:///run/containerd/s/8b69f181ff993e594d836c47966c62a123ef5475712a8df79d30b5848133aaa9" namespace=k8s.io protocol=ttrpc version=3
Sep 4 16:23:26.061210 containerd[1592]: time="2025-09-04T16:23:26.061167573Z" level=info msg="connecting to shim e6ebfc0dcccd968ce40804d39ef8e284cf7a0c41bdfe0b6d8ff766bf05c67d56" address="unix:///run/containerd/s/e6395fc363406ff44acc0f2b5809c1742200f824148dd0f9b9039609c6d09030" namespace=k8s.io protocol=ttrpc version=3
Sep 4 16:23:26.069021 systemd[1]: Started cri-containerd-f960789aacbd7040b925348d5f9689f4e9366b1051b46b66239583a82a05f6e9.scope - libcontainer container f960789aacbd7040b925348d5f9689f4e9366b1051b46b66239583a82a05f6e9.
Sep 4 16:23:26.155091 kubelet[2388]: E0904 16:23:26.155055 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="800ms" Sep 4 16:23:26.173005 systemd[1]: Started cri-containerd-4204dff150652d2941f8294e43d4ea38497c6f02d2b3ec9dba1811d00fc15b33.scope - libcontainer container 4204dff150652d2941f8294e43d4ea38497c6f02d2b3ec9dba1811d00fc15b33. Sep 4 16:23:26.178111 systemd[1]: Started cri-containerd-e6ebfc0dcccd968ce40804d39ef8e284cf7a0c41bdfe0b6d8ff766bf05c67d56.scope - libcontainer container e6ebfc0dcccd968ce40804d39ef8e284cf7a0c41bdfe0b6d8ff766bf05c67d56. Sep 4 16:23:26.199211 containerd[1592]: time="2025-09-04T16:23:26.199182153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"f960789aacbd7040b925348d5f9689f4e9366b1051b46b66239583a82a05f6e9\"" Sep 4 16:23:26.201178 kubelet[2388]: E0904 16:23:26.201148 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:26.204968 containerd[1592]: time="2025-09-04T16:23:26.204924193Z" level=info msg="CreateContainer within sandbox \"f960789aacbd7040b925348d5f9689f4e9366b1051b46b66239583a82a05f6e9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 16:23:26.220084 containerd[1592]: time="2025-09-04T16:23:26.220058781Z" level=info msg="Container ed5a47912a584b196a5203b2a70cf2aa2df4f5c340817f1b967943bcf1c89ab2: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:23:26.231611 containerd[1592]: time="2025-09-04T16:23:26.231377026Z" level=info msg="CreateContainer within sandbox 
\"f960789aacbd7040b925348d5f9689f4e9366b1051b46b66239583a82a05f6e9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ed5a47912a584b196a5203b2a70cf2aa2df4f5c340817f1b967943bcf1c89ab2\"" Sep 4 16:23:26.232464 containerd[1592]: time="2025-09-04T16:23:26.232428214Z" level=info msg="StartContainer for \"ed5a47912a584b196a5203b2a70cf2aa2df4f5c340817f1b967943bcf1c89ab2\"" Sep 4 16:23:26.235132 containerd[1592]: time="2025-09-04T16:23:26.235095670Z" level=info msg="connecting to shim ed5a47912a584b196a5203b2a70cf2aa2df4f5c340817f1b967943bcf1c89ab2" address="unix:///run/containerd/s/16b977457f5142b4f2068e4ab7b36a929766f4dd19f771c0ed7e7b8c6805683f" protocol=ttrpc version=3 Sep 4 16:23:26.236361 containerd[1592]: time="2025-09-04T16:23:26.236321072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:763247a17418e1c1dfcac4529a6f872b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6ebfc0dcccd968ce40804d39ef8e284cf7a0c41bdfe0b6d8ff766bf05c67d56\"" Sep 4 16:23:26.236939 kubelet[2388]: E0904 16:23:26.236907 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:26.239857 containerd[1592]: time="2025-09-04T16:23:26.239826573Z" level=info msg="CreateContainer within sandbox \"e6ebfc0dcccd968ce40804d39ef8e284cf7a0c41bdfe0b6d8ff766bf05c67d56\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 16:23:26.241391 containerd[1592]: time="2025-09-04T16:23:26.241368239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4204dff150652d2941f8294e43d4ea38497c6f02d2b3ec9dba1811d00fc15b33\"" Sep 4 16:23:26.241845 kubelet[2388]: E0904 16:23:26.241781 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:26.242961 containerd[1592]: time="2025-09-04T16:23:26.242936652Z" level=info msg="CreateContainer within sandbox \"4204dff150652d2941f8294e43d4ea38497c6f02d2b3ec9dba1811d00fc15b33\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 16:23:26.252108 containerd[1592]: time="2025-09-04T16:23:26.252083353Z" level=info msg="Container 88307f2125ab4bbdf840f74b9522516256e04b990cfd1a5e1d23d2bcd42bfccb: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:23:26.261198 containerd[1592]: time="2025-09-04T16:23:26.261117393Z" level=info msg="Container dadeb4acf4f2660304222cef0266544fee6b4c9023a2ae78059f76fea1d743c3: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:23:26.264506 containerd[1592]: time="2025-09-04T16:23:26.264468132Z" level=info msg="CreateContainer within sandbox \"4204dff150652d2941f8294e43d4ea38497c6f02d2b3ec9dba1811d00fc15b33\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"88307f2125ab4bbdf840f74b9522516256e04b990cfd1a5e1d23d2bcd42bfccb\"" Sep 4 16:23:26.265060 containerd[1592]: time="2025-09-04T16:23:26.264858260Z" level=info msg="StartContainer for \"88307f2125ab4bbdf840f74b9522516256e04b990cfd1a5e1d23d2bcd42bfccb\"" Sep 4 16:23:26.265151 systemd[1]: Started cri-containerd-ed5a47912a584b196a5203b2a70cf2aa2df4f5c340817f1b967943bcf1c89ab2.scope - libcontainer container ed5a47912a584b196a5203b2a70cf2aa2df4f5c340817f1b967943bcf1c89ab2. 
Sep 4 16:23:26.266821 containerd[1592]: time="2025-09-04T16:23:26.266781827Z" level=info msg="connecting to shim 88307f2125ab4bbdf840f74b9522516256e04b990cfd1a5e1d23d2bcd42bfccb" address="unix:///run/containerd/s/8b69f181ff993e594d836c47966c62a123ef5475712a8df79d30b5848133aaa9" protocol=ttrpc version=3 Sep 4 16:23:26.273560 containerd[1592]: time="2025-09-04T16:23:26.273509368Z" level=info msg="CreateContainer within sandbox \"e6ebfc0dcccd968ce40804d39ef8e284cf7a0c41bdfe0b6d8ff766bf05c67d56\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dadeb4acf4f2660304222cef0266544fee6b4c9023a2ae78059f76fea1d743c3\"" Sep 4 16:23:26.274245 containerd[1592]: time="2025-09-04T16:23:26.274222819Z" level=info msg="StartContainer for \"dadeb4acf4f2660304222cef0266544fee6b4c9023a2ae78059f76fea1d743c3\"" Sep 4 16:23:26.275606 containerd[1592]: time="2025-09-04T16:23:26.275426969Z" level=info msg="connecting to shim dadeb4acf4f2660304222cef0266544fee6b4c9023a2ae78059f76fea1d743c3" address="unix:///run/containerd/s/e6395fc363406ff44acc0f2b5809c1742200f824148dd0f9b9039609c6d09030" protocol=ttrpc version=3 Sep 4 16:23:26.319951 kubelet[2388]: I0904 16:23:26.319917 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 16:23:26.320365 kubelet[2388]: E0904 16:23:26.320328 2388 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" Sep 4 16:23:26.369997 systemd[1]: Started cri-containerd-88307f2125ab4bbdf840f74b9522516256e04b990cfd1a5e1d23d2bcd42bfccb.scope - libcontainer container 88307f2125ab4bbdf840f74b9522516256e04b990cfd1a5e1d23d2bcd42bfccb. Sep 4 16:23:26.387042 systemd[1]: Started cri-containerd-dadeb4acf4f2660304222cef0266544fee6b4c9023a2ae78059f76fea1d743c3.scope - libcontainer container dadeb4acf4f2660304222cef0266544fee6b4c9023a2ae78059f76fea1d743c3. 
Sep 4 16:23:26.396410 kubelet[2388]: W0904 16:23:26.396344 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Sep 4 16:23:26.396493 kubelet[2388]: E0904 16:23:26.396438 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:23:26.409570 kubelet[2388]: W0904 16:23:26.409491 2388 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Sep 4 16:23:26.409644 kubelet[2388]: E0904 16:23:26.409579 2388 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Sep 4 16:23:26.410754 containerd[1592]: time="2025-09-04T16:23:26.410717165Z" level=info msg="StartContainer for \"ed5a47912a584b196a5203b2a70cf2aa2df4f5c340817f1b967943bcf1c89ab2\" returns successfully" Sep 4 16:23:26.441701 containerd[1592]: time="2025-09-04T16:23:26.441601022Z" level=info msg="StartContainer for \"88307f2125ab4bbdf840f74b9522516256e04b990cfd1a5e1d23d2bcd42bfccb\" returns successfully" Sep 4 16:23:26.453957 containerd[1592]: time="2025-09-04T16:23:26.453925489Z" level=info msg="StartContainer for \"dadeb4acf4f2660304222cef0266544fee6b4c9023a2ae78059f76fea1d743c3\" returns successfully" 
Sep 4 16:23:26.584928 kubelet[2388]: E0904 16:23:26.584889 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 16:23:26.585094 kubelet[2388]: E0904 16:23:26.585013 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:26.587135 kubelet[2388]: E0904 16:23:26.587111 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 16:23:26.587322 kubelet[2388]: E0904 16:23:26.587298 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:26.589004 kubelet[2388]: E0904 16:23:26.588983 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 16:23:26.589091 kubelet[2388]: E0904 16:23:26.589073 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:27.122030 kubelet[2388]: I0904 16:23:27.121991 2388 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 16:23:27.590921 kubelet[2388]: E0904 16:23:27.590858 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 16:23:27.591340 kubelet[2388]: E0904 16:23:27.591017 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:27.591340 kubelet[2388]: E0904 16:23:27.591237 
2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 16:23:27.591340 kubelet[2388]: E0904 16:23:27.591323 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:27.918185 kubelet[2388]: E0904 16:23:27.918048 2388 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 16:23:28.021541 kubelet[2388]: I0904 16:23:28.021482 2388 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 16:23:28.021541 kubelet[2388]: E0904 16:23:28.021527 2388 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 4 16:23:28.030472 kubelet[2388]: E0904 16:23:28.030402 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:23:28.066036 kubelet[2388]: E0904 16:23:28.065931 2388 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.186220ecaef804c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 16:23:25.547775168 +0000 UTC m=+0.539506388,LastTimestamp:2025-09-04 16:23:25.547775168 +0000 UTC m=+0.539506388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 16:23:28.131003 kubelet[2388]: E0904 16:23:28.130961 2388 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"localhost\" not found" Sep 4 16:23:28.231860 kubelet[2388]: E0904 16:23:28.231673 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:23:28.332775 kubelet[2388]: E0904 16:23:28.332725 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:23:28.433309 kubelet[2388]: E0904 16:23:28.433288 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:23:28.534252 kubelet[2388]: E0904 16:23:28.534208 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:23:28.592220 kubelet[2388]: E0904 16:23:28.592182 2388 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 16:23:28.592558 kubelet[2388]: E0904 16:23:28.592309 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:28.635248 kubelet[2388]: E0904 16:23:28.635199 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:23:28.735923 kubelet[2388]: E0904 16:23:28.735893 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:23:28.836922 kubelet[2388]: E0904 16:23:28.836781 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:23:28.937587 kubelet[2388]: E0904 16:23:28.937536 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:23:29.038429 kubelet[2388]: E0904 16:23:29.038384 2388 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"localhost\" not found" Sep 4 16:23:29.139442 kubelet[2388]: E0904 16:23:29.139303 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:23:29.240063 kubelet[2388]: E0904 16:23:29.240016 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:23:29.340810 kubelet[2388]: E0904 16:23:29.340743 2388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:23:29.454388 kubelet[2388]: I0904 16:23:29.453936 2388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 16:23:29.460670 kubelet[2388]: I0904 16:23:29.460645 2388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 16:23:29.463998 kubelet[2388]: I0904 16:23:29.463941 2388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 16:23:29.543743 kubelet[2388]: I0904 16:23:29.543699 2388 apiserver.go:52] "Watching apiserver" Sep 4 16:23:29.545678 kubelet[2388]: E0904 16:23:29.545645 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:29.545916 kubelet[2388]: E0904 16:23:29.545738 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:29.552762 kubelet[2388]: I0904 16:23:29.552743 2388 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 16:23:29.593452 kubelet[2388]: E0904 16:23:29.593421 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:29.995471 systemd[1]: Reload requested from client PID 2666 ('systemctl') (unit session-7.scope)... Sep 4 16:23:29.995487 systemd[1]: Reloading... Sep 4 16:23:30.133905 zram_generator::config[2711]: No configuration found. Sep 4 16:23:30.241719 kubelet[2388]: E0904 16:23:30.241686 2388 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:30.367392 systemd[1]: Reloading finished in 371 ms. Sep 4 16:23:30.396190 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:23:30.413070 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 16:23:30.413400 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 16:23:30.413459 systemd[1]: kubelet.service: Consumed 998ms CPU time, 133.1M memory peak. Sep 4 16:23:30.415329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 16:23:30.612533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 16:23:30.623176 (kubelet)[2755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 16:23:30.672564 kubelet[2755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 16:23:30.672564 kubelet[2755]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 16:23:30.672564 kubelet[2755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 16:23:30.672997 kubelet[2755]: I0904 16:23:30.672675 2755 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 16:23:30.679480 kubelet[2755]: I0904 16:23:30.679440 2755 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 16:23:30.679480 kubelet[2755]: I0904 16:23:30.679468 2755 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 16:23:30.679770 kubelet[2755]: I0904 16:23:30.679749 2755 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 16:23:30.681041 kubelet[2755]: I0904 16:23:30.681019 2755 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 16:23:30.683125 kubelet[2755]: I0904 16:23:30.683078 2755 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 16:23:30.692619 kubelet[2755]: I0904 16:23:30.692586 2755 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 16:23:30.698376 kubelet[2755]: I0904 16:23:30.698346 2755 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 16:23:30.698696 kubelet[2755]: I0904 16:23:30.698649 2755 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 16:23:30.698912 kubelet[2755]: I0904 16:23:30.698688 2755 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 16:23:30.699009 kubelet[2755]: I0904 16:23:30.698922 2755 topology_manager.go:138] "Creating topology manager with none policy" Sep 
4 16:23:30.699009 kubelet[2755]: I0904 16:23:30.698931 2755 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 16:23:30.699009 kubelet[2755]: I0904 16:23:30.698988 2755 state_mem.go:36] "Initialized new in-memory state store" Sep 4 16:23:30.699169 kubelet[2755]: I0904 16:23:30.699152 2755 kubelet.go:446] "Attempting to sync node with API server" Sep 4 16:23:30.699196 kubelet[2755]: I0904 16:23:30.699178 2755 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 16:23:30.699227 kubelet[2755]: I0904 16:23:30.699200 2755 kubelet.go:352] "Adding apiserver pod source" Sep 4 16:23:30.699227 kubelet[2755]: I0904 16:23:30.699211 2755 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 16:23:30.700317 kubelet[2755]: I0904 16:23:30.700295 2755 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 4 16:23:30.700685 kubelet[2755]: I0904 16:23:30.700659 2755 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 16:23:30.701128 kubelet[2755]: I0904 16:23:30.701101 2755 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 16:23:30.701161 kubelet[2755]: I0904 16:23:30.701133 2755 server.go:1287] "Started kubelet" Sep 4 16:23:30.704217 kubelet[2755]: I0904 16:23:30.704064 2755 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 16:23:30.706134 kubelet[2755]: I0904 16:23:30.704352 2755 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 16:23:30.706134 kubelet[2755]: I0904 16:23:30.704384 2755 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 16:23:30.706134 kubelet[2755]: I0904 16:23:30.705156 2755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 16:23:30.706134 kubelet[2755]: I0904 16:23:30.705196 2755 
volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 16:23:30.706134 kubelet[2755]: I0904 16:23:30.705274 2755 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 16:23:30.706134 kubelet[2755]: E0904 16:23:30.705721 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 16:23:30.706941 kubelet[2755]: I0904 16:23:30.706923 2755 server.go:479] "Adding debug handlers to kubelet server" Sep 4 16:23:30.713229 kubelet[2755]: I0904 16:23:30.713196 2755 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 16:23:30.713925 kubelet[2755]: I0904 16:23:30.713391 2755 reconciler.go:26] "Reconciler: start to sync state" Sep 4 16:23:30.715832 kubelet[2755]: I0904 16:23:30.715621 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 16:23:30.717508 kubelet[2755]: E0904 16:23:30.717464 2755 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 16:23:30.718091 kubelet[2755]: I0904 16:23:30.718056 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 16:23:30.718251 kubelet[2755]: I0904 16:23:30.718197 2755 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 16:23:30.718307 kubelet[2755]: I0904 16:23:30.718296 2755 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 4 16:23:30.718377 kubelet[2755]: I0904 16:23:30.718369 2755 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 16:23:30.718534 kubelet[2755]: E0904 16:23:30.718516 2755 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 16:23:30.722395 kubelet[2755]: I0904 16:23:30.722370 2755 factory.go:221] Registration of the systemd container factory successfully Sep 4 16:23:30.722474 kubelet[2755]: I0904 16:23:30.722452 2755 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 16:23:30.725697 kubelet[2755]: I0904 16:23:30.725637 2755 factory.go:221] Registration of the containerd container factory successfully Sep 4 16:23:30.755484 kubelet[2755]: I0904 16:23:30.755456 2755 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 16:23:30.755484 kubelet[2755]: I0904 16:23:30.755471 2755 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 16:23:30.755484 kubelet[2755]: I0904 16:23:30.755488 2755 state_mem.go:36] "Initialized new in-memory state store" Sep 4 16:23:30.755693 kubelet[2755]: I0904 16:23:30.755652 2755 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 16:23:30.755693 kubelet[2755]: I0904 16:23:30.755668 2755 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 16:23:30.755693 kubelet[2755]: I0904 16:23:30.755687 2755 policy_none.go:49] "None policy: Start" Sep 4 16:23:30.755693 kubelet[2755]: I0904 16:23:30.755697 2755 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 16:23:30.755693 kubelet[2755]: I0904 16:23:30.755707 2755 state_mem.go:35] "Initializing new in-memory state store" Sep 4 16:23:30.755964 kubelet[2755]: I0904 16:23:30.755803 2755 state_mem.go:75] "Updated machine memory state" Sep 4 16:23:30.762412 kubelet[2755]: I0904 
16:23:30.762389 2755 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 16:23:30.762575 kubelet[2755]: I0904 16:23:30.762559 2755 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 16:23:30.762615 kubelet[2755]: I0904 16:23:30.762575 2755 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 16:23:30.762803 kubelet[2755]: I0904 16:23:30.762768 2755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 16:23:30.763963 kubelet[2755]: E0904 16:23:30.763938 2755 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 16:23:30.820077 kubelet[2755]: I0904 16:23:30.820044 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 16:23:30.820249 kubelet[2755]: I0904 16:23:30.820225 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 16:23:30.820393 kubelet[2755]: I0904 16:23:30.820364 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 16:23:30.825939 kubelet[2755]: E0904 16:23:30.825914 2755 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 16:23:30.826226 kubelet[2755]: E0904 16:23:30.826203 2755 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 4 16:23:30.826226 kubelet[2755]: E0904 16:23:30.826209 2755 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 4 16:23:30.869409 kubelet[2755]: I0904 16:23:30.869373 2755 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 16:23:30.878721 kubelet[2755]: I0904 16:23:30.878547 2755 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 4 16:23:30.878721 kubelet[2755]: I0904 16:23:30.878684 2755 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 16:23:30.914809 kubelet[2755]: I0904 16:23:30.914771 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/763247a17418e1c1dfcac4529a6f872b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"763247a17418e1c1dfcac4529a6f872b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:23:30.914903 kubelet[2755]: I0904 16:23:30.914812 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/763247a17418e1c1dfcac4529a6f872b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"763247a17418e1c1dfcac4529a6f872b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:23:30.914903 kubelet[2755]: I0904 16:23:30.914856 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/763247a17418e1c1dfcac4529a6f872b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"763247a17418e1c1dfcac4529a6f872b\") " pod="kube-system/kube-apiserver-localhost" Sep 4 16:23:30.914956 kubelet[2755]: I0904 16:23:30.914922 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:23:30.914982 kubelet[2755]: I0904 16:23:30.914953 2755 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:23:30.915047 kubelet[2755]: I0904 16:23:30.915013 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:23:30.915077 kubelet[2755]: I0904 16:23:30.915050 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:23:30.915104 kubelet[2755]: I0904 16:23:30.915085 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 4 16:23:30.915128 kubelet[2755]: I0904 16:23:30.915105 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 16:23:31.152411 kubelet[2755]: E0904 
16:23:31.152275 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:31.152411 kubelet[2755]: E0904 16:23:31.152322 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:31.152591 kubelet[2755]: E0904 16:23:31.152466 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:31.700143 kubelet[2755]: I0904 16:23:31.700092 2755 apiserver.go:52] "Watching apiserver" Sep 4 16:23:31.714195 kubelet[2755]: I0904 16:23:31.714160 2755 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 16:23:31.735038 kubelet[2755]: I0904 16:23:31.735006 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 16:23:31.735192 kubelet[2755]: E0904 16:23:31.735147 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:31.735398 kubelet[2755]: E0904 16:23:31.735380 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:31.854307 kubelet[2755]: E0904 16:23:31.854238 2755 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 4 16:23:31.854527 kubelet[2755]: E0904 16:23:31.854507 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 4 16:23:31.882208 kubelet[2755]: I0904 16:23:31.882147 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.882121643 podStartE2EDuration="2.882121643s" podCreationTimestamp="2025-09-04 16:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:23:31.874012197 +0000 UTC m=+1.246319509" watchObservedRunningTime="2025-09-04 16:23:31.882121643 +0000 UTC m=+1.254428945" Sep 4 16:23:31.892100 kubelet[2755]: I0904 16:23:31.892022 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.892003334 podStartE2EDuration="2.892003334s" podCreationTimestamp="2025-09-04 16:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:23:31.891100306 +0000 UTC m=+1.263407608" watchObservedRunningTime="2025-09-04 16:23:31.892003334 +0000 UTC m=+1.264310636" Sep 4 16:23:31.892290 kubelet[2755]: I0904 16:23:31.892149 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.8921447000000002 podStartE2EDuration="2.8921447s" podCreationTimestamp="2025-09-04 16:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:23:31.883036797 +0000 UTC m=+1.255344099" watchObservedRunningTime="2025-09-04 16:23:31.8921447 +0000 UTC m=+1.264452002" Sep 4 16:23:32.736577 kubelet[2755]: E0904 16:23:32.736537 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:32.737039 kubelet[2755]: E0904 16:23:32.736659 2755 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:33.299706 kubelet[2755]: E0904 16:23:33.299666 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:35.030753 kubelet[2755]: E0904 16:23:35.030709 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:36.415595 kubelet[2755]: I0904 16:23:36.415559 2755 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 16:23:36.416056 containerd[1592]: time="2025-09-04T16:23:36.415954234Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 16:23:36.416286 kubelet[2755]: I0904 16:23:36.416236 2755 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 16:23:38.435105 systemd[1]: Created slice kubepods-besteffort-pod098a5f48_86c6_4dd5_9e11_519188492c58.slice - libcontainer container kubepods-besteffort-pod098a5f48_86c6_4dd5_9e11_519188492c58.slice. 
Sep 4 16:23:38.501466 kubelet[2755]: I0904 16:23:38.501414 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wjlf\" (UniqueName: \"kubernetes.io/projected/098a5f48-86c6-4dd5-9e11-519188492c58-kube-api-access-8wjlf\") pod \"kube-proxy-wj64r\" (UID: \"098a5f48-86c6-4dd5-9e11-519188492c58\") " pod="kube-system/kube-proxy-wj64r" Sep 4 16:23:38.501466 kubelet[2755]: I0904 16:23:38.501452 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/098a5f48-86c6-4dd5-9e11-519188492c58-xtables-lock\") pod \"kube-proxy-wj64r\" (UID: \"098a5f48-86c6-4dd5-9e11-519188492c58\") " pod="kube-system/kube-proxy-wj64r" Sep 4 16:23:38.501466 kubelet[2755]: I0904 16:23:38.501474 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/098a5f48-86c6-4dd5-9e11-519188492c58-lib-modules\") pod \"kube-proxy-wj64r\" (UID: \"098a5f48-86c6-4dd5-9e11-519188492c58\") " pod="kube-system/kube-proxy-wj64r" Sep 4 16:23:38.502118 kubelet[2755]: I0904 16:23:38.501494 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/098a5f48-86c6-4dd5-9e11-519188492c58-kube-proxy\") pod \"kube-proxy-wj64r\" (UID: \"098a5f48-86c6-4dd5-9e11-519188492c58\") " pod="kube-system/kube-proxy-wj64r" Sep 4 16:23:38.573750 systemd[1]: Created slice kubepods-besteffort-pod1badc7cc_fb7a_4948_9bb2_0821b0dfb7a0.slice - libcontainer container kubepods-besteffort-pod1badc7cc_fb7a_4948_9bb2_0821b0dfb7a0.slice. 
Sep 4 16:23:38.703054 kubelet[2755]: I0904 16:23:38.702915 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1badc7cc-fb7a-4948-9bb2-0821b0dfb7a0-var-lib-calico\") pod \"tigera-operator-755d956888-d6ps6\" (UID: \"1badc7cc-fb7a-4948-9bb2-0821b0dfb7a0\") " pod="tigera-operator/tigera-operator-755d956888-d6ps6" Sep 4 16:23:38.703054 kubelet[2755]: I0904 16:23:38.702954 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rjbz\" (UniqueName: \"kubernetes.io/projected/1badc7cc-fb7a-4948-9bb2-0821b0dfb7a0-kube-api-access-8rjbz\") pod \"tigera-operator-755d956888-d6ps6\" (UID: \"1badc7cc-fb7a-4948-9bb2-0821b0dfb7a0\") " pod="tigera-operator/tigera-operator-755d956888-d6ps6" Sep 4 16:23:38.749226 kubelet[2755]: E0904 16:23:38.749199 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:38.749708 containerd[1592]: time="2025-09-04T16:23:38.749675582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj64r,Uid:098a5f48-86c6-4dd5-9e11-519188492c58,Namespace:kube-system,Attempt:0,}" Sep 4 16:23:38.775533 containerd[1592]: time="2025-09-04T16:23:38.775470039Z" level=info msg="connecting to shim efbb693bb7714e60183042634b721105e75607e4bad7ec542fdcf65f20a437e1" address="unix:///run/containerd/s/a17f280c91dc3338c03d4d065aa6408f97431ad85770b4091028ab8db0003d80" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:23:38.809086 systemd[1]: Started cri-containerd-efbb693bb7714e60183042634b721105e75607e4bad7ec542fdcf65f20a437e1.scope - libcontainer container efbb693bb7714e60183042634b721105e75607e4bad7ec542fdcf65f20a437e1. 
Sep 4 16:23:38.837579 containerd[1592]: time="2025-09-04T16:23:38.837543685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj64r,Uid:098a5f48-86c6-4dd5-9e11-519188492c58,Namespace:kube-system,Attempt:0,} returns sandbox id \"efbb693bb7714e60183042634b721105e75607e4bad7ec542fdcf65f20a437e1\"" Sep 4 16:23:38.838373 kubelet[2755]: E0904 16:23:38.838351 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:38.840222 containerd[1592]: time="2025-09-04T16:23:38.840186510Z" level=info msg="CreateContainer within sandbox \"efbb693bb7714e60183042634b721105e75607e4bad7ec542fdcf65f20a437e1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 16:23:38.853921 containerd[1592]: time="2025-09-04T16:23:38.852783862Z" level=info msg="Container 2ce911e3d83c7a96c944d2a35bc6bc71ada8d69fb2102d02f748820cb428fac0: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:23:38.860875 containerd[1592]: time="2025-09-04T16:23:38.860825847Z" level=info msg="CreateContainer within sandbox \"efbb693bb7714e60183042634b721105e75607e4bad7ec542fdcf65f20a437e1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2ce911e3d83c7a96c944d2a35bc6bc71ada8d69fb2102d02f748820cb428fac0\"" Sep 4 16:23:38.861383 containerd[1592]: time="2025-09-04T16:23:38.861353413Z" level=info msg="StartContainer for \"2ce911e3d83c7a96c944d2a35bc6bc71ada8d69fb2102d02f748820cb428fac0\"" Sep 4 16:23:38.862690 containerd[1592]: time="2025-09-04T16:23:38.862666754Z" level=info msg="connecting to shim 2ce911e3d83c7a96c944d2a35bc6bc71ada8d69fb2102d02f748820cb428fac0" address="unix:///run/containerd/s/a17f280c91dc3338c03d4d065aa6408f97431ad85770b4091028ab8db0003d80" protocol=ttrpc version=3 Sep 4 16:23:38.876734 containerd[1592]: time="2025-09-04T16:23:38.876696191Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-755d956888-d6ps6,Uid:1badc7cc-fb7a-4948-9bb2-0821b0dfb7a0,Namespace:tigera-operator,Attempt:0,}" Sep 4 16:23:38.902192 containerd[1592]: time="2025-09-04T16:23:38.902056028Z" level=info msg="connecting to shim 82a32dd1f89889aa701e4189ba4c7a1c3677268a6f5e7e3994f045f3b1e2f179" address="unix:///run/containerd/s/04359394e27e2ee453241df8f756b4c7f552d13faecb8d685733dfdfb6944277" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:23:38.918052 systemd[1]: Started cri-containerd-2ce911e3d83c7a96c944d2a35bc6bc71ada8d69fb2102d02f748820cb428fac0.scope - libcontainer container 2ce911e3d83c7a96c944d2a35bc6bc71ada8d69fb2102d02f748820cb428fac0. Sep 4 16:23:38.931110 systemd[1]: Started cri-containerd-82a32dd1f89889aa701e4189ba4c7a1c3677268a6f5e7e3994f045f3b1e2f179.scope - libcontainer container 82a32dd1f89889aa701e4189ba4c7a1c3677268a6f5e7e3994f045f3b1e2f179. Sep 4 16:23:38.969009 containerd[1592]: time="2025-09-04T16:23:38.968909763Z" level=info msg="StartContainer for \"2ce911e3d83c7a96c944d2a35bc6bc71ada8d69fb2102d02f748820cb428fac0\" returns successfully" Sep 4 16:23:38.982029 containerd[1592]: time="2025-09-04T16:23:38.981991758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-d6ps6,Uid:1badc7cc-fb7a-4948-9bb2-0821b0dfb7a0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"82a32dd1f89889aa701e4189ba4c7a1c3677268a6f5e7e3994f045f3b1e2f179\"" Sep 4 16:23:38.985124 containerd[1592]: time="2025-09-04T16:23:38.985090072Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 4 16:23:39.750121 kubelet[2755]: E0904 16:23:39.750082 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:39.761280 kubelet[2755]: I0904 16:23:39.760972 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wj64r" 
podStartSLOduration=1.7609469450000002 podStartE2EDuration="1.760946945s" podCreationTimestamp="2025-09-04 16:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:23:39.760179017 +0000 UTC m=+9.132486319" watchObservedRunningTime="2025-09-04 16:23:39.760946945 +0000 UTC m=+9.133254247" Sep 4 16:23:40.039530 kubelet[2755]: E0904 16:23:40.039494 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:40.445266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158264809.mount: Deactivated successfully. Sep 4 16:23:40.751915 kubelet[2755]: E0904 16:23:40.751780 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:40.824956 containerd[1592]: time="2025-09-04T16:23:40.824893763Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:23:40.825549 containerd[1592]: time="2025-09-04T16:23:40.825516678Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 4 16:23:40.826631 containerd[1592]: time="2025-09-04T16:23:40.826599754Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:23:40.828502 containerd[1592]: time="2025-09-04T16:23:40.828466340Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:23:40.829096 containerd[1592]: time="2025-09-04T16:23:40.829057668Z" 
level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.843942328s" Sep 4 16:23:40.829096 containerd[1592]: time="2025-09-04T16:23:40.829092271Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 4 16:23:40.831529 containerd[1592]: time="2025-09-04T16:23:40.831135155Z" level=info msg="CreateContainer within sandbox \"82a32dd1f89889aa701e4189ba4c7a1c3677268a6f5e7e3994f045f3b1e2f179\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 16:23:40.840221 containerd[1592]: time="2025-09-04T16:23:40.840190464Z" level=info msg="Container b79c0137f810899ff6561cc9c69aa0e4c59538fac4acc79d403c9ecd3e992163: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:23:40.846930 containerd[1592]: time="2025-09-04T16:23:40.846900889Z" level=info msg="CreateContainer within sandbox \"82a32dd1f89889aa701e4189ba4c7a1c3677268a6f5e7e3994f045f3b1e2f179\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b79c0137f810899ff6561cc9c69aa0e4c59538fac4acc79d403c9ecd3e992163\"" Sep 4 16:23:40.848187 containerd[1592]: time="2025-09-04T16:23:40.847396813Z" level=info msg="StartContainer for \"b79c0137f810899ff6561cc9c69aa0e4c59538fac4acc79d403c9ecd3e992163\"" Sep 4 16:23:40.848424 containerd[1592]: time="2025-09-04T16:23:40.848389381Z" level=info msg="connecting to shim b79c0137f810899ff6561cc9c69aa0e4c59538fac4acc79d403c9ecd3e992163" address="unix:///run/containerd/s/04359394e27e2ee453241df8f756b4c7f552d13faecb8d685733dfdfb6944277" protocol=ttrpc version=3 Sep 4 16:23:40.905031 systemd[1]: Started 
cri-containerd-b79c0137f810899ff6561cc9c69aa0e4c59538fac4acc79d403c9ecd3e992163.scope - libcontainer container b79c0137f810899ff6561cc9c69aa0e4c59538fac4acc79d403c9ecd3e992163. Sep 4 16:23:40.936060 containerd[1592]: time="2025-09-04T16:23:40.936017129Z" level=info msg="StartContainer for \"b79c0137f810899ff6561cc9c69aa0e4c59538fac4acc79d403c9ecd3e992163\" returns successfully" Sep 4 16:23:40.982096 update_engine[1575]: I20250904 16:23:40.981971 1575 update_attempter.cc:509] Updating boot flags... Sep 4 16:23:41.757277 kubelet[2755]: E0904 16:23:41.757220 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:41.765096 kubelet[2755]: I0904 16:23:41.765031 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-d6ps6" podStartSLOduration=1.9187340449999999 podStartE2EDuration="3.765007128s" podCreationTimestamp="2025-09-04 16:23:38 +0000 UTC" firstStartedPulling="2025-09-04 16:23:38.983553133 +0000 UTC m=+8.355860435" lastFinishedPulling="2025-09-04 16:23:40.829826216 +0000 UTC m=+10.202133518" observedRunningTime="2025-09-04 16:23:41.764791505 +0000 UTC m=+11.137098817" watchObservedRunningTime="2025-09-04 16:23:41.765007128 +0000 UTC m=+11.137314430" Sep 4 16:23:43.303758 kubelet[2755]: E0904 16:23:43.303712 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:45.040704 kubelet[2755]: E0904 16:23:45.040661 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:45.762646 kubelet[2755]: E0904 16:23:45.762610 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:48.804652 sudo[1811]: pam_unix(sudo:session): session closed for user root Sep 4 16:23:48.808984 sshd[1810]: Connection closed by 10.0.0.1 port 41076 Sep 4 16:23:48.809465 sshd-session[1807]: pam_unix(sshd:session): session closed for user core Sep 4 16:23:48.819287 systemd[1]: sshd@7-10.0.0.77:22-10.0.0.1:41076.service: Deactivated successfully. Sep 4 16:23:48.826966 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 16:23:48.827974 systemd[1]: session-7.scope: Consumed 4.236s CPU time, 227.2M memory peak. Sep 4 16:23:48.835124 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit. Sep 4 16:23:48.837281 systemd-logind[1574]: Removed session 7. Sep 4 16:23:51.313957 systemd[1]: Created slice kubepods-besteffort-pod2bb2e923_8887_46f8_a5dc_b50558668c6a.slice - libcontainer container kubepods-besteffort-pod2bb2e923_8887_46f8_a5dc_b50558668c6a.slice. Sep 4 16:23:51.378899 kubelet[2755]: I0904 16:23:51.378833 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2bb2e923-8887-46f8-a5dc-b50558668c6a-typha-certs\") pod \"calico-typha-59676f5f49-x9n4n\" (UID: \"2bb2e923-8887-46f8-a5dc-b50558668c6a\") " pod="calico-system/calico-typha-59676f5f49-x9n4n" Sep 4 16:23:51.378899 kubelet[2755]: I0904 16:23:51.378895 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvslx\" (UniqueName: \"kubernetes.io/projected/2bb2e923-8887-46f8-a5dc-b50558668c6a-kube-api-access-xvslx\") pod \"calico-typha-59676f5f49-x9n4n\" (UID: \"2bb2e923-8887-46f8-a5dc-b50558668c6a\") " pod="calico-system/calico-typha-59676f5f49-x9n4n" Sep 4 16:23:51.379350 kubelet[2755]: I0904 16:23:51.378918 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/2bb2e923-8887-46f8-a5dc-b50558668c6a-tigera-ca-bundle\") pod \"calico-typha-59676f5f49-x9n4n\" (UID: \"2bb2e923-8887-46f8-a5dc-b50558668c6a\") " pod="calico-system/calico-typha-59676f5f49-x9n4n" Sep 4 16:23:51.618181 kubelet[2755]: E0904 16:23:51.618054 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:51.618968 containerd[1592]: time="2025-09-04T16:23:51.618919241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59676f5f49-x9n4n,Uid:2bb2e923-8887-46f8-a5dc-b50558668c6a,Namespace:calico-system,Attempt:0,}" Sep 4 16:23:51.667056 containerd[1592]: time="2025-09-04T16:23:51.666947847Z" level=info msg="connecting to shim 7f4310b362402ad369fb8db0a1c8510096522d6b0db9bd0e59d6a6af35db391f" address="unix:///run/containerd/s/a28a3ec0466ce801ae29be8f478ae73a8db31f17e6dfc1880369814d57e10f9b" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:23:51.679226 systemd[1]: Created slice kubepods-besteffort-pod2e75364d_b269_4ce3_afd6_092f803f03ae.slice - libcontainer container kubepods-besteffort-pod2e75364d_b269_4ce3_afd6_092f803f03ae.slice. Sep 4 16:23:51.705013 systemd[1]: Started cri-containerd-7f4310b362402ad369fb8db0a1c8510096522d6b0db9bd0e59d6a6af35db391f.scope - libcontainer container 7f4310b362402ad369fb8db0a1c8510096522d6b0db9bd0e59d6a6af35db391f. 
Sep 4 16:23:51.748351 containerd[1592]: time="2025-09-04T16:23:51.748305885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59676f5f49-x9n4n,Uid:2bb2e923-8887-46f8-a5dc-b50558668c6a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f4310b362402ad369fb8db0a1c8510096522d6b0db9bd0e59d6a6af35db391f\""
Sep 4 16:23:51.749270 kubelet[2755]: E0904 16:23:51.749241 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:23:51.750161 containerd[1592]: time="2025-09-04T16:23:51.750131319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 4 16:23:51.791009 kubelet[2755]: I0904 16:23:51.790972 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e75364d-b269-4ce3-afd6-092f803f03ae-tigera-ca-bundle\") pod \"calico-node-l6jk9\" (UID: \"2e75364d-b269-4ce3-afd6-092f803f03ae\") " pod="calico-system/calico-node-l6jk9"
Sep 4 16:23:51.791164 kubelet[2755]: I0904 16:23:51.791015 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2e75364d-b269-4ce3-afd6-092f803f03ae-policysync\") pod \"calico-node-l6jk9\" (UID: \"2e75364d-b269-4ce3-afd6-092f803f03ae\") " pod="calico-system/calico-node-l6jk9"
Sep 4 16:23:51.791164 kubelet[2755]: I0904 16:23:51.791034 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2e75364d-b269-4ce3-afd6-092f803f03ae-cni-bin-dir\") pod \"calico-node-l6jk9\" (UID: \"2e75364d-b269-4ce3-afd6-092f803f03ae\") " pod="calico-system/calico-node-l6jk9"
Sep 4 16:23:51.791164 kubelet[2755]: I0904 16:23:51.791051 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2e75364d-b269-4ce3-afd6-092f803f03ae-cni-log-dir\") pod \"calico-node-l6jk9\" (UID: \"2e75364d-b269-4ce3-afd6-092f803f03ae\") " pod="calico-system/calico-node-l6jk9"
Sep 4 16:23:51.791164 kubelet[2755]: I0904 16:23:51.791069 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e75364d-b269-4ce3-afd6-092f803f03ae-xtables-lock\") pod \"calico-node-l6jk9\" (UID: \"2e75364d-b269-4ce3-afd6-092f803f03ae\") " pod="calico-system/calico-node-l6jk9"
Sep 4 16:23:51.791281 kubelet[2755]: I0904 16:23:51.791171 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2e75364d-b269-4ce3-afd6-092f803f03ae-flexvol-driver-host\") pod \"calico-node-l6jk9\" (UID: \"2e75364d-b269-4ce3-afd6-092f803f03ae\") " pod="calico-system/calico-node-l6jk9"
Sep 4 16:23:51.791281 kubelet[2755]: I0904 16:23:51.791215 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2e75364d-b269-4ce3-afd6-092f803f03ae-var-lib-calico\") pod \"calico-node-l6jk9\" (UID: \"2e75364d-b269-4ce3-afd6-092f803f03ae\") " pod="calico-system/calico-node-l6jk9"
Sep 4 16:23:51.791281 kubelet[2755]: I0904 16:23:51.791246 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2e75364d-b269-4ce3-afd6-092f803f03ae-cni-net-dir\") pod \"calico-node-l6jk9\" (UID: \"2e75364d-b269-4ce3-afd6-092f803f03ae\") " pod="calico-system/calico-node-l6jk9"
Sep 4 16:23:51.791281 kubelet[2755]: I0904 16:23:51.791270 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2e75364d-b269-4ce3-afd6-092f803f03ae-node-certs\") pod \"calico-node-l6jk9\" (UID: \"2e75364d-b269-4ce3-afd6-092f803f03ae\") " pod="calico-system/calico-node-l6jk9"
Sep 4 16:23:51.791376 kubelet[2755]: I0904 16:23:51.791285 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcjmg\" (UniqueName: \"kubernetes.io/projected/2e75364d-b269-4ce3-afd6-092f803f03ae-kube-api-access-mcjmg\") pod \"calico-node-l6jk9\" (UID: \"2e75364d-b269-4ce3-afd6-092f803f03ae\") " pod="calico-system/calico-node-l6jk9"
Sep 4 16:23:51.791376 kubelet[2755]: I0904 16:23:51.791330 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2e75364d-b269-4ce3-afd6-092f803f03ae-var-run-calico\") pod \"calico-node-l6jk9\" (UID: \"2e75364d-b269-4ce3-afd6-092f803f03ae\") " pod="calico-system/calico-node-l6jk9"
Sep 4 16:23:51.791376 kubelet[2755]: I0904 16:23:51.791365 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e75364d-b269-4ce3-afd6-092f803f03ae-lib-modules\") pod \"calico-node-l6jk9\" (UID: \"2e75364d-b269-4ce3-afd6-092f803f03ae\") " pod="calico-system/calico-node-l6jk9"
Sep 4 16:23:51.902657 kubelet[2755]: E0904 16:23:51.902548 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:51.902657 kubelet[2755]: W0904 16:23:51.902592 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:51.902657 kubelet[2755]: E0904 16:23:51.902650 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:51.904530 kubelet[2755]: E0904 16:23:51.904490 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:51.904530 kubelet[2755]: W0904 16:23:51.904521 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:51.904780 kubelet[2755]: E0904 16:23:51.904548 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:51.956098 kubelet[2755]: E0904 16:23:51.956033 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tl25" podUID="2c4deb78-6061-407f-9164-fbcdb204310d"
Sep 4 16:23:51.983963 containerd[1592]: time="2025-09-04T16:23:51.983845335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l6jk9,Uid:2e75364d-b269-4ce3-afd6-092f803f03ae,Namespace:calico-system,Attempt:0,}"
Sep 4 16:23:52.007316 containerd[1592]: time="2025-09-04T16:23:52.006973909Z" level=info msg="connecting to shim 74a6384a3c6e73a5c5f52bd0668d3c32160012d910bbe54ac65411101e5ad12b" address="unix:///run/containerd/s/a0bff368dc69f2e8f9664e5b899083093014c3a43bbb46ec3f642753a54d726b" namespace=k8s.io protocol=ttrpc version=3
Sep 4 16:23:52.036008 systemd[1]: Started cri-containerd-74a6384a3c6e73a5c5f52bd0668d3c32160012d910bbe54ac65411101e5ad12b.scope - libcontainer container 74a6384a3c6e73a5c5f52bd0668d3c32160012d910bbe54ac65411101e5ad12b.
Sep 4 16:23:52.046892 kubelet[2755]: E0904 16:23:52.046447 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.046892 kubelet[2755]: W0904 16:23:52.046472 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.046892 kubelet[2755]: E0904 16:23:52.046518 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.046892 kubelet[2755]: E0904 16:23:52.046794 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.046892 kubelet[2755]: W0904 16:23:52.046812 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.046892 kubelet[2755]: E0904 16:23:52.046822 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.047170 kubelet[2755]: E0904 16:23:52.047072 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.047170 kubelet[2755]: W0904 16:23:52.047081 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.047170 kubelet[2755]: E0904 16:23:52.047091 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.047589 kubelet[2755]: E0904 16:23:52.047448 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.047589 kubelet[2755]: W0904 16:23:52.047462 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.047589 kubelet[2755]: E0904 16:23:52.047472 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.047777 kubelet[2755]: E0904 16:23:52.047741 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.047777 kubelet[2755]: W0904 16:23:52.047766 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.047777 kubelet[2755]: E0904 16:23:52.047776 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.048018 kubelet[2755]: E0904 16:23:52.047991 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.048018 kubelet[2755]: W0904 16:23:52.048006 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.048018 kubelet[2755]: E0904 16:23:52.048017 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.048335 kubelet[2755]: E0904 16:23:52.048317 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.048335 kubelet[2755]: W0904 16:23:52.048332 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.048411 kubelet[2755]: E0904 16:23:52.048357 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.048592 kubelet[2755]: E0904 16:23:52.048575 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.048592 kubelet[2755]: W0904 16:23:52.048587 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.048655 kubelet[2755]: E0904 16:23:52.048597 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.048838 kubelet[2755]: E0904 16:23:52.048821 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.048838 kubelet[2755]: W0904 16:23:52.048833 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.049936 kubelet[2755]: E0904 16:23:52.049906 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.050219 kubelet[2755]: E0904 16:23:52.050190 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.050219 kubelet[2755]: W0904 16:23:52.050206 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.050219 kubelet[2755]: E0904 16:23:52.050216 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.050446 kubelet[2755]: E0904 16:23:52.050428 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.050446 kubelet[2755]: W0904 16:23:52.050439 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.050446 kubelet[2755]: E0904 16:23:52.050449 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.050683 kubelet[2755]: E0904 16:23:52.050668 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.050683 kubelet[2755]: W0904 16:23:52.050679 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.050728 kubelet[2755]: E0904 16:23:52.050689 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.050917 kubelet[2755]: E0904 16:23:52.050899 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.050917 kubelet[2755]: W0904 16:23:52.050912 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.051004 kubelet[2755]: E0904 16:23:52.050922 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.051123 kubelet[2755]: E0904 16:23:52.051106 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.051123 kubelet[2755]: W0904 16:23:52.051118 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.051184 kubelet[2755]: E0904 16:23:52.051127 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.051908 kubelet[2755]: E0904 16:23:52.051412 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.051908 kubelet[2755]: W0904 16:23:52.051426 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.051908 kubelet[2755]: E0904 16:23:52.051436 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.051908 kubelet[2755]: E0904 16:23:52.051754 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.051908 kubelet[2755]: W0904 16:23:52.051763 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.051908 kubelet[2755]: E0904 16:23:52.051773 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.052094 kubelet[2755]: E0904 16:23:52.051975 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.052094 kubelet[2755]: W0904 16:23:52.051982 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.052094 kubelet[2755]: E0904 16:23:52.051992 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.052163 kubelet[2755]: E0904 16:23:52.052127 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.052163 kubelet[2755]: W0904 16:23:52.052134 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.052163 kubelet[2755]: E0904 16:23:52.052141 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.052538 kubelet[2755]: E0904 16:23:52.052273 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.052538 kubelet[2755]: W0904 16:23:52.052285 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.052538 kubelet[2755]: E0904 16:23:52.052292 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.052538 kubelet[2755]: E0904 16:23:52.052449 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.052538 kubelet[2755]: W0904 16:23:52.052456 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.052538 kubelet[2755]: E0904 16:23:52.052463 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.081292 containerd[1592]: time="2025-09-04T16:23:52.081237956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l6jk9,Uid:2e75364d-b269-4ce3-afd6-092f803f03ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"74a6384a3c6e73a5c5f52bd0668d3c32160012d910bbe54ac65411101e5ad12b\""
Sep 4 16:23:52.095362 kubelet[2755]: E0904 16:23:52.095313 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.095362 kubelet[2755]: W0904 16:23:52.095336 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.095362 kubelet[2755]: E0904 16:23:52.095362 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.095554 kubelet[2755]: I0904 16:23:52.095393 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2c4deb78-6061-407f-9164-fbcdb204310d-registration-dir\") pod \"csi-node-driver-6tl25\" (UID: \"2c4deb78-6061-407f-9164-fbcdb204310d\") " pod="calico-system/csi-node-driver-6tl25"
Sep 4 16:23:52.095603 kubelet[2755]: E0904 16:23:52.095587 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.095603 kubelet[2755]: W0904 16:23:52.095601 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.095663 kubelet[2755]: E0904 16:23:52.095615 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.095663 kubelet[2755]: I0904 16:23:52.095629 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2c4deb78-6061-407f-9164-fbcdb204310d-varrun\") pod \"csi-node-driver-6tl25\" (UID: \"2c4deb78-6061-407f-9164-fbcdb204310d\") " pod="calico-system/csi-node-driver-6tl25"
Sep 4 16:23:52.096091 kubelet[2755]: E0904 16:23:52.095997 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.096091 kubelet[2755]: W0904 16:23:52.096052 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.096209 kubelet[2755]: E0904 16:23:52.096189 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.096353 kubelet[2755]: E0904 16:23:52.096327 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.096353 kubelet[2755]: W0904 16:23:52.096349 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.096491 kubelet[2755]: E0904 16:23:52.096384 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.096541 kubelet[2755]: I0904 16:23:52.096419 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhcjr\" (UniqueName: \"kubernetes.io/projected/2c4deb78-6061-407f-9164-fbcdb204310d-kube-api-access-jhcjr\") pod \"csi-node-driver-6tl25\" (UID: \"2c4deb78-6061-407f-9164-fbcdb204310d\") " pod="calico-system/csi-node-driver-6tl25"
Sep 4 16:23:52.096706 kubelet[2755]: E0904 16:23:52.096687 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.096706 kubelet[2755]: W0904 16:23:52.096699 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.096773 kubelet[2755]: E0904 16:23:52.096735 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.097013 kubelet[2755]: E0904 16:23:52.096938 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.097013 kubelet[2755]: W0904 16:23:52.096981 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.097013 kubelet[2755]: E0904 16:23:52.096991 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.097731 kubelet[2755]: E0904 16:23:52.097713 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.097731 kubelet[2755]: W0904 16:23:52.097727 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.097799 kubelet[2755]: E0904 16:23:52.097744 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.098003 kubelet[2755]: E0904 16:23:52.097979 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.098036 kubelet[2755]: W0904 16:23:52.097994 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.098069 kubelet[2755]: E0904 16:23:52.098037 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.098069 kubelet[2755]: I0904 16:23:52.098056 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2c4deb78-6061-407f-9164-fbcdb204310d-socket-dir\") pod \"csi-node-driver-6tl25\" (UID: \"2c4deb78-6061-407f-9164-fbcdb204310d\") " pod="calico-system/csi-node-driver-6tl25"
Sep 4 16:23:52.098241 kubelet[2755]: E0904 16:23:52.098223 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.098241 kubelet[2755]: W0904 16:23:52.098236 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.098297 kubelet[2755]: E0904 16:23:52.098246 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.098987 kubelet[2755]: E0904 16:23:52.098966 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.098987 kubelet[2755]: W0904 16:23:52.098981 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.099059 kubelet[2755]: E0904 16:23:52.098998 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.099243 kubelet[2755]: E0904 16:23:52.099214 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.099243 kubelet[2755]: W0904 16:23:52.099228 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.099243 kubelet[2755]: E0904 16:23:52.099243 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.099555 kubelet[2755]: E0904 16:23:52.099540 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.099555 kubelet[2755]: W0904 16:23:52.099551 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.099668 kubelet[2755]: E0904 16:23:52.099570 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.099668 kubelet[2755]: I0904 16:23:52.099586 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c4deb78-6061-407f-9164-fbcdb204310d-kubelet-dir\") pod \"csi-node-driver-6tl25\" (UID: \"2c4deb78-6061-407f-9164-fbcdb204310d\") " pod="calico-system/csi-node-driver-6tl25"
Sep 4 16:23:52.100020 kubelet[2755]: E0904 16:23:52.100007 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.100020 kubelet[2755]: W0904 16:23:52.100019 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.100077 kubelet[2755]: E0904 16:23:52.100029 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.100229 kubelet[2755]: E0904 16:23:52.100218 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.100229 kubelet[2755]: W0904 16:23:52.100227 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.100274 kubelet[2755]: E0904 16:23:52.100235 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.100391 kubelet[2755]: E0904 16:23:52.100380 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.100416 kubelet[2755]: W0904 16:23:52.100402 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.100416 kubelet[2755]: E0904 16:23:52.100409 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.200913 kubelet[2755]: E0904 16:23:52.200792 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.200913 kubelet[2755]: W0904 16:23:52.200810 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.200913 kubelet[2755]: E0904 16:23:52.200857 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.201075 kubelet[2755]: E0904 16:23:52.201069 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.201098 kubelet[2755]: W0904 16:23:52.201078 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.201098 kubelet[2755]: E0904 16:23:52.201094 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.201526 kubelet[2755]: E0904 16:23:52.201485 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.201526 kubelet[2755]: W0904 16:23:52.201507 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.201526 kubelet[2755]: E0904 16:23:52.201533 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.201749 kubelet[2755]: E0904 16:23:52.201718 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.201749 kubelet[2755]: W0904 16:23:52.201725 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.201749 kubelet[2755]: E0904 16:23:52.201739 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.201994 kubelet[2755]: E0904 16:23:52.201968 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.201994 kubelet[2755]: W0904 16:23:52.201988 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.202103 kubelet[2755]: E0904 16:23:52.202025 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.202283 kubelet[2755]: E0904 16:23:52.202266 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.202283 kubelet[2755]: W0904 16:23:52.202278 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.202344 kubelet[2755]: E0904 16:23:52.202292 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.202504 kubelet[2755]: E0904 16:23:52.202481 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.202504 kubelet[2755]: W0904 16:23:52.202503 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.202566 kubelet[2755]: E0904 16:23:52.202515 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.202716 kubelet[2755]: E0904 16:23:52.202694 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.202716 kubelet[2755]: W0904 16:23:52.202708 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.202786 kubelet[2755]: E0904 16:23:52.202728 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:52.202942 kubelet[2755]: E0904 16:23:52.202926 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:52.202942 kubelet[2755]: W0904 16:23:52.202938 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:52.203000 kubelet[2755]: E0904 16:23:52.202960 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:52.203265 kubelet[2755]: E0904 16:23:52.203249 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.203265 kubelet[2755]: W0904 16:23:52.203260 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.203325 kubelet[2755]: E0904 16:23:52.203274 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:52.203504 kubelet[2755]: E0904 16:23:52.203488 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.203504 kubelet[2755]: W0904 16:23:52.203499 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.203559 kubelet[2755]: E0904 16:23:52.203512 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:52.203711 kubelet[2755]: E0904 16:23:52.203694 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.203711 kubelet[2755]: W0904 16:23:52.203707 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.203752 kubelet[2755]: E0904 16:23:52.203724 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:52.203991 kubelet[2755]: E0904 16:23:52.203976 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.203991 kubelet[2755]: W0904 16:23:52.203987 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.204075 kubelet[2755]: E0904 16:23:52.204021 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:52.204206 kubelet[2755]: E0904 16:23:52.204194 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.204206 kubelet[2755]: W0904 16:23:52.204204 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.204253 kubelet[2755]: E0904 16:23:52.204234 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:52.204392 kubelet[2755]: E0904 16:23:52.204381 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.204392 kubelet[2755]: W0904 16:23:52.204390 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.204433 kubelet[2755]: E0904 16:23:52.204403 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:52.204595 kubelet[2755]: E0904 16:23:52.204584 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.204627 kubelet[2755]: W0904 16:23:52.204595 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.204627 kubelet[2755]: E0904 16:23:52.204608 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:52.204778 kubelet[2755]: E0904 16:23:52.204764 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.204778 kubelet[2755]: W0904 16:23:52.204772 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.204831 kubelet[2755]: E0904 16:23:52.204784 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:52.204979 kubelet[2755]: E0904 16:23:52.204968 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.204979 kubelet[2755]: W0904 16:23:52.204976 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.205034 kubelet[2755]: E0904 16:23:52.204990 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:52.205171 kubelet[2755]: E0904 16:23:52.205160 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.205171 kubelet[2755]: W0904 16:23:52.205168 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.205213 kubelet[2755]: E0904 16:23:52.205180 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:52.205376 kubelet[2755]: E0904 16:23:52.205360 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.205376 kubelet[2755]: W0904 16:23:52.205372 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.205422 kubelet[2755]: E0904 16:23:52.205389 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:52.205580 kubelet[2755]: E0904 16:23:52.205565 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.205580 kubelet[2755]: W0904 16:23:52.205575 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.205642 kubelet[2755]: E0904 16:23:52.205604 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:52.205753 kubelet[2755]: E0904 16:23:52.205739 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.205753 kubelet[2755]: W0904 16:23:52.205749 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.205794 kubelet[2755]: E0904 16:23:52.205762 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:52.205948 kubelet[2755]: E0904 16:23:52.205933 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.205948 kubelet[2755]: W0904 16:23:52.205943 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.206000 kubelet[2755]: E0904 16:23:52.205956 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:52.206155 kubelet[2755]: E0904 16:23:52.206143 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.206176 kubelet[2755]: W0904 16:23:52.206154 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.206176 kubelet[2755]: E0904 16:23:52.206169 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:52.207335 kubelet[2755]: E0904 16:23:52.207318 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.207335 kubelet[2755]: W0904 16:23:52.207330 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.207389 kubelet[2755]: E0904 16:23:52.207341 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:52.213469 kubelet[2755]: E0904 16:23:52.213437 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:52.213469 kubelet[2755]: W0904 16:23:52.213457 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:52.213469 kubelet[2755]: E0904 16:23:52.213467 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:53.719517 kubelet[2755]: E0904 16:23:53.719447 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tl25" podUID="2c4deb78-6061-407f-9164-fbcdb204310d" Sep 4 16:23:55.320385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1096989085.mount: Deactivated successfully. 
Sep 4 16:23:55.718970 kubelet[2755]: E0904 16:23:55.718805 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tl25" podUID="2c4deb78-6061-407f-9164-fbcdb204310d"
Sep 4 16:23:57.066287 containerd[1592]: time="2025-09-04T16:23:57.066231024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:57.066976 containerd[1592]: time="2025-09-04T16:23:57.066942355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 4 16:23:57.068112 containerd[1592]: time="2025-09-04T16:23:57.068079521Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:57.070205 containerd[1592]: time="2025-09-04T16:23:57.070153424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:23:57.070573 containerd[1592]: time="2025-09-04T16:23:57.070538921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 5.320373906s"
Sep 4 16:23:57.070603 containerd[1592]: time="2025-09-04T16:23:57.070579974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 4 16:23:57.071427 containerd[1592]: time="2025-09-04T16:23:57.071394916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 4 16:23:57.078457 containerd[1592]: time="2025-09-04T16:23:57.078404037Z" level=info msg="CreateContainer within sandbox \"7f4310b362402ad369fb8db0a1c8510096522d6b0db9bd0e59d6a6af35db391f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 4 16:23:57.090256 containerd[1592]: time="2025-09-04T16:23:57.090150500Z" level=info msg="Container 5c3c25b00e604eb3d86633481be894f55ee608f264c4ba218d3b3367c26e9718: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:23:57.100630 containerd[1592]: time="2025-09-04T16:23:57.100577356Z" level=info msg="CreateContainer within sandbox \"7f4310b362402ad369fb8db0a1c8510096522d6b0db9bd0e59d6a6af35db391f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5c3c25b00e604eb3d86633481be894f55ee608f264c4ba218d3b3367c26e9718\""
Sep 4 16:23:57.101271 containerd[1592]: time="2025-09-04T16:23:57.101218372Z" level=info msg="StartContainer for \"5c3c25b00e604eb3d86633481be894f55ee608f264c4ba218d3b3367c26e9718\""
Sep 4 16:23:57.102418 containerd[1592]: time="2025-09-04T16:23:57.102393015Z" level=info msg="connecting to shim 5c3c25b00e604eb3d86633481be894f55ee608f264c4ba218d3b3367c26e9718" address="unix:///run/containerd/s/a28a3ec0466ce801ae29be8f478ae73a8db31f17e6dfc1880369814d57e10f9b" protocol=ttrpc version=3
Sep 4 16:23:57.131019 systemd[1]: Started cri-containerd-5c3c25b00e604eb3d86633481be894f55ee608f264c4ba218d3b3367c26e9718.scope - libcontainer container 5c3c25b00e604eb3d86633481be894f55ee608f264c4ba218d3b3367c26e9718.
Sep 4 16:23:57.182013 containerd[1592]: time="2025-09-04T16:23:57.181962834Z" level=info msg="StartContainer for \"5c3c25b00e604eb3d86633481be894f55ee608f264c4ba218d3b3367c26e9718\" returns successfully"
Sep 4 16:23:57.719617 kubelet[2755]: E0904 16:23:57.719560 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tl25" podUID="2c4deb78-6061-407f-9164-fbcdb204310d"
Sep 4 16:23:57.792456 kubelet[2755]: E0904 16:23:57.792417 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:23:57.801244 kubelet[2755]: I0904 16:23:57.801196 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59676f5f49-x9n4n" podStartSLOduration=1.479618911 podStartE2EDuration="6.80118036s" podCreationTimestamp="2025-09-04 16:23:51 +0000 UTC" firstStartedPulling="2025-09-04 16:23:51.749740233 +0000 UTC m=+21.122047525" lastFinishedPulling="2025-09-04 16:23:57.071301672 +0000 UTC m=+26.443608974" observedRunningTime="2025-09-04 16:23:57.800667359 +0000 UTC m=+27.172974661" watchObservedRunningTime="2025-09-04 16:23:57.80118036 +0000 UTC m=+27.173487652"
Sep 4 16:23:57.892944 kubelet[2755]: E0904 16:23:57.892913 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.892944 kubelet[2755]: W0904 16:23:57.892934 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.892944 kubelet[2755]: E0904 16:23:57.892957 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.893131 kubelet[2755]: E0904 16:23:57.893111 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.893131 kubelet[2755]: W0904 16:23:57.893128 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.893179 kubelet[2755]: E0904 16:23:57.893136 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.893306 kubelet[2755]: E0904 16:23:57.893281 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.893306 kubelet[2755]: W0904 16:23:57.893292 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.893306 kubelet[2755]: E0904 16:23:57.893300 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.893505 kubelet[2755]: E0904 16:23:57.893477 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.893505 kubelet[2755]: W0904 16:23:57.893505 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.893556 kubelet[2755]: E0904 16:23:57.893513 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.893661 kubelet[2755]: E0904 16:23:57.893647 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.893661 kubelet[2755]: W0904 16:23:57.893657 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.893709 kubelet[2755]: E0904 16:23:57.893664 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.893814 kubelet[2755]: E0904 16:23:57.893787 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.893814 kubelet[2755]: W0904 16:23:57.893798 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.893814 kubelet[2755]: E0904 16:23:57.893805 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.894047 kubelet[2755]: E0904 16:23:57.894018 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.894047 kubelet[2755]: W0904 16:23:57.894027 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.894047 kubelet[2755]: E0904 16:23:57.894035 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.894243 kubelet[2755]: E0904 16:23:57.894223 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.894243 kubelet[2755]: W0904 16:23:57.894232 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.894243 kubelet[2755]: E0904 16:23:57.894240 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.894410 kubelet[2755]: E0904 16:23:57.894399 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.894410 kubelet[2755]: W0904 16:23:57.894407 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.894460 kubelet[2755]: E0904 16:23:57.894414 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.894583 kubelet[2755]: E0904 16:23:57.894572 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.894583 kubelet[2755]: W0904 16:23:57.894580 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.894629 kubelet[2755]: E0904 16:23:57.894587 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.894743 kubelet[2755]: E0904 16:23:57.894732 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.894743 kubelet[2755]: W0904 16:23:57.894741 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.894793 kubelet[2755]: E0904 16:23:57.894747 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.894926 kubelet[2755]: E0904 16:23:57.894908 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.894926 kubelet[2755]: W0904 16:23:57.894918 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.894926 kubelet[2755]: E0904 16:23:57.894925 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.895073 kubelet[2755]: E0904 16:23:57.895058 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.895073 kubelet[2755]: W0904 16:23:57.895068 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.895153 kubelet[2755]: E0904 16:23:57.895076 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.895238 kubelet[2755]: E0904 16:23:57.895224 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.895238 kubelet[2755]: W0904 16:23:57.895234 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.895284 kubelet[2755]: E0904 16:23:57.895240 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.895394 kubelet[2755]: E0904 16:23:57.895381 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.895394 kubelet[2755]: W0904 16:23:57.895390 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.895447 kubelet[2755]: E0904 16:23:57.895398 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.942583 kubelet[2755]: E0904 16:23:57.942552 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.942583 kubelet[2755]: W0904 16:23:57.942574 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.942696 kubelet[2755]: E0904 16:23:57.942596 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.942812 kubelet[2755]: E0904 16:23:57.942780 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.942812 kubelet[2755]: W0904 16:23:57.942790 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.942917 kubelet[2755]: E0904 16:23:57.942900 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.943083 kubelet[2755]: E0904 16:23:57.943054 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.943083 kubelet[2755]: W0904 16:23:57.943071 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.943083 kubelet[2755]: E0904 16:23:57.943092 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.943419 kubelet[2755]: E0904 16:23:57.943386 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.943419 kubelet[2755]: W0904 16:23:57.943414 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.943546 kubelet[2755]: E0904 16:23:57.943452 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.943847 kubelet[2755]: E0904 16:23:57.943823 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.943847 kubelet[2755]: W0904 16:23:57.943836 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.943847 kubelet[2755]: E0904 16:23:57.943852 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.944122 kubelet[2755]: E0904 16:23:57.944092 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.944184 kubelet[2755]: W0904 16:23:57.944126 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.944184 kubelet[2755]: E0904 16:23:57.944142 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.944339 kubelet[2755]: E0904 16:23:57.944317 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.944339 kubelet[2755]: W0904 16:23:57.944327 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.944389 kubelet[2755]: E0904 16:23:57.944355 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 16:23:57.944491 kubelet[2755]: E0904 16:23:57.944469 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 16:23:57.944491 kubelet[2755]: W0904 16:23:57.944478 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 16:23:57.944544 kubelet[2755]: E0904 16:23:57.944509 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:57.944635 kubelet[2755]: E0904 16:23:57.944622 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:57.944635 kubelet[2755]: W0904 16:23:57.944631 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:57.944679 kubelet[2755]: E0904 16:23:57.944643 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:57.944806 kubelet[2755]: E0904 16:23:57.944791 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:57.944806 kubelet[2755]: W0904 16:23:57.944802 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:57.944855 kubelet[2755]: E0904 16:23:57.944817 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:57.944985 kubelet[2755]: E0904 16:23:57.944971 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:57.944985 kubelet[2755]: W0904 16:23:57.944981 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:57.945032 kubelet[2755]: E0904 16:23:57.944991 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:57.945176 kubelet[2755]: E0904 16:23:57.945162 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:57.945176 kubelet[2755]: W0904 16:23:57.945173 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:57.945232 kubelet[2755]: E0904 16:23:57.945185 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:57.945413 kubelet[2755]: E0904 16:23:57.945397 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:57.945413 kubelet[2755]: W0904 16:23:57.945408 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:57.945474 kubelet[2755]: E0904 16:23:57.945419 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:57.945588 kubelet[2755]: E0904 16:23:57.945574 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:57.945588 kubelet[2755]: W0904 16:23:57.945584 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:57.945638 kubelet[2755]: E0904 16:23:57.945595 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:57.945820 kubelet[2755]: E0904 16:23:57.945802 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:57.945820 kubelet[2755]: W0904 16:23:57.945817 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:57.945882 kubelet[2755]: E0904 16:23:57.945834 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:57.946031 kubelet[2755]: E0904 16:23:57.946015 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:57.946031 kubelet[2755]: W0904 16:23:57.946027 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:57.946088 kubelet[2755]: E0904 16:23:57.946042 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:57.946200 kubelet[2755]: E0904 16:23:57.946186 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:57.946200 kubelet[2755]: W0904 16:23:57.946196 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:57.946243 kubelet[2755]: E0904 16:23:57.946203 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:57.947355 kubelet[2755]: E0904 16:23:57.947331 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:57.947355 kubelet[2755]: W0904 16:23:57.947344 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:57.947355 kubelet[2755]: E0904 16:23:57.947353 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.791264 kubelet[2755]: I0904 16:23:58.791222 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 16:23:58.791790 kubelet[2755]: E0904 16:23:58.791572 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:23:58.800579 kubelet[2755]: E0904 16:23:58.800549 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.800579 kubelet[2755]: W0904 16:23:58.800563 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.800579 kubelet[2755]: E0904 16:23:58.800578 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.800845 kubelet[2755]: E0904 16:23:58.800743 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.800845 kubelet[2755]: W0904 16:23:58.800753 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.800845 kubelet[2755]: E0904 16:23:58.800763 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.800964 kubelet[2755]: E0904 16:23:58.800949 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.800964 kubelet[2755]: W0904 16:23:58.800960 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.801031 kubelet[2755]: E0904 16:23:58.800973 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.801252 kubelet[2755]: E0904 16:23:58.801225 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.801252 kubelet[2755]: W0904 16:23:58.801238 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.801252 kubelet[2755]: E0904 16:23:58.801248 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.801448 kubelet[2755]: E0904 16:23:58.801433 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.801448 kubelet[2755]: W0904 16:23:58.801445 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.801509 kubelet[2755]: E0904 16:23:58.801453 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.801616 kubelet[2755]: E0904 16:23:58.801601 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.801616 kubelet[2755]: W0904 16:23:58.801611 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.801663 kubelet[2755]: E0904 16:23:58.801619 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.801795 kubelet[2755]: E0904 16:23:58.801768 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.801795 kubelet[2755]: W0904 16:23:58.801784 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.801795 kubelet[2755]: E0904 16:23:58.801791 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.801991 kubelet[2755]: E0904 16:23:58.801974 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.801991 kubelet[2755]: W0904 16:23:58.801984 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.802043 kubelet[2755]: E0904 16:23:58.801993 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.802182 kubelet[2755]: E0904 16:23:58.802165 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.802182 kubelet[2755]: W0904 16:23:58.802177 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.802230 kubelet[2755]: E0904 16:23:58.802186 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.802372 kubelet[2755]: E0904 16:23:58.802348 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.802372 kubelet[2755]: W0904 16:23:58.802358 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.802415 kubelet[2755]: E0904 16:23:58.802376 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.802542 kubelet[2755]: E0904 16:23:58.802526 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.802542 kubelet[2755]: W0904 16:23:58.802536 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.802589 kubelet[2755]: E0904 16:23:58.802543 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.802710 kubelet[2755]: E0904 16:23:58.802694 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.802710 kubelet[2755]: W0904 16:23:58.802703 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.802710 kubelet[2755]: E0904 16:23:58.802710 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.802897 kubelet[2755]: E0904 16:23:58.802878 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.802897 kubelet[2755]: W0904 16:23:58.802891 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.802953 kubelet[2755]: E0904 16:23:58.802900 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.803080 kubelet[2755]: E0904 16:23:58.803063 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.803080 kubelet[2755]: W0904 16:23:58.803073 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.803241 kubelet[2755]: E0904 16:23:58.803080 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.803241 kubelet[2755]: E0904 16:23:58.803229 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.803241 kubelet[2755]: W0904 16:23:58.803236 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.803241 kubelet[2755]: E0904 16:23:58.803243 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.849033 kubelet[2755]: E0904 16:23:58.849001 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.849033 kubelet[2755]: W0904 16:23:58.849014 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.849033 kubelet[2755]: E0904 16:23:58.849026 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.849279 kubelet[2755]: E0904 16:23:58.849245 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.849279 kubelet[2755]: W0904 16:23:58.849258 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.849279 kubelet[2755]: E0904 16:23:58.849274 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.849546 kubelet[2755]: E0904 16:23:58.849520 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.849546 kubelet[2755]: W0904 16:23:58.849535 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.849602 kubelet[2755]: E0904 16:23:58.849551 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.849771 kubelet[2755]: E0904 16:23:58.849754 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.849771 kubelet[2755]: W0904 16:23:58.849769 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.849828 kubelet[2755]: E0904 16:23:58.849786 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.849992 kubelet[2755]: E0904 16:23:58.849973 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.849992 kubelet[2755]: W0904 16:23:58.849986 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.850049 kubelet[2755]: E0904 16:23:58.850002 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.850254 kubelet[2755]: E0904 16:23:58.850233 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.850288 kubelet[2755]: W0904 16:23:58.850260 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.850315 kubelet[2755]: E0904 16:23:58.850290 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.850578 kubelet[2755]: E0904 16:23:58.850560 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.850578 kubelet[2755]: W0904 16:23:58.850575 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.850635 kubelet[2755]: E0904 16:23:58.850593 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.850821 kubelet[2755]: E0904 16:23:58.850808 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.850853 kubelet[2755]: W0904 16:23:58.850820 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.850853 kubelet[2755]: E0904 16:23:58.850837 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.851084 kubelet[2755]: E0904 16:23:58.851065 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.851084 kubelet[2755]: W0904 16:23:58.851079 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.851126 kubelet[2755]: E0904 16:23:58.851092 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.851270 kubelet[2755]: E0904 16:23:58.851257 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.851291 kubelet[2755]: W0904 16:23:58.851268 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.851291 kubelet[2755]: E0904 16:23:58.851285 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.851481 kubelet[2755]: E0904 16:23:58.851470 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.851481 kubelet[2755]: W0904 16:23:58.851479 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.851531 kubelet[2755]: E0904 16:23:58.851492 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.851694 kubelet[2755]: E0904 16:23:58.851683 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.851694 kubelet[2755]: W0904 16:23:58.851693 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.851752 kubelet[2755]: E0904 16:23:58.851707 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.851932 kubelet[2755]: E0904 16:23:58.851921 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.851932 kubelet[2755]: W0904 16:23:58.851930 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.851978 kubelet[2755]: E0904 16:23:58.851943 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.852214 kubelet[2755]: E0904 16:23:58.852188 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.852214 kubelet[2755]: W0904 16:23:58.852203 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.852267 kubelet[2755]: E0904 16:23:58.852220 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.852416 kubelet[2755]: E0904 16:23:58.852400 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.852416 kubelet[2755]: W0904 16:23:58.852412 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.852458 kubelet[2755]: E0904 16:23:58.852427 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.852658 kubelet[2755]: E0904 16:23:58.852642 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.852658 kubelet[2755]: W0904 16:23:58.852655 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.852720 kubelet[2755]: E0904 16:23:58.852671 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:58.852930 kubelet[2755]: E0904 16:23:58.852910 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.852930 kubelet[2755]: W0904 16:23:58.852927 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.852981 kubelet[2755]: E0904 16:23:58.852942 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 16:23:58.853196 kubelet[2755]: E0904 16:23:58.853169 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 16:23:58.853196 kubelet[2755]: W0904 16:23:58.853182 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 16:23:58.853196 kubelet[2755]: E0904 16:23:58.853193 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 16:23:59.719816 kubelet[2755]: E0904 16:23:59.719746 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tl25" podUID="2c4deb78-6061-407f-9164-fbcdb204310d" Sep 4 16:24:00.273406 containerd[1592]: time="2025-09-04T16:24:00.273360194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:00.274190 containerd[1592]: time="2025-09-04T16:24:00.274151319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 4 16:24:00.275274 containerd[1592]: time="2025-09-04T16:24:00.275244633Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:00.277140 containerd[1592]: time="2025-09-04T16:24:00.277111511Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:00.277610 containerd[1592]: time="2025-09-04T16:24:00.277555841Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 3.206128589s" Sep 4 16:24:00.277682 containerd[1592]: time="2025-09-04T16:24:00.277605861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 4 16:24:00.279361 containerd[1592]: time="2025-09-04T16:24:00.279335515Z" level=info msg="CreateContainer within sandbox \"74a6384a3c6e73a5c5f52bd0668d3c32160012d910bbe54ac65411101e5ad12b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 16:24:00.288466 containerd[1592]: time="2025-09-04T16:24:00.288435409Z" level=info msg="Container 41baba90c71ff236e50a4c7f0fbf4d0216b8d463ce8695b50db88527ede63d62: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:24:00.297465 containerd[1592]: time="2025-09-04T16:24:00.297435627Z" level=info msg="CreateContainer within sandbox \"74a6384a3c6e73a5c5f52bd0668d3c32160012d910bbe54ac65411101e5ad12b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"41baba90c71ff236e50a4c7f0fbf4d0216b8d463ce8695b50db88527ede63d62\"" Sep 4 16:24:00.297858 containerd[1592]: time="2025-09-04T16:24:00.297830930Z" level=info msg="StartContainer for \"41baba90c71ff236e50a4c7f0fbf4d0216b8d463ce8695b50db88527ede63d62\"" Sep 4 16:24:00.299089 containerd[1592]: time="2025-09-04T16:24:00.299062579Z" 
level=info msg="connecting to shim 41baba90c71ff236e50a4c7f0fbf4d0216b8d463ce8695b50db88527ede63d62" address="unix:///run/containerd/s/a0bff368dc69f2e8f9664e5b899083093014c3a43bbb46ec3f642753a54d726b" protocol=ttrpc version=3 Sep 4 16:24:00.320989 systemd[1]: Started cri-containerd-41baba90c71ff236e50a4c7f0fbf4d0216b8d463ce8695b50db88527ede63d62.scope - libcontainer container 41baba90c71ff236e50a4c7f0fbf4d0216b8d463ce8695b50db88527ede63d62. Sep 4 16:24:00.371628 systemd[1]: cri-containerd-41baba90c71ff236e50a4c7f0fbf4d0216b8d463ce8695b50db88527ede63d62.scope: Deactivated successfully. Sep 4 16:24:00.373251 containerd[1592]: time="2025-09-04T16:24:00.373201178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41baba90c71ff236e50a4c7f0fbf4d0216b8d463ce8695b50db88527ede63d62\" id:\"41baba90c71ff236e50a4c7f0fbf4d0216b8d463ce8695b50db88527ede63d62\" pid:3485 exited_at:{seconds:1757003040 nanos:372713942}" Sep 4 16:24:00.390636 containerd[1592]: time="2025-09-04T16:24:00.390594954Z" level=info msg="received exit event container_id:\"41baba90c71ff236e50a4c7f0fbf4d0216b8d463ce8695b50db88527ede63d62\" id:\"41baba90c71ff236e50a4c7f0fbf4d0216b8d463ce8695b50db88527ede63d62\" pid:3485 exited_at:{seconds:1757003040 nanos:372713942}" Sep 4 16:24:00.392284 containerd[1592]: time="2025-09-04T16:24:00.392261166Z" level=info msg="StartContainer for \"41baba90c71ff236e50a4c7f0fbf4d0216b8d463ce8695b50db88527ede63d62\" returns successfully" Sep 4 16:24:00.413529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41baba90c71ff236e50a4c7f0fbf4d0216b8d463ce8695b50db88527ede63d62-rootfs.mount: Deactivated successfully. 
Sep 4 16:24:00.798623 containerd[1592]: time="2025-09-04T16:24:00.798157492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 4 16:24:01.719241 kubelet[2755]: E0904 16:24:01.719190 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tl25" podUID="2c4deb78-6061-407f-9164-fbcdb204310d" Sep 4 16:24:03.718994 kubelet[2755]: E0904 16:24:03.718923 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tl25" podUID="2c4deb78-6061-407f-9164-fbcdb204310d" Sep 4 16:24:05.719687 kubelet[2755]: E0904 16:24:05.719619 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tl25" podUID="2c4deb78-6061-407f-9164-fbcdb204310d" Sep 4 16:24:07.719829 kubelet[2755]: E0904 16:24:07.719743 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tl25" podUID="2c4deb78-6061-407f-9164-fbcdb204310d" Sep 4 16:24:09.099468 kubelet[2755]: I0904 16:24:09.099328 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 16:24:09.101305 kubelet[2755]: E0904 16:24:09.100236 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:24:09.719325 kubelet[2755]: E0904 16:24:09.719267 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tl25" podUID="2c4deb78-6061-407f-9164-fbcdb204310d" Sep 4 16:24:09.720075 containerd[1592]: time="2025-09-04T16:24:09.720028763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:09.720893 containerd[1592]: time="2025-09-04T16:24:09.720841672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 4 16:24:09.722084 containerd[1592]: time="2025-09-04T16:24:09.722057686Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:09.724045 containerd[1592]: time="2025-09-04T16:24:09.724023055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:09.724557 containerd[1592]: time="2025-09-04T16:24:09.724534045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 8.926338644s" Sep 4 16:24:09.724557 containerd[1592]: time="2025-09-04T16:24:09.724557146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference 
\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 4 16:24:09.726391 containerd[1592]: time="2025-09-04T16:24:09.726344871Z" level=info msg="CreateContainer within sandbox \"74a6384a3c6e73a5c5f52bd0668d3c32160012d910bbe54ac65411101e5ad12b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 16:24:09.734912 containerd[1592]: time="2025-09-04T16:24:09.734883547Z" level=info msg="Container e15301d049d7472d908d12ded2ed9609bb1ea825e10e00fa40c337006cae7885: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:24:09.743667 containerd[1592]: time="2025-09-04T16:24:09.743631033Z" level=info msg="CreateContainer within sandbox \"74a6384a3c6e73a5c5f52bd0668d3c32160012d910bbe54ac65411101e5ad12b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e15301d049d7472d908d12ded2ed9609bb1ea825e10e00fa40c337006cae7885\"" Sep 4 16:24:09.744025 containerd[1592]: time="2025-09-04T16:24:09.743991138Z" level=info msg="StartContainer for \"e15301d049d7472d908d12ded2ed9609bb1ea825e10e00fa40c337006cae7885\"" Sep 4 16:24:09.745362 containerd[1592]: time="2025-09-04T16:24:09.745337309Z" level=info msg="connecting to shim e15301d049d7472d908d12ded2ed9609bb1ea825e10e00fa40c337006cae7885" address="unix:///run/containerd/s/a0bff368dc69f2e8f9664e5b899083093014c3a43bbb46ec3f642753a54d726b" protocol=ttrpc version=3 Sep 4 16:24:09.774991 systemd[1]: Started cri-containerd-e15301d049d7472d908d12ded2ed9609bb1ea825e10e00fa40c337006cae7885.scope - libcontainer container e15301d049d7472d908d12ded2ed9609bb1ea825e10e00fa40c337006cae7885. 
Sep 4 16:24:09.817416 kubelet[2755]: E0904 16:24:09.817380 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:24:09.819689 containerd[1592]: time="2025-09-04T16:24:09.819637861Z" level=info msg="StartContainer for \"e15301d049d7472d908d12ded2ed9609bb1ea825e10e00fa40c337006cae7885\" returns successfully" Sep 4 16:24:10.820098 systemd[1]: cri-containerd-e15301d049d7472d908d12ded2ed9609bb1ea825e10e00fa40c337006cae7885.scope: Deactivated successfully. Sep 4 16:24:10.820443 systemd[1]: cri-containerd-e15301d049d7472d908d12ded2ed9609bb1ea825e10e00fa40c337006cae7885.scope: Consumed 572ms CPU time, 176.4M memory peak, 3.7M read from disk, 171.3M written to disk. Sep 4 16:24:10.823617 containerd[1592]: time="2025-09-04T16:24:10.823455261Z" level=info msg="received exit event container_id:\"e15301d049d7472d908d12ded2ed9609bb1ea825e10e00fa40c337006cae7885\" id:\"e15301d049d7472d908d12ded2ed9609bb1ea825e10e00fa40c337006cae7885\" pid:3549 exited_at:{seconds:1757003050 nanos:823272187}" Sep 4 16:24:10.823996 containerd[1592]: time="2025-09-04T16:24:10.823686203Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e15301d049d7472d908d12ded2ed9609bb1ea825e10e00fa40c337006cae7885\" id:\"e15301d049d7472d908d12ded2ed9609bb1ea825e10e00fa40c337006cae7885\" pid:3549 exited_at:{seconds:1757003050 nanos:823272187}" Sep 4 16:24:10.839980 kubelet[2755]: I0904 16:24:10.839930 2755 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 4 16:24:10.849189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e15301d049d7472d908d12ded2ed9609bb1ea825e10e00fa40c337006cae7885-rootfs.mount: Deactivated successfully. 
Sep 4 16:24:10.876544 systemd[1]: Created slice kubepods-besteffort-podf11388c6_8919_49a2_ae5a_0eb9194c9913.slice - libcontainer container kubepods-besteffort-podf11388c6_8919_49a2_ae5a_0eb9194c9913.slice. Sep 4 16:24:10.887309 systemd[1]: Created slice kubepods-besteffort-podf2d5c1af_92be_4caf_867d_5ba5192435a8.slice - libcontainer container kubepods-besteffort-podf2d5c1af_92be_4caf_867d_5ba5192435a8.slice. Sep 4 16:24:10.904628 systemd[1]: Created slice kubepods-besteffort-pod31770e90_6b62_4986_968c_c7d212e719b2.slice - libcontainer container kubepods-besteffort-pod31770e90_6b62_4986_968c_c7d212e719b2.slice. Sep 4 16:24:10.910338 systemd[1]: Created slice kubepods-besteffort-pod20cf1824_6aae_436f_ab1d_cbc0a9a52490.slice - libcontainer container kubepods-besteffort-pod20cf1824_6aae_436f_ab1d_cbc0a9a52490.slice. Sep 4 16:24:10.915580 systemd[1]: Created slice kubepods-besteffort-pod73e7c9c3_6db9_4664_8551_5fd14d249fee.slice - libcontainer container kubepods-besteffort-pod73e7c9c3_6db9_4664_8551_5fd14d249fee.slice. Sep 4 16:24:10.920952 systemd[1]: Created slice kubepods-burstable-podb50476ad_7347_42a4_a89a_beeb6add2c7c.slice - libcontainer container kubepods-burstable-podb50476ad_7347_42a4_a89a_beeb6add2c7c.slice. 
Sep 4 16:24:10.925005 kubelet[2755]: I0904 16:24:10.924962 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f11388c6-8919-49a2-ae5a-0eb9194c9913-tigera-ca-bundle\") pod \"calico-kube-controllers-7b587f946-s494h\" (UID: \"f11388c6-8919-49a2-ae5a-0eb9194c9913\") " pod="calico-system/calico-kube-controllers-7b587f946-s494h" Sep 4 16:24:10.925005 kubelet[2755]: I0904 16:24:10.925006 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwmkz\" (UniqueName: \"kubernetes.io/projected/f2d5c1af-92be-4caf-867d-5ba5192435a8-kube-api-access-xwmkz\") pod \"whisker-8497cdd49f-vdlrl\" (UID: \"f2d5c1af-92be-4caf-867d-5ba5192435a8\") " pod="calico-system/whisker-8497cdd49f-vdlrl" Sep 4 16:24:10.925174 kubelet[2755]: I0904 16:24:10.925052 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2d5c1af-92be-4caf-867d-5ba5192435a8-whisker-backend-key-pair\") pod \"whisker-8497cdd49f-vdlrl\" (UID: \"f2d5c1af-92be-4caf-867d-5ba5192435a8\") " pod="calico-system/whisker-8497cdd49f-vdlrl" Sep 4 16:24:10.925174 kubelet[2755]: I0904 16:24:10.925079 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2d5c1af-92be-4caf-867d-5ba5192435a8-whisker-ca-bundle\") pod \"whisker-8497cdd49f-vdlrl\" (UID: \"f2d5c1af-92be-4caf-867d-5ba5192435a8\") " pod="calico-system/whisker-8497cdd49f-vdlrl" Sep 4 16:24:10.925174 kubelet[2755]: I0904 16:24:10.925094 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7m54\" (UniqueName: \"kubernetes.io/projected/f11388c6-8919-49a2-ae5a-0eb9194c9913-kube-api-access-t7m54\") pod 
\"calico-kube-controllers-7b587f946-s494h\" (UID: \"f11388c6-8919-49a2-ae5a-0eb9194c9913\") " pod="calico-system/calico-kube-controllers-7b587f946-s494h" Sep 4 16:24:10.928042 systemd[1]: Created slice kubepods-burstable-pod85dfe534_9386_4bdb_9f8d_60c5f85bac15.slice - libcontainer container kubepods-burstable-pod85dfe534_9386_4bdb_9f8d_60c5f85bac15.slice. Sep 4 16:24:11.026309 kubelet[2755]: I0904 16:24:11.025851 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/73e7c9c3-6db9-4664-8551-5fd14d249fee-calico-apiserver-certs\") pod \"calico-apiserver-8495f75bd6-58hqg\" (UID: \"73e7c9c3-6db9-4664-8551-5fd14d249fee\") " pod="calico-apiserver/calico-apiserver-8495f75bd6-58hqg" Sep 4 16:24:11.026309 kubelet[2755]: I0904 16:24:11.025927 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/31770e90-6b62-4986-968c-c7d212e719b2-goldmane-key-pair\") pod \"goldmane-54d579b49d-rcsbj\" (UID: \"31770e90-6b62-4986-968c-c7d212e719b2\") " pod="calico-system/goldmane-54d579b49d-rcsbj" Sep 4 16:24:11.026309 kubelet[2755]: I0904 16:24:11.025943 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc9bk\" (UniqueName: \"kubernetes.io/projected/31770e90-6b62-4986-968c-c7d212e719b2-kube-api-access-xc9bk\") pod \"goldmane-54d579b49d-rcsbj\" (UID: \"31770e90-6b62-4986-968c-c7d212e719b2\") " pod="calico-system/goldmane-54d579b49d-rcsbj" Sep 4 16:24:11.026309 kubelet[2755]: I0904 16:24:11.025957 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqx4c\" (UniqueName: \"kubernetes.io/projected/73e7c9c3-6db9-4664-8551-5fd14d249fee-kube-api-access-qqx4c\") pod \"calico-apiserver-8495f75bd6-58hqg\" (UID: \"73e7c9c3-6db9-4664-8551-5fd14d249fee\") " 
pod="calico-apiserver/calico-apiserver-8495f75bd6-58hqg" Sep 4 16:24:11.026309 kubelet[2755]: I0904 16:24:11.025972 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85dfe534-9386-4bdb-9f8d-60c5f85bac15-config-volume\") pod \"coredns-668d6bf9bc-96j5s\" (UID: \"85dfe534-9386-4bdb-9f8d-60c5f85bac15\") " pod="kube-system/coredns-668d6bf9bc-96j5s" Sep 4 16:24:11.026846 kubelet[2755]: I0904 16:24:11.025989 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xwg2\" (UniqueName: \"kubernetes.io/projected/20cf1824-6aae-436f-ab1d-cbc0a9a52490-kube-api-access-5xwg2\") pod \"calico-apiserver-8495f75bd6-tmbsc\" (UID: \"20cf1824-6aae-436f-ab1d-cbc0a9a52490\") " pod="calico-apiserver/calico-apiserver-8495f75bd6-tmbsc" Sep 4 16:24:11.026846 kubelet[2755]: I0904 16:24:11.026005 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31770e90-6b62-4986-968c-c7d212e719b2-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-rcsbj\" (UID: \"31770e90-6b62-4986-968c-c7d212e719b2\") " pod="calico-system/goldmane-54d579b49d-rcsbj" Sep 4 16:24:11.026846 kubelet[2755]: I0904 16:24:11.026019 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cccs\" (UniqueName: \"kubernetes.io/projected/b50476ad-7347-42a4-a89a-beeb6add2c7c-kube-api-access-5cccs\") pod \"coredns-668d6bf9bc-6rsww\" (UID: \"b50476ad-7347-42a4-a89a-beeb6add2c7c\") " pod="kube-system/coredns-668d6bf9bc-6rsww" Sep 4 16:24:11.026846 kubelet[2755]: I0904 16:24:11.026051 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbhkm\" (UniqueName: \"kubernetes.io/projected/85dfe534-9386-4bdb-9f8d-60c5f85bac15-kube-api-access-pbhkm\") pod 
\"coredns-668d6bf9bc-96j5s\" (UID: \"85dfe534-9386-4bdb-9f8d-60c5f85bac15\") " pod="kube-system/coredns-668d6bf9bc-96j5s" Sep 4 16:24:11.026846 kubelet[2755]: I0904 16:24:11.026079 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31770e90-6b62-4986-968c-c7d212e719b2-config\") pod \"goldmane-54d579b49d-rcsbj\" (UID: \"31770e90-6b62-4986-968c-c7d212e719b2\") " pod="calico-system/goldmane-54d579b49d-rcsbj" Sep 4 16:24:11.027056 kubelet[2755]: I0904 16:24:11.026110 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/20cf1824-6aae-436f-ab1d-cbc0a9a52490-calico-apiserver-certs\") pod \"calico-apiserver-8495f75bd6-tmbsc\" (UID: \"20cf1824-6aae-436f-ab1d-cbc0a9a52490\") " pod="calico-apiserver/calico-apiserver-8495f75bd6-tmbsc" Sep 4 16:24:11.027056 kubelet[2755]: I0904 16:24:11.026137 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b50476ad-7347-42a4-a89a-beeb6add2c7c-config-volume\") pod \"coredns-668d6bf9bc-6rsww\" (UID: \"b50476ad-7347-42a4-a89a-beeb6add2c7c\") " pod="kube-system/coredns-668d6bf9bc-6rsww" Sep 4 16:24:11.189658 containerd[1592]: time="2025-09-04T16:24:11.189527811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b587f946-s494h,Uid:f11388c6-8919-49a2-ae5a-0eb9194c9913,Namespace:calico-system,Attempt:0,}" Sep 4 16:24:11.197110 containerd[1592]: time="2025-09-04T16:24:11.197056116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8497cdd49f-vdlrl,Uid:f2d5c1af-92be-4caf-867d-5ba5192435a8,Namespace:calico-system,Attempt:0,}" Sep 4 16:24:11.208651 containerd[1592]: time="2025-09-04T16:24:11.208616350Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-54d579b49d-rcsbj,Uid:31770e90-6b62-4986-968c-c7d212e719b2,Namespace:calico-system,Attempt:0,}" Sep 4 16:24:11.213623 containerd[1592]: time="2025-09-04T16:24:11.213582054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8495f75bd6-tmbsc,Uid:20cf1824-6aae-436f-ab1d-cbc0a9a52490,Namespace:calico-apiserver,Attempt:0,}" Sep 4 16:24:11.218268 containerd[1592]: time="2025-09-04T16:24:11.218228284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8495f75bd6-58hqg,Uid:73e7c9c3-6db9-4664-8551-5fd14d249fee,Namespace:calico-apiserver,Attempt:0,}" Sep 4 16:24:11.224711 kubelet[2755]: E0904 16:24:11.224633 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:24:11.225353 containerd[1592]: time="2025-09-04T16:24:11.225312638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rsww,Uid:b50476ad-7347-42a4-a89a-beeb6add2c7c,Namespace:kube-system,Attempt:0,}" Sep 4 16:24:11.230395 kubelet[2755]: E0904 16:24:11.230313 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:24:11.231100 containerd[1592]: time="2025-09-04T16:24:11.231055511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-96j5s,Uid:85dfe534-9386-4bdb-9f8d-60c5f85bac15,Namespace:kube-system,Attempt:0,}" Sep 4 16:24:11.259625 containerd[1592]: time="2025-09-04T16:24:11.259566607Z" level=error msg="Failed to destroy network for sandbox \"5bdb8e99faa27a2e70ff3436ccd5dc167c63598e7ebe790975b348e8ceb02155\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.262848 
containerd[1592]: time="2025-09-04T16:24:11.262785359Z" level=error msg="Failed to destroy network for sandbox \"c59420c903181853f8ef675b975d43497d3e48bb69c8658ef70f65655ea80f48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.495230 containerd[1592]: time="2025-09-04T16:24:11.495098047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8497cdd49f-vdlrl,Uid:f2d5c1af-92be-4caf-867d-5ba5192435a8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bdb8e99faa27a2e70ff3436ccd5dc167c63598e7ebe790975b348e8ceb02155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.495396 kubelet[2755]: E0904 16:24:11.495331 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bdb8e99faa27a2e70ff3436ccd5dc167c63598e7ebe790975b348e8ceb02155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.495448 kubelet[2755]: E0904 16:24:11.495419 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bdb8e99faa27a2e70ff3436ccd5dc167c63598e7ebe790975b348e8ceb02155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8497cdd49f-vdlrl" Sep 4 16:24:11.495486 kubelet[2755]: E0904 16:24:11.495452 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bdb8e99faa27a2e70ff3436ccd5dc167c63598e7ebe790975b348e8ceb02155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8497cdd49f-vdlrl" Sep 4 16:24:11.495891 kubelet[2755]: E0904 16:24:11.495505 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8497cdd49f-vdlrl_calico-system(f2d5c1af-92be-4caf-867d-5ba5192435a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8497cdd49f-vdlrl_calico-system(f2d5c1af-92be-4caf-867d-5ba5192435a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bdb8e99faa27a2e70ff3436ccd5dc167c63598e7ebe790975b348e8ceb02155\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8497cdd49f-vdlrl" podUID="f2d5c1af-92be-4caf-867d-5ba5192435a8" Sep 4 16:24:11.496315 containerd[1592]: time="2025-09-04T16:24:11.496275028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b587f946-s494h,Uid:f11388c6-8919-49a2-ae5a-0eb9194c9913,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c59420c903181853f8ef675b975d43497d3e48bb69c8658ef70f65655ea80f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.496433 kubelet[2755]: E0904 16:24:11.496405 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c59420c903181853f8ef675b975d43497d3e48bb69c8658ef70f65655ea80f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.496480 kubelet[2755]: E0904 16:24:11.496440 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c59420c903181853f8ef675b975d43497d3e48bb69c8658ef70f65655ea80f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b587f946-s494h" Sep 4 16:24:11.496480 kubelet[2755]: E0904 16:24:11.496456 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c59420c903181853f8ef675b975d43497d3e48bb69c8658ef70f65655ea80f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b587f946-s494h" Sep 4 16:24:11.496532 kubelet[2755]: E0904 16:24:11.496481 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b587f946-s494h_calico-system(f11388c6-8919-49a2-ae5a-0eb9194c9913)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b587f946-s494h_calico-system(f11388c6-8919-49a2-ae5a-0eb9194c9913)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c59420c903181853f8ef675b975d43497d3e48bb69c8658ef70f65655ea80f48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b587f946-s494h" podUID="f11388c6-8919-49a2-ae5a-0eb9194c9913" Sep 4 16:24:11.560915 containerd[1592]: time="2025-09-04T16:24:11.560360226Z" level=error msg="Failed to destroy network for sandbox \"3bfc48153b6783c009666249f316453dda4de5ed43d1f8d756a0aab309ffb4fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.562969 containerd[1592]: time="2025-09-04T16:24:11.562853592Z" level=error msg="Failed to destroy network for sandbox \"4322d59685d2926c76fc53a1a5168601d7b285f9af8ca5221e012677d974d3cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.564678 containerd[1592]: time="2025-09-04T16:24:11.564638343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rcsbj,Uid:31770e90-6b62-4986-968c-c7d212e719b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bfc48153b6783c009666249f316453dda4de5ed43d1f8d756a0aab309ffb4fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.565260 kubelet[2755]: E0904 16:24:11.565204 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bfc48153b6783c009666249f316453dda4de5ed43d1f8d756a0aab309ffb4fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.565331 kubelet[2755]: E0904 16:24:11.565290 2755 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bfc48153b6783c009666249f316453dda4de5ed43d1f8d756a0aab309ffb4fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-rcsbj" Sep 4 16:24:11.565360 kubelet[2755]: E0904 16:24:11.565326 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bfc48153b6783c009666249f316453dda4de5ed43d1f8d756a0aab309ffb4fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-rcsbj" Sep 4 16:24:11.565410 kubelet[2755]: E0904 16:24:11.565379 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-rcsbj_calico-system(31770e90-6b62-4986-968c-c7d212e719b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-rcsbj_calico-system(31770e90-6b62-4986-968c-c7d212e719b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bfc48153b6783c009666249f316453dda4de5ed43d1f8d756a0aab309ffb4fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-rcsbj" podUID="31770e90-6b62-4986-968c-c7d212e719b2" Sep 4 16:24:11.565508 containerd[1592]: time="2025-09-04T16:24:11.565483287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8495f75bd6-58hqg,Uid:73e7c9c3-6db9-4664-8551-5fd14d249fee,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4322d59685d2926c76fc53a1a5168601d7b285f9af8ca5221e012677d974d3cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.565800 kubelet[2755]: E0904 16:24:11.565694 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4322d59685d2926c76fc53a1a5168601d7b285f9af8ca5221e012677d974d3cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.565800 kubelet[2755]: E0904 16:24:11.565722 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4322d59685d2926c76fc53a1a5168601d7b285f9af8ca5221e012677d974d3cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8495f75bd6-58hqg" Sep 4 16:24:11.565800 kubelet[2755]: E0904 16:24:11.565746 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4322d59685d2926c76fc53a1a5168601d7b285f9af8ca5221e012677d974d3cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8495f75bd6-58hqg" Sep 4 16:24:11.565907 kubelet[2755]: E0904 16:24:11.565775 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-8495f75bd6-58hqg_calico-apiserver(73e7c9c3-6db9-4664-8551-5fd14d249fee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8495f75bd6-58hqg_calico-apiserver(73e7c9c3-6db9-4664-8551-5fd14d249fee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4322d59685d2926c76fc53a1a5168601d7b285f9af8ca5221e012677d974d3cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8495f75bd6-58hqg" podUID="73e7c9c3-6db9-4664-8551-5fd14d249fee" Sep 4 16:24:11.578274 containerd[1592]: time="2025-09-04T16:24:11.578226171Z" level=error msg="Failed to destroy network for sandbox \"cad0196ec70ac5fb2dbee95bbec6a35b3fc1015ffe70055ff537460cefe09c3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.579613 containerd[1592]: time="2025-09-04T16:24:11.579587278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8495f75bd6-tmbsc,Uid:20cf1824-6aae-436f-ab1d-cbc0a9a52490,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad0196ec70ac5fb2dbee95bbec6a35b3fc1015ffe70055ff537460cefe09c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.579961 kubelet[2755]: E0904 16:24:11.579910 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad0196ec70ac5fb2dbee95bbec6a35b3fc1015ffe70055ff537460cefe09c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.580035 kubelet[2755]: E0904 16:24:11.579977 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad0196ec70ac5fb2dbee95bbec6a35b3fc1015ffe70055ff537460cefe09c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8495f75bd6-tmbsc" Sep 4 16:24:11.580035 kubelet[2755]: E0904 16:24:11.580008 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cad0196ec70ac5fb2dbee95bbec6a35b3fc1015ffe70055ff537460cefe09c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8495f75bd6-tmbsc" Sep 4 16:24:11.580096 kubelet[2755]: E0904 16:24:11.580072 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8495f75bd6-tmbsc_calico-apiserver(20cf1824-6aae-436f-ab1d-cbc0a9a52490)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8495f75bd6-tmbsc_calico-apiserver(20cf1824-6aae-436f-ab1d-cbc0a9a52490)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cad0196ec70ac5fb2dbee95bbec6a35b3fc1015ffe70055ff537460cefe09c3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8495f75bd6-tmbsc" podUID="20cf1824-6aae-436f-ab1d-cbc0a9a52490" Sep 4 16:24:11.587317 containerd[1592]: 
time="2025-09-04T16:24:11.587202604Z" level=error msg="Failed to destroy network for sandbox \"a698bdd9b4be6b13e9707d964988e9a3c5721f9f24e7bb592edf76feb13115fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.588734 containerd[1592]: time="2025-09-04T16:24:11.588682938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rsww,Uid:b50476ad-7347-42a4-a89a-beeb6add2c7c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a698bdd9b4be6b13e9707d964988e9a3c5721f9f24e7bb592edf76feb13115fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.589005 kubelet[2755]: E0904 16:24:11.588957 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a698bdd9b4be6b13e9707d964988e9a3c5721f9f24e7bb592edf76feb13115fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.589102 kubelet[2755]: E0904 16:24:11.589025 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a698bdd9b4be6b13e9707d964988e9a3c5721f9f24e7bb592edf76feb13115fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6rsww" Sep 4 16:24:11.589102 kubelet[2755]: E0904 16:24:11.589059 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"a698bdd9b4be6b13e9707d964988e9a3c5721f9f24e7bb592edf76feb13115fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6rsww" Sep 4 16:24:11.589151 kubelet[2755]: E0904 16:24:11.589112 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6rsww_kube-system(b50476ad-7347-42a4-a89a-beeb6add2c7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6rsww_kube-system(b50476ad-7347-42a4-a89a-beeb6add2c7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a698bdd9b4be6b13e9707d964988e9a3c5721f9f24e7bb592edf76feb13115fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6rsww" podUID="b50476ad-7347-42a4-a89a-beeb6add2c7c" Sep 4 16:24:11.592942 containerd[1592]: time="2025-09-04T16:24:11.592903550Z" level=error msg="Failed to destroy network for sandbox \"0208f89d932eee6bf3b18fd72bf9eb10a0a062c399d9c1469f9761d2f42cac88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.594038 containerd[1592]: time="2025-09-04T16:24:11.594006124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-96j5s,Uid:85dfe534-9386-4bdb-9f8d-60c5f85bac15,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0208f89d932eee6bf3b18fd72bf9eb10a0a062c399d9c1469f9761d2f42cac88\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.594200 kubelet[2755]: E0904 16:24:11.594176 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0208f89d932eee6bf3b18fd72bf9eb10a0a062c399d9c1469f9761d2f42cac88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.594255 kubelet[2755]: E0904 16:24:11.594215 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0208f89d932eee6bf3b18fd72bf9eb10a0a062c399d9c1469f9761d2f42cac88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-96j5s" Sep 4 16:24:11.594255 kubelet[2755]: E0904 16:24:11.594234 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0208f89d932eee6bf3b18fd72bf9eb10a0a062c399d9c1469f9761d2f42cac88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-96j5s" Sep 4 16:24:11.594330 kubelet[2755]: E0904 16:24:11.594267 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-96j5s_kube-system(85dfe534-9386-4bdb-9f8d-60c5f85bac15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-96j5s_kube-system(85dfe534-9386-4bdb-9f8d-60c5f85bac15)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"0208f89d932eee6bf3b18fd72bf9eb10a0a062c399d9c1469f9761d2f42cac88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-96j5s" podUID="85dfe534-9386-4bdb-9f8d-60c5f85bac15" Sep 4 16:24:11.725268 systemd[1]: Created slice kubepods-besteffort-pod2c4deb78_6061_407f_9164_fbcdb204310d.slice - libcontainer container kubepods-besteffort-pod2c4deb78_6061_407f_9164_fbcdb204310d.slice. Sep 4 16:24:11.727957 containerd[1592]: time="2025-09-04T16:24:11.727914766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6tl25,Uid:2c4deb78-6061-407f-9164-fbcdb204310d,Namespace:calico-system,Attempt:0,}" Sep 4 16:24:11.818803 containerd[1592]: time="2025-09-04T16:24:11.818742869Z" level=error msg="Failed to destroy network for sandbox \"ef7e1c62950060cbaed0022bd744667e8fc60cdf1cdf4cc5b70e13a9a8b89bf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.820080 containerd[1592]: time="2025-09-04T16:24:11.820042605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6tl25,Uid:2c4deb78-6061-407f-9164-fbcdb204310d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef7e1c62950060cbaed0022bd744667e8fc60cdf1cdf4cc5b70e13a9a8b89bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.820336 kubelet[2755]: E0904 16:24:11.820289 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ef7e1c62950060cbaed0022bd744667e8fc60cdf1cdf4cc5b70e13a9a8b89bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 16:24:11.820419 kubelet[2755]: E0904 16:24:11.820365 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef7e1c62950060cbaed0022bd744667e8fc60cdf1cdf4cc5b70e13a9a8b89bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6tl25" Sep 4 16:24:11.820419 kubelet[2755]: E0904 16:24:11.820394 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef7e1c62950060cbaed0022bd744667e8fc60cdf1cdf4cc5b70e13a9a8b89bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6tl25" Sep 4 16:24:11.820498 kubelet[2755]: E0904 16:24:11.820462 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6tl25_calico-system(2c4deb78-6061-407f-9164-fbcdb204310d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6tl25_calico-system(2c4deb78-6061-407f-9164-fbcdb204310d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef7e1c62950060cbaed0022bd744667e8fc60cdf1cdf4cc5b70e13a9a8b89bf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6tl25" 
podUID="2c4deb78-6061-407f-9164-fbcdb204310d" Sep 4 16:24:11.831052 containerd[1592]: time="2025-09-04T16:24:11.831005907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 4 16:24:14.727117 systemd[1]: Started sshd@8-10.0.0.77:22-10.0.0.1:39376.service - OpenSSH per-connection server daemon (10.0.0.1:39376). Sep 4 16:24:14.800605 sshd[3853]: Accepted publickey for core from 10.0.0.1 port 39376 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:24:14.802427 sshd-session[3853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:24:14.807149 systemd-logind[1574]: New session 8 of user core. Sep 4 16:24:14.814014 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 16:24:14.926354 sshd[3856]: Connection closed by 10.0.0.1 port 39376 Sep 4 16:24:14.926670 sshd-session[3853]: pam_unix(sshd:session): session closed for user core Sep 4 16:24:14.931610 systemd[1]: sshd@8-10.0.0.77:22-10.0.0.1:39376.service: Deactivated successfully. Sep 4 16:24:14.933517 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 16:24:14.934565 systemd-logind[1574]: Session 8 logged out. Waiting for processes to exit. Sep 4 16:24:14.936068 systemd-logind[1574]: Removed session 8. Sep 4 16:24:19.940509 systemd[1]: Started sshd@9-10.0.0.77:22-10.0.0.1:42426.service - OpenSSH per-connection server daemon (10.0.0.1:42426). Sep 4 16:24:20.012776 sshd[3874]: Accepted publickey for core from 10.0.0.1 port 42426 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:24:20.014641 sshd-session[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:24:20.020014 systemd-logind[1574]: New session 9 of user core. Sep 4 16:24:20.032991 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 4 16:24:20.192256 sshd[3877]: Connection closed by 10.0.0.1 port 42426 Sep 4 16:24:20.194564 sshd-session[3874]: pam_unix(sshd:session): session closed for user core Sep 4 16:24:20.200683 systemd[1]: sshd@9-10.0.0.77:22-10.0.0.1:42426.service: Deactivated successfully. Sep 4 16:24:20.203134 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 16:24:20.205260 systemd-logind[1574]: Session 9 logged out. Waiting for processes to exit. Sep 4 16:24:20.206465 systemd-logind[1574]: Removed session 9. Sep 4 16:24:20.850045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3847767745.mount: Deactivated successfully. Sep 4 16:24:22.016983 containerd[1592]: time="2025-09-04T16:24:22.016807412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:22.018064 containerd[1592]: time="2025-09-04T16:24:22.018011872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 4 16:24:22.019495 containerd[1592]: time="2025-09-04T16:24:22.019459633Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:22.022810 containerd[1592]: time="2025-09-04T16:24:22.022529306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:22.023081 containerd[1592]: time="2025-09-04T16:24:22.023057214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size 
\"157078201\" in 10.191997689s" Sep 4 16:24:22.023139 containerd[1592]: time="2025-09-04T16:24:22.023087140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 4 16:24:22.035464 containerd[1592]: time="2025-09-04T16:24:22.035404578Z" level=info msg="CreateContainer within sandbox \"74a6384a3c6e73a5c5f52bd0668d3c32160012d910bbe54ac65411101e5ad12b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 16:24:22.054421 containerd[1592]: time="2025-09-04T16:24:22.054293895Z" level=info msg="Container e5933388e8a5c5ecda2c1039e7e8d5c7c685e75217e0328e016d41826fd359e1: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:24:22.076171 containerd[1592]: time="2025-09-04T16:24:22.076061121Z" level=info msg="CreateContainer within sandbox \"74a6384a3c6e73a5c5f52bd0668d3c32160012d910bbe54ac65411101e5ad12b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e5933388e8a5c5ecda2c1039e7e8d5c7c685e75217e0328e016d41826fd359e1\"" Sep 4 16:24:22.076622 containerd[1592]: time="2025-09-04T16:24:22.076588498Z" level=info msg="StartContainer for \"e5933388e8a5c5ecda2c1039e7e8d5c7c685e75217e0328e016d41826fd359e1\"" Sep 4 16:24:22.078162 containerd[1592]: time="2025-09-04T16:24:22.078136404Z" level=info msg="connecting to shim e5933388e8a5c5ecda2c1039e7e8d5c7c685e75217e0328e016d41826fd359e1" address="unix:///run/containerd/s/a0bff368dc69f2e8f9664e5b899083093014c3a43bbb46ec3f642753a54d726b" protocol=ttrpc version=3 Sep 4 16:24:22.099009 systemd[1]: Started cri-containerd-e5933388e8a5c5ecda2c1039e7e8d5c7c685e75217e0328e016d41826fd359e1.scope - libcontainer container e5933388e8a5c5ecda2c1039e7e8d5c7c685e75217e0328e016d41826fd359e1. 
Sep 4 16:24:22.238979 containerd[1592]: time="2025-09-04T16:24:22.238918733Z" level=info msg="StartContainer for \"e5933388e8a5c5ecda2c1039e7e8d5c7c685e75217e0328e016d41826fd359e1\" returns successfully" Sep 4 16:24:22.256630 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 16:24:22.257473 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 4 16:24:22.393967 kubelet[2755]: I0904 16:24:22.393714 2755 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2d5c1af-92be-4caf-867d-5ba5192435a8-whisker-ca-bundle\") pod \"f2d5c1af-92be-4caf-867d-5ba5192435a8\" (UID: \"f2d5c1af-92be-4caf-867d-5ba5192435a8\") " Sep 4 16:24:22.393967 kubelet[2755]: I0904 16:24:22.393770 2755 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2d5c1af-92be-4caf-867d-5ba5192435a8-whisker-backend-key-pair\") pod \"f2d5c1af-92be-4caf-867d-5ba5192435a8\" (UID: \"f2d5c1af-92be-4caf-867d-5ba5192435a8\") " Sep 4 16:24:22.393967 kubelet[2755]: I0904 16:24:22.393790 2755 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwmkz\" (UniqueName: \"kubernetes.io/projected/f2d5c1af-92be-4caf-867d-5ba5192435a8-kube-api-access-xwmkz\") pod \"f2d5c1af-92be-4caf-867d-5ba5192435a8\" (UID: \"f2d5c1af-92be-4caf-867d-5ba5192435a8\") " Sep 4 16:24:22.396128 kubelet[2755]: I0904 16:24:22.395609 2755 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2d5c1af-92be-4caf-867d-5ba5192435a8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f2d5c1af-92be-4caf-867d-5ba5192435a8" (UID: "f2d5c1af-92be-4caf-867d-5ba5192435a8"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 16:24:22.398847 kubelet[2755]: I0904 16:24:22.398815 2755 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d5c1af-92be-4caf-867d-5ba5192435a8-kube-api-access-xwmkz" (OuterVolumeSpecName: "kube-api-access-xwmkz") pod "f2d5c1af-92be-4caf-867d-5ba5192435a8" (UID: "f2d5c1af-92be-4caf-867d-5ba5192435a8"). InnerVolumeSpecName "kube-api-access-xwmkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 16:24:22.401356 kubelet[2755]: I0904 16:24:22.401323 2755 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2d5c1af-92be-4caf-867d-5ba5192435a8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f2d5c1af-92be-4caf-867d-5ba5192435a8" (UID: "f2d5c1af-92be-4caf-867d-5ba5192435a8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 16:24:22.494803 kubelet[2755]: I0904 16:24:22.494761 2755 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2d5c1af-92be-4caf-867d-5ba5192435a8-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 4 16:24:22.494803 kubelet[2755]: I0904 16:24:22.494795 2755 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2d5c1af-92be-4caf-867d-5ba5192435a8-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 4 16:24:22.494803 kubelet[2755]: I0904 16:24:22.494804 2755 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xwmkz\" (UniqueName: \"kubernetes.io/projected/f2d5c1af-92be-4caf-867d-5ba5192435a8-kube-api-access-xwmkz\") on node \"localhost\" DevicePath \"\"" Sep 4 16:24:22.729394 systemd[1]: Removed slice kubepods-besteffort-podf2d5c1af_92be_4caf_867d_5ba5192435a8.slice - libcontainer container 
kubepods-besteffort-podf2d5c1af_92be_4caf_867d_5ba5192435a8.slice. Sep 4 16:24:22.887201 kubelet[2755]: I0904 16:24:22.887126 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l6jk9" podStartSLOduration=1.94588332 podStartE2EDuration="31.887105377s" podCreationTimestamp="2025-09-04 16:23:51 +0000 UTC" firstStartedPulling="2025-09-04 16:23:52.08284123 +0000 UTC m=+21.455148532" lastFinishedPulling="2025-09-04 16:24:22.024063287 +0000 UTC m=+51.396370589" observedRunningTime="2025-09-04 16:24:22.876242551 +0000 UTC m=+52.248549853" watchObservedRunningTime="2025-09-04 16:24:22.887105377 +0000 UTC m=+52.259412679" Sep 4 16:24:22.926485 systemd[1]: Created slice kubepods-besteffort-pod76d95da6_007d_4aa1_a6f2_2266e6b3793d.slice - libcontainer container kubepods-besteffort-pod76d95da6_007d_4aa1_a6f2_2266e6b3793d.slice. Sep 4 16:24:22.997492 kubelet[2755]: I0904 16:24:22.997347 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkxdm\" (UniqueName: \"kubernetes.io/projected/76d95da6-007d-4aa1-a6f2-2266e6b3793d-kube-api-access-qkxdm\") pod \"whisker-7cd8b6f4f4-fdk6g\" (UID: \"76d95da6-007d-4aa1-a6f2-2266e6b3793d\") " pod="calico-system/whisker-7cd8b6f4f4-fdk6g" Sep 4 16:24:22.997492 kubelet[2755]: I0904 16:24:22.997399 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76d95da6-007d-4aa1-a6f2-2266e6b3793d-whisker-ca-bundle\") pod \"whisker-7cd8b6f4f4-fdk6g\" (UID: \"76d95da6-007d-4aa1-a6f2-2266e6b3793d\") " pod="calico-system/whisker-7cd8b6f4f4-fdk6g" Sep 4 16:24:22.997492 kubelet[2755]: I0904 16:24:22.997456 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76d95da6-007d-4aa1-a6f2-2266e6b3793d-whisker-backend-key-pair\") pod 
\"whisker-7cd8b6f4f4-fdk6g\" (UID: \"76d95da6-007d-4aa1-a6f2-2266e6b3793d\") " pod="calico-system/whisker-7cd8b6f4f4-fdk6g" Sep 4 16:24:23.029640 systemd[1]: var-lib-kubelet-pods-f2d5c1af\x2d92be\x2d4caf\x2d867d\x2d5ba5192435a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxwmkz.mount: Deactivated successfully. Sep 4 16:24:23.029772 systemd[1]: var-lib-kubelet-pods-f2d5c1af\x2d92be\x2d4caf\x2d867d\x2d5ba5192435a8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 4 16:24:23.229548 containerd[1592]: time="2025-09-04T16:24:23.229494213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cd8b6f4f4-fdk6g,Uid:76d95da6-007d-4aa1-a6f2-2266e6b3793d,Namespace:calico-system,Attempt:0,}" Sep 4 16:24:23.602039 systemd-networkd[1478]: cali9a66c63763e: Link UP Sep 4 16:24:23.602254 systemd-networkd[1478]: cali9a66c63763e: Gained carrier Sep 4 16:24:23.629083 containerd[1592]: 2025-09-04 16:24:23.434 [INFO][3961] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 16:24:23.629083 containerd[1592]: 2025-09-04 16:24:23.452 [INFO][3961] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7cd8b6f4f4--fdk6g-eth0 whisker-7cd8b6f4f4- calico-system 76d95da6-007d-4aa1-a6f2-2266e6b3793d 1030 0 2025-09-04 16:24:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7cd8b6f4f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7cd8b6f4f4-fdk6g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9a66c63763e [] [] }} ContainerID="72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" Namespace="calico-system" Pod="whisker-7cd8b6f4f4-fdk6g" WorkloadEndpoint="localhost-k8s-whisker--7cd8b6f4f4--fdk6g-" Sep 4 16:24:23.629083 containerd[1592]: 2025-09-04 
16:24:23.453 [INFO][3961] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" Namespace="calico-system" Pod="whisker-7cd8b6f4f4-fdk6g" WorkloadEndpoint="localhost-k8s-whisker--7cd8b6f4f4--fdk6g-eth0" Sep 4 16:24:23.629083 containerd[1592]: 2025-09-04 16:24:23.539 [INFO][3975] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" HandleID="k8s-pod-network.72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" Workload="localhost-k8s-whisker--7cd8b6f4f4--fdk6g-eth0" Sep 4 16:24:23.629414 containerd[1592]: 2025-09-04 16:24:23.539 [INFO][3975] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" HandleID="k8s-pod-network.72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" Workload="localhost-k8s-whisker--7cd8b6f4f4--fdk6g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123a40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7cd8b6f4f4-fdk6g", "timestamp":"2025-09-04 16:24:23.539064441 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:24:23.629414 containerd[1592]: 2025-09-04 16:24:23.539 [INFO][3975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 16:24:23.629414 containerd[1592]: 2025-09-04 16:24:23.540 [INFO][3975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 16:24:23.629414 containerd[1592]: 2025-09-04 16:24:23.541 [INFO][3975] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:24:23.629414 containerd[1592]: 2025-09-04 16:24:23.550 [INFO][3975] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" host="localhost" Sep 4 16:24:23.629414 containerd[1592]: 2025-09-04 16:24:23.559 [INFO][3975] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:24:23.629414 containerd[1592]: 2025-09-04 16:24:23.564 [INFO][3975] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:24:23.629414 containerd[1592]: 2025-09-04 16:24:23.566 [INFO][3975] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:23.629414 containerd[1592]: 2025-09-04 16:24:23.568 [INFO][3975] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:23.629414 containerd[1592]: 2025-09-04 16:24:23.568 [INFO][3975] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" host="localhost" Sep 4 16:24:23.629704 containerd[1592]: 2025-09-04 16:24:23.569 [INFO][3975] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a Sep 4 16:24:23.629704 containerd[1592]: 2025-09-04 16:24:23.574 [INFO][3975] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" host="localhost" Sep 4 16:24:23.629704 containerd[1592]: 2025-09-04 16:24:23.580 [INFO][3975] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" host="localhost" Sep 4 16:24:23.629704 containerd[1592]: 2025-09-04 16:24:23.580 [INFO][3975] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" host="localhost" Sep 4 16:24:23.629704 containerd[1592]: 2025-09-04 16:24:23.580 [INFO][3975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 16:24:23.629704 containerd[1592]: 2025-09-04 16:24:23.580 [INFO][3975] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" HandleID="k8s-pod-network.72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" Workload="localhost-k8s-whisker--7cd8b6f4f4--fdk6g-eth0" Sep 4 16:24:23.630165 containerd[1592]: 2025-09-04 16:24:23.586 [INFO][3961] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" Namespace="calico-system" Pod="whisker-7cd8b6f4f4-fdk6g" WorkloadEndpoint="localhost-k8s-whisker--7cd8b6f4f4--fdk6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7cd8b6f4f4--fdk6g-eth0", GenerateName:"whisker-7cd8b6f4f4-", Namespace:"calico-system", SelfLink:"", UID:"76d95da6-007d-4aa1-a6f2-2266e6b3793d", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cd8b6f4f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7cd8b6f4f4-fdk6g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9a66c63763e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:23.630165 containerd[1592]: 2025-09-04 16:24:23.586 [INFO][3961] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" Namespace="calico-system" Pod="whisker-7cd8b6f4f4-fdk6g" WorkloadEndpoint="localhost-k8s-whisker--7cd8b6f4f4--fdk6g-eth0" Sep 4 16:24:23.632584 containerd[1592]: 2025-09-04 16:24:23.586 [INFO][3961] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a66c63763e ContainerID="72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" Namespace="calico-system" Pod="whisker-7cd8b6f4f4-fdk6g" WorkloadEndpoint="localhost-k8s-whisker--7cd8b6f4f4--fdk6g-eth0" Sep 4 16:24:23.632584 containerd[1592]: 2025-09-04 16:24:23.604 [INFO][3961] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" Namespace="calico-system" Pod="whisker-7cd8b6f4f4-fdk6g" WorkloadEndpoint="localhost-k8s-whisker--7cd8b6f4f4--fdk6g-eth0" Sep 4 16:24:23.632646 containerd[1592]: 2025-09-04 16:24:23.605 [INFO][3961] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" Namespace="calico-system" Pod="whisker-7cd8b6f4f4-fdk6g" 
WorkloadEndpoint="localhost-k8s-whisker--7cd8b6f4f4--fdk6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7cd8b6f4f4--fdk6g-eth0", GenerateName:"whisker-7cd8b6f4f4-", Namespace:"calico-system", SelfLink:"", UID:"76d95da6-007d-4aa1-a6f2-2266e6b3793d", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cd8b6f4f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a", Pod:"whisker-7cd8b6f4f4-fdk6g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9a66c63763e", MAC:"ae:dd:f4:7c:9a:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:23.632730 containerd[1592]: 2025-09-04 16:24:23.617 [INFO][3961] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" Namespace="calico-system" Pod="whisker-7cd8b6f4f4-fdk6g" WorkloadEndpoint="localhost-k8s-whisker--7cd8b6f4f4--fdk6g-eth0" Sep 4 16:24:23.867982 containerd[1592]: time="2025-09-04T16:24:23.867844136Z" level=info msg="connecting to shim 
72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a" address="unix:///run/containerd/s/2fd7e8231fe5b9eda275f6c454ec3e4c19fb39e7d3265b3be29f4b25dd112b25" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:24:23.901979 systemd[1]: Started cri-containerd-72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a.scope - libcontainer container 72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a. Sep 4 16:24:23.923114 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:24:24.000503 containerd[1592]: time="2025-09-04T16:24:24.000454055Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5933388e8a5c5ecda2c1039e7e8d5c7c685e75217e0328e016d41826fd359e1\" id:\"c21fa71430fcbfc74fd8264dc4f80ee18aa76f09701d01953f3c52dde9a2342e\" pid:4169 exit_status:1 exited_at:{seconds:1757003064 nanos:34147}" Sep 4 16:24:24.026587 systemd-networkd[1478]: vxlan.calico: Link UP Sep 4 16:24:24.026598 systemd-networkd[1478]: vxlan.calico: Gained carrier Sep 4 16:24:24.092034 containerd[1592]: time="2025-09-04T16:24:24.091957473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cd8b6f4f4-fdk6g,Uid:76d95da6-007d-4aa1-a6f2-2266e6b3793d,Namespace:calico-system,Attempt:0,} returns sandbox id \"72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a\"" Sep 4 16:24:24.098170 containerd[1592]: time="2025-09-04T16:24:24.097181201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 4 16:24:24.719476 kubelet[2755]: E0904 16:24:24.719438 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:24:24.719476 kubelet[2755]: E0904 16:24:24.719477 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 4 16:24:24.720228 containerd[1592]: time="2025-09-04T16:24:24.720174088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rsww,Uid:b50476ad-7347-42a4-a89a-beeb6add2c7c,Namespace:kube-system,Attempt:0,}" Sep 4 16:24:24.720924 containerd[1592]: time="2025-09-04T16:24:24.720508127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-96j5s,Uid:85dfe534-9386-4bdb-9f8d-60c5f85bac15,Namespace:kube-system,Attempt:0,}" Sep 4 16:24:24.721641 kubelet[2755]: I0904 16:24:24.721620 2755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2d5c1af-92be-4caf-867d-5ba5192435a8" path="/var/lib/kubelet/pods/f2d5c1af-92be-4caf-867d-5ba5192435a8/volumes" Sep 4 16:24:24.816187 systemd-networkd[1478]: cali99f2fa24bca: Link UP Sep 4 16:24:24.817770 systemd-networkd[1478]: cali99f2fa24bca: Gained carrier Sep 4 16:24:24.829803 containerd[1592]: 2025-09-04 16:24:24.753 [INFO][4259] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--6rsww-eth0 coredns-668d6bf9bc- kube-system b50476ad-7347-42a4-a89a-beeb6add2c7c 900 0 2025-09-04 16:23:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-6rsww eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali99f2fa24bca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rsww" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rsww-" Sep 4 16:24:24.829803 containerd[1592]: 2025-09-04 16:24:24.753 [INFO][4259] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-6rsww" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rsww-eth0" Sep 4 16:24:24.829803 containerd[1592]: 2025-09-04 16:24:24.781 [INFO][4285] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" HandleID="k8s-pod-network.78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" Workload="localhost-k8s-coredns--668d6bf9bc--6rsww-eth0" Sep 4 16:24:24.830047 containerd[1592]: 2025-09-04 16:24:24.781 [INFO][4285] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" HandleID="k8s-pod-network.78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" Workload="localhost-k8s-coredns--668d6bf9bc--6rsww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c76c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-6rsww", "timestamp":"2025-09-04 16:24:24.781636052 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:24:24.830047 containerd[1592]: 2025-09-04 16:24:24.781 [INFO][4285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 16:24:24.830047 containerd[1592]: 2025-09-04 16:24:24.782 [INFO][4285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 16:24:24.830047 containerd[1592]: 2025-09-04 16:24:24.782 [INFO][4285] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:24:24.830047 containerd[1592]: 2025-09-04 16:24:24.791 [INFO][4285] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" host="localhost" Sep 4 16:24:24.830047 containerd[1592]: 2025-09-04 16:24:24.795 [INFO][4285] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:24:24.830047 containerd[1592]: 2025-09-04 16:24:24.798 [INFO][4285] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:24:24.830047 containerd[1592]: 2025-09-04 16:24:24.799 [INFO][4285] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:24.830047 containerd[1592]: 2025-09-04 16:24:24.801 [INFO][4285] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:24.830047 containerd[1592]: 2025-09-04 16:24:24.801 [INFO][4285] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" host="localhost" Sep 4 16:24:24.830264 containerd[1592]: 2025-09-04 16:24:24.802 [INFO][4285] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4 Sep 4 16:24:24.830264 containerd[1592]: 2025-09-04 16:24:24.806 [INFO][4285] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" host="localhost" Sep 4 16:24:24.830264 containerd[1592]: 2025-09-04 16:24:24.810 [INFO][4285] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" host="localhost" Sep 4 16:24:24.830264 containerd[1592]: 2025-09-04 16:24:24.810 [INFO][4285] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" host="localhost" Sep 4 16:24:24.830264 containerd[1592]: 2025-09-04 16:24:24.810 [INFO][4285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 16:24:24.830264 containerd[1592]: 2025-09-04 16:24:24.810 [INFO][4285] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" HandleID="k8s-pod-network.78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" Workload="localhost-k8s-coredns--668d6bf9bc--6rsww-eth0" Sep 4 16:24:24.830373 containerd[1592]: 2025-09-04 16:24:24.813 [INFO][4259] cni-plugin/k8s.go 418: Populated endpoint ContainerID="78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rsww" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rsww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6rsww-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b50476ad-7347-42a4-a89a-beeb6add2c7c", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 23, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-6rsww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99f2fa24bca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:24.830469 containerd[1592]: 2025-09-04 16:24:24.813 [INFO][4259] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rsww" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rsww-eth0" Sep 4 16:24:24.830469 containerd[1592]: 2025-09-04 16:24:24.813 [INFO][4259] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99f2fa24bca ContainerID="78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rsww" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rsww-eth0" Sep 4 16:24:24.830469 containerd[1592]: 2025-09-04 16:24:24.818 [INFO][4259] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rsww" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rsww-eth0" Sep 4 16:24:24.830533 containerd[1592]: 2025-09-04 16:24:24.818 [INFO][4259] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rsww" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rsww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6rsww-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b50476ad-7347-42a4-a89a-beeb6add2c7c", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 23, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4", Pod:"coredns-668d6bf9bc-6rsww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99f2fa24bca", MAC:"de:2a:52:43:f1:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:24.830533 containerd[1592]: 2025-09-04 16:24:24.826 [INFO][4259] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rsww" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6rsww-eth0" Sep 4 16:24:24.853130 containerd[1592]: time="2025-09-04T16:24:24.852635215Z" level=info msg="connecting to shim 78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4" address="unix:///run/containerd/s/42f121be07c03333e4880b98132ba1fb02c75ad2ad2be724c9ebe613189fe935" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:24:24.878002 systemd[1]: Started cri-containerd-78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4.scope - libcontainer container 78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4. 
Sep 4 16:24:24.892085 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:24:24.936183 containerd[1592]: time="2025-09-04T16:24:24.936144698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rsww,Uid:b50476ad-7347-42a4-a89a-beeb6add2c7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4\"" Sep 4 16:24:24.937280 kubelet[2755]: E0904 16:24:24.937256 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:24:24.942477 containerd[1592]: time="2025-09-04T16:24:24.942416259Z" level=info msg="CreateContainer within sandbox \"78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 16:24:24.946401 systemd-networkd[1478]: cali0785a48f7cf: Link UP Sep 4 16:24:24.946591 systemd-networkd[1478]: cali0785a48f7cf: Gained carrier Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.764 [INFO][4269] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--96j5s-eth0 coredns-668d6bf9bc- kube-system 85dfe534-9386-4bdb-9f8d-60c5f85bac15 901 0 2025-09-04 16:23:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-96j5s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0785a48f7cf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-96j5s" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96j5s-" Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.764 [INFO][4269] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-96j5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96j5s-eth0" Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.791 [INFO][4294] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" HandleID="k8s-pod-network.61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" Workload="localhost-k8s-coredns--668d6bf9bc--96j5s-eth0" Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.792 [INFO][4294] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" HandleID="k8s-pod-network.61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" Workload="localhost-k8s-coredns--668d6bf9bc--96j5s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-96j5s", "timestamp":"2025-09-04 16:24:24.791341502 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.792 [INFO][4294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.810 [INFO][4294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.810 [INFO][4294] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.893 [INFO][4294] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" host="localhost" Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.903 [INFO][4294] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.908 [INFO][4294] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.910 [INFO][4294] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.912 [INFO][4294] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.912 [INFO][4294] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" host="localhost" Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.914 [INFO][4294] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8 Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.926 [INFO][4294] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" host="localhost" Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.933 [INFO][4294] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" host="localhost" Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.934 [INFO][4294] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" host="localhost" Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.934 [INFO][4294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 16:24:24.961601 containerd[1592]: 2025-09-04 16:24:24.934 [INFO][4294] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" HandleID="k8s-pod-network.61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" Workload="localhost-k8s-coredns--668d6bf9bc--96j5s-eth0" Sep 4 16:24:24.962178 containerd[1592]: 2025-09-04 16:24:24.939 [INFO][4269] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-96j5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96j5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--96j5s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"85dfe534-9386-4bdb-9f8d-60c5f85bac15", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 23, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-96j5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0785a48f7cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:24.962178 containerd[1592]: 2025-09-04 16:24:24.940 [INFO][4269] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-96j5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96j5s-eth0" Sep 4 16:24:24.962178 containerd[1592]: 2025-09-04 16:24:24.940 [INFO][4269] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0785a48f7cf ContainerID="61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-96j5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96j5s-eth0" Sep 4 16:24:24.962178 containerd[1592]: 2025-09-04 16:24:24.944 [INFO][4269] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-96j5s" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96j5s-eth0" Sep 4 16:24:24.962178 containerd[1592]: 2025-09-04 16:24:24.945 [INFO][4269] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-96j5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96j5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--96j5s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"85dfe534-9386-4bdb-9f8d-60c5f85bac15", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 23, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8", Pod:"coredns-668d6bf9bc-96j5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0785a48f7cf", MAC:"e6:77:9a:f8:dc:06", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:24.962178 containerd[1592]: 2025-09-04 16:24:24.955 [INFO][4269] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-96j5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96j5s-eth0" Sep 4 16:24:24.963021 containerd[1592]: time="2025-09-04T16:24:24.961568468Z" level=info msg="Container 0c2dcfb40611af2c62d40463ca860251704d3d4e31c050568abc8b374576960c: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:24:24.968127 containerd[1592]: time="2025-09-04T16:24:24.968087037Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5933388e8a5c5ecda2c1039e7e8d5c7c685e75217e0328e016d41826fd359e1\" id:\"b2b1a431ec58820ea59ca470bd2b9ade0753631b039860b6d8193e0ba380120f\" pid:4360 exit_status:1 exited_at:{seconds:1757003064 nanos:967832275}" Sep 4 16:24:24.974223 containerd[1592]: time="2025-09-04T16:24:24.973941734Z" level=info msg="CreateContainer within sandbox \"78e2687a295480540c57cf0d340719036ceab299aee16d576c42980c004491d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c2dcfb40611af2c62d40463ca860251704d3d4e31c050568abc8b374576960c\"" Sep 4 16:24:24.975637 containerd[1592]: time="2025-09-04T16:24:24.975283813Z" level=info msg="StartContainer for \"0c2dcfb40611af2c62d40463ca860251704d3d4e31c050568abc8b374576960c\"" Sep 4 16:24:24.976256 containerd[1592]: time="2025-09-04T16:24:24.976220050Z" level=info msg="connecting to shim 0c2dcfb40611af2c62d40463ca860251704d3d4e31c050568abc8b374576960c" address="unix:///run/containerd/s/42f121be07c03333e4880b98132ba1fb02c75ad2ad2be724c9ebe613189fe935" protocol=ttrpc 
version=3 Sep 4 16:24:24.996998 containerd[1592]: time="2025-09-04T16:24:24.996820364Z" level=info msg="connecting to shim 61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8" address="unix:///run/containerd/s/8df44c00cfad08813eeedaae2b08e7194e5a9887fca676a5e4cfecf59cd2952e" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:24:25.002076 systemd[1]: Started cri-containerd-0c2dcfb40611af2c62d40463ca860251704d3d4e31c050568abc8b374576960c.scope - libcontainer container 0c2dcfb40611af2c62d40463ca860251704d3d4e31c050568abc8b374576960c. Sep 4 16:24:25.049288 systemd[1]: Started cri-containerd-61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8.scope - libcontainer container 61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8. Sep 4 16:24:25.057821 containerd[1592]: time="2025-09-04T16:24:25.057675838Z" level=info msg="StartContainer for \"0c2dcfb40611af2c62d40463ca860251704d3d4e31c050568abc8b374576960c\" returns successfully" Sep 4 16:24:25.070531 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:24:25.106635 containerd[1592]: time="2025-09-04T16:24:25.106549862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-96j5s,Uid:85dfe534-9386-4bdb-9f8d-60c5f85bac15,Namespace:kube-system,Attempt:0,} returns sandbox id \"61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8\"" Sep 4 16:24:25.109206 kubelet[2755]: E0904 16:24:25.109177 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:24:25.111427 containerd[1592]: time="2025-09-04T16:24:25.111376819Z" level=info msg="CreateContainer within sandbox \"61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 16:24:25.127198 containerd[1592]: 
time="2025-09-04T16:24:25.127135726Z" level=info msg="Container 4ee72a17221cc3a3d9f62f4843519bbccb41e5c4dc75238200f0f8019a7d11a3: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:24:25.130787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3967077774.mount: Deactivated successfully. Sep 4 16:24:25.133523 containerd[1592]: time="2025-09-04T16:24:25.133495869Z" level=info msg="CreateContainer within sandbox \"61b81f49e6c34f0e6a80b09fd00a7f43f7bfc8af530051ad418bad9ab83378b8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ee72a17221cc3a3d9f62f4843519bbccb41e5c4dc75238200f0f8019a7d11a3\"" Sep 4 16:24:25.134534 containerd[1592]: time="2025-09-04T16:24:25.134506715Z" level=info msg="StartContainer for \"4ee72a17221cc3a3d9f62f4843519bbccb41e5c4dc75238200f0f8019a7d11a3\"" Sep 4 16:24:25.135442 containerd[1592]: time="2025-09-04T16:24:25.135396587Z" level=info msg="connecting to shim 4ee72a17221cc3a3d9f62f4843519bbccb41e5c4dc75238200f0f8019a7d11a3" address="unix:///run/containerd/s/8df44c00cfad08813eeedaae2b08e7194e5a9887fca676a5e4cfecf59cd2952e" protocol=ttrpc version=3 Sep 4 16:24:25.161114 systemd[1]: Started cri-containerd-4ee72a17221cc3a3d9f62f4843519bbccb41e5c4dc75238200f0f8019a7d11a3.scope - libcontainer container 4ee72a17221cc3a3d9f62f4843519bbccb41e5c4dc75238200f0f8019a7d11a3. Sep 4 16:24:25.194974 containerd[1592]: time="2025-09-04T16:24:25.194595382Z" level=info msg="StartContainer for \"4ee72a17221cc3a3d9f62f4843519bbccb41e5c4dc75238200f0f8019a7d11a3\" returns successfully" Sep 4 16:24:25.206656 systemd[1]: Started sshd@10-10.0.0.77:22-10.0.0.1:42440.service - OpenSSH per-connection server daemon (10.0.0.1:42440). 
Sep 4 16:24:25.283433 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 42440 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:24:25.285334 sshd-session[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:24:25.290623 systemd-logind[1574]: New session 10 of user core.
Sep 4 16:24:25.298066 systemd-networkd[1478]: cali9a66c63763e: Gained IPv6LL
Sep 4 16:24:25.299076 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 16:24:25.428499 sshd[4512]: Connection closed by 10.0.0.1 port 42440
Sep 4 16:24:25.428828 sshd-session[4505]: pam_unix(sshd:session): session closed for user core
Sep 4 16:24:25.433794 systemd[1]: sshd@10-10.0.0.77:22-10.0.0.1:42440.service: Deactivated successfully.
Sep 4 16:24:25.435691 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 16:24:25.436540 systemd-logind[1574]: Session 10 logged out. Waiting for processes to exit.
Sep 4 16:24:25.437537 systemd-logind[1574]: Removed session 10.
Sep 4 16:24:25.720798 containerd[1592]: time="2025-09-04T16:24:25.720358834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8495f75bd6-tmbsc,Uid:20cf1824-6aae-436f-ab1d-cbc0a9a52490,Namespace:calico-apiserver,Attempt:0,}" Sep 4 16:24:25.720798 containerd[1592]: time="2025-09-04T16:24:25.720415139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rcsbj,Uid:31770e90-6b62-4986-968c-c7d212e719b2,Namespace:calico-system,Attempt:0,}" Sep 4 16:24:25.720798 containerd[1592]: time="2025-09-04T16:24:25.720643592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b587f946-s494h,Uid:f11388c6-8919-49a2-ae5a-0eb9194c9913,Namespace:calico-system,Attempt:0,}" Sep 4 16:24:25.720798 containerd[1592]: time="2025-09-04T16:24:25.720739620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6tl25,Uid:2c4deb78-6061-407f-9164-fbcdb204310d,Namespace:calico-system,Attempt:0,}" Sep 4 16:24:25.811123 systemd-networkd[1478]: vxlan.calico: Gained IPv6LL Sep 4 16:24:25.861413 systemd-networkd[1478]: calie4a3f63fca8: Link UP Sep 4 16:24:25.862348 systemd-networkd[1478]: calie4a3f63fca8: Gained carrier Sep 4 16:24:25.879436 kubelet[2755]: E0904 16:24:25.879406 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.775 [INFO][4560] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6tl25-eth0 csi-node-driver- calico-system 2c4deb78-6061-407f-9164-fbcdb204310d 761 0 2025-09-04 16:23:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6tl25 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie4a3f63fca8 [] [] }} ContainerID="4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" Namespace="calico-system" Pod="csi-node-driver-6tl25" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tl25-" Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.775 [INFO][4560] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" Namespace="calico-system" Pod="csi-node-driver-6tl25" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tl25-eth0" Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.825 [INFO][4589] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" HandleID="k8s-pod-network.4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" Workload="localhost-k8s-csi--node--driver--6tl25-eth0" Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.825 [INFO][4589] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" HandleID="k8s-pod-network.4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" Workload="localhost-k8s-csi--node--driver--6tl25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f3f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6tl25", "timestamp":"2025-09-04 16:24:25.825544093 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 
16:24:25.826 [INFO][4589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.826 [INFO][4589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.826 [INFO][4589] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.833 [INFO][4589] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" host="localhost" Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.839 [INFO][4589] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.843 [INFO][4589] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.844 [INFO][4589] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.846 [INFO][4589] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.846 [INFO][4589] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" host="localhost" Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.847 [INFO][4589] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0 Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.850 [INFO][4589] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" host="localhost" Sep 4 
16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.855 [INFO][4589] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" host="localhost" Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.855 [INFO][4589] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" host="localhost" Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.855 [INFO][4589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 16:24:25.880116 containerd[1592]: 2025-09-04 16:24:25.855 [INFO][4589] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" HandleID="k8s-pod-network.4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" Workload="localhost-k8s-csi--node--driver--6tl25-eth0" Sep 4 16:24:25.880712 containerd[1592]: 2025-09-04 16:24:25.857 [INFO][4560] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" Namespace="calico-system" Pod="csi-node-driver-6tl25" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tl25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6tl25-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2c4deb78-6061-407f-9164-fbcdb204310d", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 23, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6tl25", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4a3f63fca8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:25.880712 containerd[1592]: 2025-09-04 16:24:25.857 [INFO][4560] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" Namespace="calico-system" Pod="csi-node-driver-6tl25" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tl25-eth0" Sep 4 16:24:25.880712 containerd[1592]: 2025-09-04 16:24:25.858 [INFO][4560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4a3f63fca8 ContainerID="4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" Namespace="calico-system" Pod="csi-node-driver-6tl25" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tl25-eth0" Sep 4 16:24:25.880712 containerd[1592]: 2025-09-04 16:24:25.862 [INFO][4560] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" Namespace="calico-system" Pod="csi-node-driver-6tl25" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tl25-eth0" Sep 4 16:24:25.880712 containerd[1592]: 2025-09-04 16:24:25.862 [INFO][4560] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" Namespace="calico-system" Pod="csi-node-driver-6tl25" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tl25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6tl25-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2c4deb78-6061-407f-9164-fbcdb204310d", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 23, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0", Pod:"csi-node-driver-6tl25", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4a3f63fca8", MAC:"aa:76:08:b8:35:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:25.880712 containerd[1592]: 2025-09-04 16:24:25.874 [INFO][4560] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" Namespace="calico-system" Pod="csi-node-driver-6tl25" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tl25-eth0" Sep 4 16:24:25.882773 kubelet[2755]: E0904 16:24:25.882308 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:24:25.895116 kubelet[2755]: I0904 16:24:25.895032 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6rsww" podStartSLOduration=47.894957875 podStartE2EDuration="47.894957875s" podCreationTimestamp="2025-09-04 16:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:24:25.894127584 +0000 UTC m=+55.266434876" watchObservedRunningTime="2025-09-04 16:24:25.894957875 +0000 UTC m=+55.267265177" Sep 4 16:24:25.911900 containerd[1592]: time="2025-09-04T16:24:25.911790224Z" level=info msg="connecting to shim 4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0" address="unix:///run/containerd/s/6fb45b5063348389a3c57f4ed95646752bdf98618eb76fb2c4dbedace1283565" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:24:25.935450 kubelet[2755]: I0904 16:24:25.935381 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-96j5s" podStartSLOduration=47.935358193 podStartE2EDuration="47.935358193s" podCreationTimestamp="2025-09-04 16:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 16:24:25.935015537 +0000 UTC m=+55.307322839" watchObservedRunningTime="2025-09-04 16:24:25.935358193 +0000 UTC m=+55.307665495" Sep 4 16:24:25.946327 systemd[1]: Started 
cri-containerd-4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0.scope - libcontainer container 4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0. Sep 4 16:24:26.005504 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:24:26.006793 systemd-networkd[1478]: calie03691ebd1b: Link UP Sep 4 16:24:26.007966 systemd-networkd[1478]: calie03691ebd1b: Gained carrier Sep 4 16:24:26.193347 containerd[1592]: time="2025-09-04T16:24:26.193295783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6tl25,Uid:2c4deb78-6061-407f-9164-fbcdb204310d,Namespace:calico-system,Attempt:0,} returns sandbox id \"4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0\"" Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.774 [INFO][4529] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8495f75bd6--tmbsc-eth0 calico-apiserver-8495f75bd6- calico-apiserver 20cf1824-6aae-436f-ab1d-cbc0a9a52490 898 0 2025-09-04 16:23:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8495f75bd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8495f75bd6-tmbsc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie03691ebd1b [] [] }} ContainerID="bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-tmbsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--tmbsc-" Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.775 [INFO][4529] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-tmbsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--tmbsc-eth0" Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.831 [INFO][4592] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" HandleID="k8s-pod-network.bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" Workload="localhost-k8s-calico--apiserver--8495f75bd6--tmbsc-eth0" Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.831 [INFO][4592] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" HandleID="k8s-pod-network.bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" Workload="localhost-k8s-calico--apiserver--8495f75bd6--tmbsc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135570), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8495f75bd6-tmbsc", "timestamp":"2025-09-04 16:24:25.831716157 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.832 [INFO][4592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.855 [INFO][4592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.855 [INFO][4592] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.937 [INFO][4592] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" host="localhost" Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.949 [INFO][4592] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.959 [INFO][4592] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.962 [INFO][4592] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.969 [INFO][4592] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.969 [INFO][4592] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" host="localhost" Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.973 [INFO][4592] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.982 [INFO][4592] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" host="localhost" Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.994 [INFO][4592] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" host="localhost" Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.994 [INFO][4592] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" host="localhost" Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.995 [INFO][4592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 16:24:26.294028 containerd[1592]: 2025-09-04 16:24:25.995 [INFO][4592] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" HandleID="k8s-pod-network.bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" Workload="localhost-k8s-calico--apiserver--8495f75bd6--tmbsc-eth0" Sep 4 16:24:26.294575 containerd[1592]: 2025-09-04 16:24:25.999 [INFO][4529] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-tmbsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--tmbsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8495f75bd6--tmbsc-eth0", GenerateName:"calico-apiserver-8495f75bd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"20cf1824-6aae-436f-ab1d-cbc0a9a52490", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 23, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8495f75bd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8495f75bd6-tmbsc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie03691ebd1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:26.294575 containerd[1592]: 2025-09-04 16:24:26.000 [INFO][4529] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-tmbsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--tmbsc-eth0" Sep 4 16:24:26.294575 containerd[1592]: 2025-09-04 16:24:26.000 [INFO][4529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie03691ebd1b ContainerID="bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-tmbsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--tmbsc-eth0" Sep 4 16:24:26.294575 containerd[1592]: 2025-09-04 16:24:26.008 [INFO][4529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-tmbsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--tmbsc-eth0" Sep 4 16:24:26.294575 containerd[1592]: 2025-09-04 16:24:26.009 [INFO][4529] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-tmbsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--tmbsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8495f75bd6--tmbsc-eth0", GenerateName:"calico-apiserver-8495f75bd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"20cf1824-6aae-436f-ab1d-cbc0a9a52490", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 23, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8495f75bd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d", Pod:"calico-apiserver-8495f75bd6-tmbsc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie03691ebd1b", MAC:"1e:51:fc:ed:ba:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:26.294575 containerd[1592]: 2025-09-04 16:24:26.289 [INFO][4529] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-tmbsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--tmbsc-eth0" Sep 4 16:24:26.329933 systemd-networkd[1478]: cali17afafce478: Link UP Sep 4 16:24:26.332591 systemd-networkd[1478]: cali17afafce478: Gained carrier Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:25.782 [INFO][4553] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7b587f946--s494h-eth0 calico-kube-controllers-7b587f946- calico-system f11388c6-8919-49a2-ae5a-0eb9194c9913 890 0 2025-09-04 16:23:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b587f946 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7b587f946-s494h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali17afafce478 [] [] }} ContainerID="f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" Namespace="calico-system" Pod="calico-kube-controllers-7b587f946-s494h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b587f946--s494h-" Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:25.782 [INFO][4553] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" Namespace="calico-system" Pod="calico-kube-controllers-7b587f946-s494h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b587f946--s494h-eth0" Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:25.834 [INFO][4600] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" 
HandleID="k8s-pod-network.f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" Workload="localhost-k8s-calico--kube--controllers--7b587f946--s494h-eth0" Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:25.834 [INFO][4600] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" HandleID="k8s-pod-network.f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" Workload="localhost-k8s-calico--kube--controllers--7b587f946--s494h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7b587f946-s494h", "timestamp":"2025-09-04 16:24:25.834746902 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:25.834 [INFO][4600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:25.995 [INFO][4600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:25.995 [INFO][4600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:26.288 [INFO][4600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" host="localhost" Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:26.297 [INFO][4600] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:26.303 [INFO][4600] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:26.306 [INFO][4600] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:26.309 [INFO][4600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:26.309 [INFO][4600] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" host="localhost" Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:26.311 [INFO][4600] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:26.317 [INFO][4600] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" host="localhost" Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:26.323 [INFO][4600] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" host="localhost" Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:26.323 [INFO][4600] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" host="localhost" Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:26.323 [INFO][4600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 16:24:26.352899 containerd[1592]: 2025-09-04 16:24:26.323 [INFO][4600] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" HandleID="k8s-pod-network.f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" Workload="localhost-k8s-calico--kube--controllers--7b587f946--s494h-eth0" Sep 4 16:24:26.353597 containerd[1592]: 2025-09-04 16:24:26.326 [INFO][4553] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" Namespace="calico-system" Pod="calico-kube-controllers-7b587f946-s494h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b587f946--s494h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b587f946--s494h-eth0", GenerateName:"calico-kube-controllers-7b587f946-", Namespace:"calico-system", SelfLink:"", UID:"f11388c6-8919-49a2-ae5a-0eb9194c9913", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b587f946", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7b587f946-s494h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali17afafce478", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:26.353597 containerd[1592]: 2025-09-04 16:24:26.326 [INFO][4553] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" Namespace="calico-system" Pod="calico-kube-controllers-7b587f946-s494h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b587f946--s494h-eth0" Sep 4 16:24:26.353597 containerd[1592]: 2025-09-04 16:24:26.326 [INFO][4553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17afafce478 ContainerID="f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" Namespace="calico-system" Pod="calico-kube-controllers-7b587f946-s494h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b587f946--s494h-eth0" Sep 4 16:24:26.353597 containerd[1592]: 2025-09-04 16:24:26.333 [INFO][4553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" Namespace="calico-system" Pod="calico-kube-controllers-7b587f946-s494h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b587f946--s494h-eth0" Sep 4 16:24:26.353597 containerd[1592]: 2025-09-04 
16:24:26.333 [INFO][4553] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" Namespace="calico-system" Pod="calico-kube-controllers-7b587f946-s494h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b587f946--s494h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b587f946--s494h-eth0", GenerateName:"calico-kube-controllers-7b587f946-", Namespace:"calico-system", SelfLink:"", UID:"f11388c6-8919-49a2-ae5a-0eb9194c9913", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b587f946", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a", Pod:"calico-kube-controllers-7b587f946-s494h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali17afafce478", MAC:"ce:9f:f4:aa:5f:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:26.353597 containerd[1592]: 2025-09-04 
16:24:26.346 [INFO][4553] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" Namespace="calico-system" Pod="calico-kube-controllers-7b587f946-s494h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b587f946--s494h-eth0" Sep 4 16:24:26.368095 containerd[1592]: time="2025-09-04T16:24:26.367662466Z" level=info msg="connecting to shim bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d" address="unix:///run/containerd/s/5ca1ed329830489175dd78b5de00ef409a70996c19c83cdebc472bcf565a3fc5" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:24:26.386687 systemd-networkd[1478]: cali0785a48f7cf: Gained IPv6LL Sep 4 16:24:26.401650 systemd[1]: Started cri-containerd-bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d.scope - libcontainer container bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d. Sep 4 16:24:26.403484 containerd[1592]: time="2025-09-04T16:24:26.402552048Z" level=info msg="connecting to shim f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a" address="unix:///run/containerd/s/637b4a19013973fea90d1c82afd2e4354c286e9551f4bb430c9d4455c65ce047" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:24:26.429999 systemd[1]: Started cri-containerd-f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a.scope - libcontainer container f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a. 
Sep 4 16:24:26.436212 systemd-networkd[1478]: cali43867779986: Link UP Sep 4 16:24:26.438069 systemd-networkd[1478]: cali43867779986: Gained carrier Sep 4 16:24:26.440473 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:24:26.448141 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:25.793 [INFO][4542] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--rcsbj-eth0 goldmane-54d579b49d- calico-system 31770e90-6b62-4986-968c-c7d212e719b2 897 0 2025-09-04 16:23:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-rcsbj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali43867779986 [] [] }} ContainerID="20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" Namespace="calico-system" Pod="goldmane-54d579b49d-rcsbj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rcsbj-" Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:25.793 [INFO][4542] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" Namespace="calico-system" Pod="goldmane-54d579b49d-rcsbj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rcsbj-eth0" Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:25.837 [INFO][4609] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" HandleID="k8s-pod-network.20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" 
Workload="localhost-k8s-goldmane--54d579b49d--rcsbj-eth0" Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:25.837 [INFO][4609] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" HandleID="k8s-pod-network.20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" Workload="localhost-k8s-goldmane--54d579b49d--rcsbj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000516ab0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-rcsbj", "timestamp":"2025-09-04 16:24:25.837407269 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:25.838 [INFO][4609] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:26.323 [INFO][4609] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:26.323 [INFO][4609] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:26.341 [INFO][4609] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" host="localhost" Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:26.397 [INFO][4609] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:26.404 [INFO][4609] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:26.406 [INFO][4609] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:26.411 [INFO][4609] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:26.412 [INFO][4609] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" host="localhost" Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:26.415 [INFO][4609] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:26.420 [INFO][4609] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" host="localhost" Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:26.428 [INFO][4609] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" host="localhost" Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:26.428 [INFO][4609] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" host="localhost" Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:26.428 [INFO][4609] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 16:24:26.459432 containerd[1592]: 2025-09-04 16:24:26.428 [INFO][4609] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" HandleID="k8s-pod-network.20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" Workload="localhost-k8s-goldmane--54d579b49d--rcsbj-eth0" Sep 4 16:24:26.459941 containerd[1592]: 2025-09-04 16:24:26.432 [INFO][4542] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" Namespace="calico-system" Pod="goldmane-54d579b49d-rcsbj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rcsbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--rcsbj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"31770e90-6b62-4986-968c-c7d212e719b2", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 23, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-rcsbj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali43867779986", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:26.459941 containerd[1592]: 2025-09-04 16:24:26.432 [INFO][4542] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" Namespace="calico-system" Pod="goldmane-54d579b49d-rcsbj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rcsbj-eth0" Sep 4 16:24:26.459941 containerd[1592]: 2025-09-04 16:24:26.432 [INFO][4542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43867779986 ContainerID="20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" Namespace="calico-system" Pod="goldmane-54d579b49d-rcsbj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rcsbj-eth0" Sep 4 16:24:26.459941 containerd[1592]: 2025-09-04 16:24:26.439 [INFO][4542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" Namespace="calico-system" Pod="goldmane-54d579b49d-rcsbj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rcsbj-eth0" Sep 4 16:24:26.459941 containerd[1592]: 2025-09-04 16:24:26.440 [INFO][4542] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" Namespace="calico-system" Pod="goldmane-54d579b49d-rcsbj" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rcsbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--rcsbj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"31770e90-6b62-4986-968c-c7d212e719b2", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 23, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c", Pod:"goldmane-54d579b49d-rcsbj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali43867779986", MAC:"16:cd:a6:b0:16:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:26.459941 containerd[1592]: 2025-09-04 16:24:26.451 [INFO][4542] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" Namespace="calico-system" Pod="goldmane-54d579b49d-rcsbj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--rcsbj-eth0" Sep 4 16:24:26.482887 containerd[1592]: time="2025-09-04T16:24:26.482768957Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8495f75bd6-tmbsc,Uid:20cf1824-6aae-436f-ab1d-cbc0a9a52490,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d\"" Sep 4 16:24:26.486202 containerd[1592]: time="2025-09-04T16:24:26.486132393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b587f946-s494h,Uid:f11388c6-8919-49a2-ae5a-0eb9194c9913,Namespace:calico-system,Attempt:0,} returns sandbox id \"f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a\"" Sep 4 16:24:26.487393 containerd[1592]: time="2025-09-04T16:24:26.486909226Z" level=info msg="connecting to shim 20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c" address="unix:///run/containerd/s/9a35ec07d30fc40b38826705d6ee01dd6bc5904ec11df4ff8d857c67393638b1" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:24:26.520227 systemd[1]: Started cri-containerd-20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c.scope - libcontainer container 20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c. 
Sep 4 16:24:26.532986 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:24:26.562343 containerd[1592]: time="2025-09-04T16:24:26.562214985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rcsbj,Uid:31770e90-6b62-4986-968c-c7d212e719b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c\"" Sep 4 16:24:26.720421 containerd[1592]: time="2025-09-04T16:24:26.720372705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8495f75bd6-58hqg,Uid:73e7c9c3-6db9-4664-8551-5fd14d249fee,Namespace:calico-apiserver,Attempt:0,}" Sep 4 16:24:26.834061 systemd-networkd[1478]: cali99f2fa24bca: Gained IPv6LL Sep 4 16:24:26.890840 kubelet[2755]: E0904 16:24:26.890799 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:24:26.891412 kubelet[2755]: E0904 16:24:26.891170 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:24:26.961598 systemd-networkd[1478]: calidd09225ee8d: Link UP Sep 4 16:24:26.962240 systemd-networkd[1478]: calidd09225ee8d: Gained carrier Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.893 [INFO][4847] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8495f75bd6--58hqg-eth0 calico-apiserver-8495f75bd6- calico-apiserver 73e7c9c3-6db9-4664-8551-5fd14d249fee 899 0 2025-09-04 16:23:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8495f75bd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8495f75bd6-58hqg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd09225ee8d [] [] }} ContainerID="8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-58hqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--58hqg-" Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.893 [INFO][4847] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-58hqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--58hqg-eth0" Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.922 [INFO][4862] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" HandleID="k8s-pod-network.8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" Workload="localhost-k8s-calico--apiserver--8495f75bd6--58hqg-eth0" Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.922 [INFO][4862] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" HandleID="k8s-pod-network.8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" Workload="localhost-k8s-calico--apiserver--8495f75bd6--58hqg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8495f75bd6-58hqg", "timestamp":"2025-09-04 16:24:26.922528024 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.922 [INFO][4862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.922 [INFO][4862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.922 [INFO][4862] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.929 [INFO][4862] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" host="localhost" Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.937 [INFO][4862] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.940 [INFO][4862] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.942 [INFO][4862] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.944 [INFO][4862] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.944 [INFO][4862] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" host="localhost" Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.945 [INFO][4862] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729 Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.949 [INFO][4862] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" host="localhost" Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.956 [INFO][4862] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" host="localhost" Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.956 [INFO][4862] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" host="localhost" Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.956 [INFO][4862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 16:24:26.973633 containerd[1592]: 2025-09-04 16:24:26.956 [INFO][4862] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" HandleID="k8s-pod-network.8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" Workload="localhost-k8s-calico--apiserver--8495f75bd6--58hqg-eth0" Sep 4 16:24:26.974464 containerd[1592]: 2025-09-04 16:24:26.959 [INFO][4847] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-58hqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--58hqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8495f75bd6--58hqg-eth0", GenerateName:"calico-apiserver-8495f75bd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"73e7c9c3-6db9-4664-8551-5fd14d249fee", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 23, 49, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8495f75bd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8495f75bd6-58hqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd09225ee8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:26.974464 containerd[1592]: 2025-09-04 16:24:26.959 [INFO][4847] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-58hqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--58hqg-eth0" Sep 4 16:24:26.974464 containerd[1592]: 2025-09-04 16:24:26.959 [INFO][4847] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd09225ee8d ContainerID="8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-58hqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--58hqg-eth0" Sep 4 16:24:26.974464 containerd[1592]: 2025-09-04 16:24:26.962 [INFO][4847] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-58hqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--58hqg-eth0" Sep 4 16:24:26.974464 containerd[1592]: 2025-09-04 16:24:26.962 [INFO][4847] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-58hqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--58hqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8495f75bd6--58hqg-eth0", GenerateName:"calico-apiserver-8495f75bd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"73e7c9c3-6db9-4664-8551-5fd14d249fee", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 16, 23, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8495f75bd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729", Pod:"calico-apiserver-8495f75bd6-58hqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd09225ee8d", MAC:"32:29:c2:ca:a0:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 16:24:26.974464 containerd[1592]: 2025-09-04 16:24:26.970 [INFO][4847] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" Namespace="calico-apiserver" Pod="calico-apiserver-8495f75bd6-58hqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8495f75bd6--58hqg-eth0" Sep 4 16:24:27.000095 containerd[1592]: time="2025-09-04T16:24:27.000038347Z" level=info msg="connecting to shim 8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729" address="unix:///run/containerd/s/ea57718aba5c2cf01f05984258936e84cdb4aac4c5901960abac77909e227d59" namespace=k8s.io protocol=ttrpc version=3 Sep 4 16:24:27.033009 systemd[1]: Started cri-containerd-8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729.scope - libcontainer container 8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729. 
Sep 4 16:24:27.048082 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 16:24:27.080709 containerd[1592]: time="2025-09-04T16:24:27.080670356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8495f75bd6-58hqg,Uid:73e7c9c3-6db9-4664-8551-5fd14d249fee,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729\"" Sep 4 16:24:27.346089 systemd-networkd[1478]: calie4a3f63fca8: Gained IPv6LL Sep 4 16:24:27.730024 systemd-networkd[1478]: calie03691ebd1b: Gained IPv6LL Sep 4 16:24:27.899808 kubelet[2755]: E0904 16:24:27.899774 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:24:27.899808 kubelet[2755]: E0904 16:24:27.899795 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:24:28.242047 systemd-networkd[1478]: cali17afafce478: Gained IPv6LL Sep 4 16:24:28.434059 systemd-networkd[1478]: cali43867779986: Gained IPv6LL Sep 4 16:24:28.946093 systemd-networkd[1478]: calidd09225ee8d: Gained IPv6LL Sep 4 16:24:30.443704 systemd[1]: Started sshd@11-10.0.0.77:22-10.0.0.1:59418.service - OpenSSH per-connection server daemon (10.0.0.1:59418). Sep 4 16:24:30.508248 sshd[4931]: Accepted publickey for core from 10.0.0.1 port 59418 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:24:30.509569 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:24:30.513585 systemd-logind[1574]: New session 11 of user core. Sep 4 16:24:30.523997 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 4 16:24:30.635011 sshd[4934]: Connection closed by 10.0.0.1 port 59418 Sep 4 16:24:30.635358 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Sep 4 16:24:30.645496 systemd[1]: sshd@11-10.0.0.77:22-10.0.0.1:59418.service: Deactivated successfully. Sep 4 16:24:30.647407 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 16:24:30.648196 systemd-logind[1574]: Session 11 logged out. Waiting for processes to exit. Sep 4 16:24:30.650847 systemd[1]: Started sshd@12-10.0.0.77:22-10.0.0.1:59428.service - OpenSSH per-connection server daemon (10.0.0.1:59428). Sep 4 16:24:30.651697 systemd-logind[1574]: Removed session 11. Sep 4 16:24:30.713807 sshd[4948]: Accepted publickey for core from 10.0.0.1 port 59428 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:24:30.715593 sshd-session[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:24:30.720696 systemd-logind[1574]: New session 12 of user core. Sep 4 16:24:30.729013 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 16:24:30.862715 sshd[4953]: Connection closed by 10.0.0.1 port 59428 Sep 4 16:24:30.863555 sshd-session[4948]: pam_unix(sshd:session): session closed for user core Sep 4 16:24:30.875550 systemd[1]: sshd@12-10.0.0.77:22-10.0.0.1:59428.service: Deactivated successfully. Sep 4 16:24:30.878724 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 16:24:30.880442 systemd-logind[1574]: Session 12 logged out. Waiting for processes to exit. Sep 4 16:24:30.885325 systemd[1]: Started sshd@13-10.0.0.77:22-10.0.0.1:59434.service - OpenSSH per-connection server daemon (10.0.0.1:59434). Sep 4 16:24:30.887643 systemd-logind[1574]: Removed session 12. 
Sep 4 16:24:30.934999 sshd[4964]: Accepted publickey for core from 10.0.0.1 port 59434 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:24:30.936407 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:24:30.940667 systemd-logind[1574]: New session 13 of user core. Sep 4 16:24:30.952018 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 16:24:31.056516 sshd[4969]: Connection closed by 10.0.0.1 port 59434 Sep 4 16:24:31.056852 sshd-session[4964]: pam_unix(sshd:session): session closed for user core Sep 4 16:24:31.061069 systemd[1]: sshd@13-10.0.0.77:22-10.0.0.1:59434.service: Deactivated successfully. Sep 4 16:24:31.063147 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 16:24:31.063859 systemd-logind[1574]: Session 13 logged out. Waiting for processes to exit. Sep 4 16:24:31.065305 systemd-logind[1574]: Removed session 13. Sep 4 16:24:33.016951 containerd[1592]: time="2025-09-04T16:24:33.016898196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:33.025431 containerd[1592]: time="2025-09-04T16:24:33.017580331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 4 16:24:33.025431 containerd[1592]: time="2025-09-04T16:24:33.018690943Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:33.025542 containerd[1592]: time="2025-09-04T16:24:33.021354981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 8.924123176s" Sep 4 16:24:33.025575 containerd[1592]: time="2025-09-04T16:24:33.025549332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 4 16:24:33.025944 containerd[1592]: time="2025-09-04T16:24:33.025922952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:33.026852 containerd[1592]: time="2025-09-04T16:24:33.026678970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 4 16:24:33.027664 containerd[1592]: time="2025-09-04T16:24:33.027638731Z" level=info msg="CreateContainer within sandbox \"72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 4 16:24:33.035424 containerd[1592]: time="2025-09-04T16:24:33.035390252Z" level=info msg="Container 0470a3e992113cc75412a802dae0649a4a042a3d9f4abd4d3b792bcb4d8d2725: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:24:33.043968 containerd[1592]: time="2025-09-04T16:24:33.043928160Z" level=info msg="CreateContainer within sandbox \"72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0470a3e992113cc75412a802dae0649a4a042a3d9f4abd4d3b792bcb4d8d2725\"" Sep 4 16:24:33.044528 containerd[1592]: time="2025-09-04T16:24:33.044504231Z" level=info msg="StartContainer for \"0470a3e992113cc75412a802dae0649a4a042a3d9f4abd4d3b792bcb4d8d2725\"" Sep 4 16:24:33.045616 containerd[1592]: time="2025-09-04T16:24:33.045593161Z" level=info msg="connecting to shim 0470a3e992113cc75412a802dae0649a4a042a3d9f4abd4d3b792bcb4d8d2725" 
address="unix:///run/containerd/s/2fd7e8231fe5b9eda275f6c454ec3e4c19fb39e7d3265b3be29f4b25dd112b25" protocol=ttrpc version=3 Sep 4 16:24:33.077002 systemd[1]: Started cri-containerd-0470a3e992113cc75412a802dae0649a4a042a3d9f4abd4d3b792bcb4d8d2725.scope - libcontainer container 0470a3e992113cc75412a802dae0649a4a042a3d9f4abd4d3b792bcb4d8d2725. Sep 4 16:24:33.121006 containerd[1592]: time="2025-09-04T16:24:33.120964535Z" level=info msg="StartContainer for \"0470a3e992113cc75412a802dae0649a4a042a3d9f4abd4d3b792bcb4d8d2725\" returns successfully" Sep 4 16:24:35.526080 containerd[1592]: time="2025-09-04T16:24:35.526020376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:35.526783 containerd[1592]: time="2025-09-04T16:24:35.526763087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 4 16:24:35.528023 containerd[1592]: time="2025-09-04T16:24:35.527969741Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:35.530159 containerd[1592]: time="2025-09-04T16:24:35.530125713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:35.530696 containerd[1592]: time="2025-09-04T16:24:35.530657737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.503946906s" Sep 4 16:24:35.530735 containerd[1592]: time="2025-09-04T16:24:35.530695921Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 4 16:24:35.531668 containerd[1592]: time="2025-09-04T16:24:35.531603479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 16:24:35.532944 containerd[1592]: time="2025-09-04T16:24:35.532920134Z" level=info msg="CreateContainer within sandbox \"4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 16:24:35.548538 containerd[1592]: time="2025-09-04T16:24:35.548485441Z" level=info msg="Container 2a98a2e67f87246f6dcfa7a3a5f54c4bfe3fd7cc2086421d07a583dd0666355a: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:24:35.557324 containerd[1592]: time="2025-09-04T16:24:35.557285969Z" level=info msg="CreateContainer within sandbox \"4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2a98a2e67f87246f6dcfa7a3a5f54c4bfe3fd7cc2086421d07a583dd0666355a\"" Sep 4 16:24:35.559737 containerd[1592]: time="2025-09-04T16:24:35.557840147Z" level=info msg="StartContainer for \"2a98a2e67f87246f6dcfa7a3a5f54c4bfe3fd7cc2086421d07a583dd0666355a\"" Sep 4 16:24:35.559737 containerd[1592]: time="2025-09-04T16:24:35.559154108Z" level=info msg="connecting to shim 2a98a2e67f87246f6dcfa7a3a5f54c4bfe3fd7cc2086421d07a583dd0666355a" address="unix:///run/containerd/s/6fb45b5063348389a3c57f4ed95646752bdf98618eb76fb2c4dbedace1283565" protocol=ttrpc version=3 Sep 4 16:24:35.587013 systemd[1]: Started cri-containerd-2a98a2e67f87246f6dcfa7a3a5f54c4bfe3fd7cc2086421d07a583dd0666355a.scope - libcontainer container 2a98a2e67f87246f6dcfa7a3a5f54c4bfe3fd7cc2086421d07a583dd0666355a. 
Sep 4 16:24:35.634664 containerd[1592]: time="2025-09-04T16:24:35.634613522Z" level=info msg="StartContainer for \"2a98a2e67f87246f6dcfa7a3a5f54c4bfe3fd7cc2086421d07a583dd0666355a\" returns successfully" Sep 4 16:24:36.075468 systemd[1]: Started sshd@14-10.0.0.77:22-10.0.0.1:59450.service - OpenSSH per-connection server daemon (10.0.0.1:59450). Sep 4 16:24:36.133782 sshd[5063]: Accepted publickey for core from 10.0.0.1 port 59450 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:24:36.135616 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:24:36.140383 systemd-logind[1574]: New session 14 of user core. Sep 4 16:24:36.147994 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 16:24:36.263517 sshd[5066]: Connection closed by 10.0.0.1 port 59450 Sep 4 16:24:36.263811 sshd-session[5063]: pam_unix(sshd:session): session closed for user core Sep 4 16:24:36.268024 systemd[1]: sshd@14-10.0.0.77:22-10.0.0.1:59450.service: Deactivated successfully. Sep 4 16:24:36.270170 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 16:24:36.270934 systemd-logind[1574]: Session 14 logged out. Waiting for processes to exit. Sep 4 16:24:36.272275 systemd-logind[1574]: Removed session 14. 
Sep 4 16:24:40.314650 containerd[1592]: time="2025-09-04T16:24:40.314586294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:40.315393 containerd[1592]: time="2025-09-04T16:24:40.315325773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 4 16:24:40.317802 containerd[1592]: time="2025-09-04T16:24:40.317435542Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:40.320217 containerd[1592]: time="2025-09-04T16:24:40.320156615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:40.320793 containerd[1592]: time="2025-09-04T16:24:40.320755675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.789119222s" Sep 4 16:24:40.320793 containerd[1592]: time="2025-09-04T16:24:40.320791363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 4 16:24:40.322377 containerd[1592]: time="2025-09-04T16:24:40.322341107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 4 16:24:40.324015 containerd[1592]: time="2025-09-04T16:24:40.323959995Z" level=info msg="CreateContainer within sandbox 
\"bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 16:24:40.332556 containerd[1592]: time="2025-09-04T16:24:40.332508453Z" level=info msg="Container 297b6a0db88b7d307abdfd960548af54239edd31565602970869f4caf0dcc5ff: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:24:40.341969 containerd[1592]: time="2025-09-04T16:24:40.341922312Z" level=info msg="CreateContainer within sandbox \"bf615991abb52c5e9baff6b68ecb4c8dbf2f2cf8c4e532e146039710b66d0a1d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"297b6a0db88b7d307abdfd960548af54239edd31565602970869f4caf0dcc5ff\"" Sep 4 16:24:40.342540 containerd[1592]: time="2025-09-04T16:24:40.342504018Z" level=info msg="StartContainer for \"297b6a0db88b7d307abdfd960548af54239edd31565602970869f4caf0dcc5ff\"" Sep 4 16:24:40.346382 containerd[1592]: time="2025-09-04T16:24:40.346324942Z" level=info msg="connecting to shim 297b6a0db88b7d307abdfd960548af54239edd31565602970869f4caf0dcc5ff" address="unix:///run/containerd/s/5ca1ed329830489175dd78b5de00ef409a70996c19c83cdebc472bcf565a3fc5" protocol=ttrpc version=3 Sep 4 16:24:40.373016 systemd[1]: Started cri-containerd-297b6a0db88b7d307abdfd960548af54239edd31565602970869f4caf0dcc5ff.scope - libcontainer container 297b6a0db88b7d307abdfd960548af54239edd31565602970869f4caf0dcc5ff. 
Sep 4 16:24:40.421419 containerd[1592]: time="2025-09-04T16:24:40.421277343Z" level=info msg="StartContainer for \"297b6a0db88b7d307abdfd960548af54239edd31565602970869f4caf0dcc5ff\" returns successfully" Sep 4 16:24:40.948542 kubelet[2755]: I0904 16:24:40.948477 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8495f75bd6-tmbsc" podStartSLOduration=38.110693345 podStartE2EDuration="51.948457177s" podCreationTimestamp="2025-09-04 16:23:49 +0000 UTC" firstStartedPulling="2025-09-04 16:24:26.48425657 +0000 UTC m=+55.856563872" lastFinishedPulling="2025-09-04 16:24:40.322020402 +0000 UTC m=+69.694327704" observedRunningTime="2025-09-04 16:24:40.948310234 +0000 UTC m=+70.320617556" watchObservedRunningTime="2025-09-04 16:24:40.948457177 +0000 UTC m=+70.320764479" Sep 4 16:24:41.278759 systemd[1]: Started sshd@15-10.0.0.77:22-10.0.0.1:58094.service - OpenSSH per-connection server daemon (10.0.0.1:58094). Sep 4 16:24:41.363820 sshd[5136]: Accepted publickey for core from 10.0.0.1 port 58094 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:24:41.365754 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:24:41.370889 systemd-logind[1574]: New session 15 of user core. Sep 4 16:24:41.384185 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 16:24:41.511257 sshd[5139]: Connection closed by 10.0.0.1 port 58094 Sep 4 16:24:41.511620 sshd-session[5136]: pam_unix(sshd:session): session closed for user core Sep 4 16:24:41.516911 systemd[1]: sshd@15-10.0.0.77:22-10.0.0.1:58094.service: Deactivated successfully. Sep 4 16:24:41.519301 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 16:24:41.520224 systemd-logind[1574]: Session 15 logged out. Waiting for processes to exit. Sep 4 16:24:41.521515 systemd-logind[1574]: Removed session 15. 
Sep 4 16:24:41.944728 kubelet[2755]: I0904 16:24:41.944688 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 16:24:42.719602 kubelet[2755]: E0904 16:24:42.719240 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 16:24:43.238078 containerd[1592]: time="2025-09-04T16:24:43.238008830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:43.238834 containerd[1592]: time="2025-09-04T16:24:43.238779286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 4 16:24:43.240120 containerd[1592]: time="2025-09-04T16:24:43.240089737Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:43.242073 containerd[1592]: time="2025-09-04T16:24:43.242044492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 16:24:43.242728 containerd[1592]: time="2025-09-04T16:24:43.242694948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.920326659s" Sep 4 16:24:43.242728 containerd[1592]: time="2025-09-04T16:24:43.242726719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image 
reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 4 16:24:43.243736 containerd[1592]: time="2025-09-04T16:24:43.243696417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 4 16:24:43.250646 containerd[1592]: time="2025-09-04T16:24:43.250602258Z" level=info msg="CreateContainer within sandbox \"f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 16:24:43.262496 containerd[1592]: time="2025-09-04T16:24:43.262446471Z" level=info msg="Container 1526dd67b97dfae641039ec30a01220154afab122687803b42fb2b2679e93bf0: CDI devices from CRI Config.CDIDevices: []" Sep 4 16:24:43.272716 containerd[1592]: time="2025-09-04T16:24:43.272670690Z" level=info msg="CreateContainer within sandbox \"f3ac028c3756f78c9153ede995f5332c7e2a7d7aa226089bf3b448e25c6c3c4a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1526dd67b97dfae641039ec30a01220154afab122687803b42fb2b2679e93bf0\"" Sep 4 16:24:43.273245 containerd[1592]: time="2025-09-04T16:24:43.273190506Z" level=info msg="StartContainer for \"1526dd67b97dfae641039ec30a01220154afab122687803b42fb2b2679e93bf0\"" Sep 4 16:24:43.274345 containerd[1592]: time="2025-09-04T16:24:43.274311203Z" level=info msg="connecting to shim 1526dd67b97dfae641039ec30a01220154afab122687803b42fb2b2679e93bf0" address="unix:///run/containerd/s/637b4a19013973fea90d1c82afd2e4354c286e9551f4bb430c9d4455c65ce047" protocol=ttrpc version=3 Sep 4 16:24:43.330101 systemd[1]: Started cri-containerd-1526dd67b97dfae641039ec30a01220154afab122687803b42fb2b2679e93bf0.scope - libcontainer container 1526dd67b97dfae641039ec30a01220154afab122687803b42fb2b2679e93bf0. 
Sep 4 16:24:43.386830 containerd[1592]: time="2025-09-04T16:24:43.386716375Z" level=info msg="StartContainer for \"1526dd67b97dfae641039ec30a01220154afab122687803b42fb2b2679e93bf0\" returns successfully" Sep 4 16:24:43.978587 kubelet[2755]: I0904 16:24:43.978518 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b587f946-s494h" podStartSLOduration=35.223060138 podStartE2EDuration="51.978500359s" podCreationTimestamp="2025-09-04 16:23:52 +0000 UTC" firstStartedPulling="2025-09-04 16:24:26.488099105 +0000 UTC m=+55.860406407" lastFinishedPulling="2025-09-04 16:24:43.243539325 +0000 UTC m=+72.615846628" observedRunningTime="2025-09-04 16:24:43.977053066 +0000 UTC m=+73.349360368" watchObservedRunningTime="2025-09-04 16:24:43.978500359 +0000 UTC m=+73.350807661" Sep 4 16:24:44.006510 containerd[1592]: time="2025-09-04T16:24:44.006460222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1526dd67b97dfae641039ec30a01220154afab122687803b42fb2b2679e93bf0\" id:\"912b109c66f2b817fed6c85be201ab80787c62190ce62936e046f250d74ba1c9\" pid:5214 exited_at:{seconds:1757003084 nanos:6063542}" Sep 4 16:24:46.536016 systemd[1]: Started sshd@16-10.0.0.77:22-10.0.0.1:58098.service - OpenSSH per-connection server daemon (10.0.0.1:58098). Sep 4 16:24:46.614328 sshd[5231]: Accepted publickey for core from 10.0.0.1 port 58098 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY Sep 4 16:24:46.616253 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 16:24:46.621563 systemd-logind[1574]: New session 16 of user core. Sep 4 16:24:46.631985 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 4 16:24:46.720889 kubelet[2755]: E0904 16:24:46.720022 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:24:46.749276 sshd[5234]: Connection closed by 10.0.0.1 port 58098
Sep 4 16:24:46.749622 sshd-session[5231]: pam_unix(sshd:session): session closed for user core
Sep 4 16:24:46.754574 systemd[1]: sshd@16-10.0.0.77:22-10.0.0.1:58098.service: Deactivated successfully.
Sep 4 16:24:46.756876 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 16:24:46.757607 systemd-logind[1574]: Session 16 logged out. Waiting for processes to exit.
Sep 4 16:24:46.758653 systemd-logind[1574]: Removed session 16.
Sep 4 16:24:48.369095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2171576194.mount: Deactivated successfully.
Sep 4 16:24:48.920662 containerd[1592]: time="2025-09-04T16:24:48.920603791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:24:48.921319 containerd[1592]: time="2025-09-04T16:24:48.921255306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 4 16:24:48.922430 containerd[1592]: time="2025-09-04T16:24:48.922392620Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:24:48.924606 containerd[1592]: time="2025-09-04T16:24:48.924554270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:24:48.925141 containerd[1592]: time="2025-09-04T16:24:48.925096085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.681366405s"
Sep 4 16:24:48.925141 containerd[1592]: time="2025-09-04T16:24:48.925135260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 4 16:24:48.927681 containerd[1592]: time="2025-09-04T16:24:48.927639306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 4 16:24:48.932505 containerd[1592]: time="2025-09-04T16:24:48.932477531Z" level=info msg="CreateContainer within sandbox \"20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 4 16:24:48.941369 containerd[1592]: time="2025-09-04T16:24:48.941337691Z" level=info msg="Container 4cd3c1da5d4469fbe6e7874de2a86024a0a7f87fc48aa61d70c394c8680ea385: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:24:48.949692 containerd[1592]: time="2025-09-04T16:24:48.949662680Z" level=info msg="CreateContainer within sandbox \"20bad391079291056a270f10b9a87e87008c730774ee54450fe4a28a828a8d7c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"4cd3c1da5d4469fbe6e7874de2a86024a0a7f87fc48aa61d70c394c8680ea385\""
Sep 4 16:24:48.950939 containerd[1592]: time="2025-09-04T16:24:48.950901106Z" level=info msg="StartContainer for \"4cd3c1da5d4469fbe6e7874de2a86024a0a7f87fc48aa61d70c394c8680ea385\""
Sep 4 16:24:48.951898 containerd[1592]: time="2025-09-04T16:24:48.951842616Z" level=info msg="connecting to shim 4cd3c1da5d4469fbe6e7874de2a86024a0a7f87fc48aa61d70c394c8680ea385" address="unix:///run/containerd/s/9a35ec07d30fc40b38826705d6ee01dd6bc5904ec11df4ff8d857c67393638b1" protocol=ttrpc version=3
Sep 4 16:24:48.984114 systemd[1]: Started cri-containerd-4cd3c1da5d4469fbe6e7874de2a86024a0a7f87fc48aa61d70c394c8680ea385.scope - libcontainer container 4cd3c1da5d4469fbe6e7874de2a86024a0a7f87fc48aa61d70c394c8680ea385.
Sep 4 16:24:49.031301 containerd[1592]: time="2025-09-04T16:24:49.031235389Z" level=info msg="StartContainer for \"4cd3c1da5d4469fbe6e7874de2a86024a0a7f87fc48aa61d70c394c8680ea385\" returns successfully"
Sep 4 16:24:49.093237 kubelet[2755]: I0904 16:24:49.093165 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-rcsbj" podStartSLOduration=35.730192552 podStartE2EDuration="58.093135919s" podCreationTimestamp="2025-09-04 16:23:51 +0000 UTC" firstStartedPulling="2025-09-04 16:24:26.564411334 +0000 UTC m=+55.936718636" lastFinishedPulling="2025-09-04 16:24:48.927354701 +0000 UTC m=+78.299662003" observedRunningTime="2025-09-04 16:24:49.089333567 +0000 UTC m=+78.461640869" watchObservedRunningTime="2025-09-04 16:24:49.093135919 +0000 UTC m=+78.465443221"
Sep 4 16:24:49.154158 containerd[1592]: time="2025-09-04T16:24:49.154111342Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cd3c1da5d4469fbe6e7874de2a86024a0a7f87fc48aa61d70c394c8680ea385\" id:\"3edb3ee260b1d008c4d61c90195ecffa9a322a2ad316f2490d50df86455377cc\" pid:5307 exit_status:1 exited_at:{seconds:1757003089 nanos:153590607}"
Sep 4 16:24:49.435163 containerd[1592]: time="2025-09-04T16:24:49.435100039Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:24:49.435835 containerd[1592]: time="2025-09-04T16:24:49.435779586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 4 16:24:49.437560 containerd[1592]: time="2025-09-04T16:24:49.437502587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 509.828315ms"
Sep 4 16:24:49.437560 containerd[1592]: time="2025-09-04T16:24:49.437539026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 4 16:24:49.438856 containerd[1592]: time="2025-09-04T16:24:49.438572100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 4 16:24:49.440356 containerd[1592]: time="2025-09-04T16:24:49.440250405Z" level=info msg="CreateContainer within sandbox \"8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 16:24:49.450930 containerd[1592]: time="2025-09-04T16:24:49.448344645Z" level=info msg="Container fde05abd439d58ce78b8667c7bdc247d25f8d17167ce3e8de054f41ce9906147: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:24:49.456176 containerd[1592]: time="2025-09-04T16:24:49.456130526Z" level=info msg="CreateContainer within sandbox \"8a7fa2b0b29e9b893b3368a74485c57aaac8a1fb6785976b6d453bb60b0b8729\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fde05abd439d58ce78b8667c7bdc247d25f8d17167ce3e8de054f41ce9906147\""
Sep 4 16:24:49.456643 containerd[1592]: time="2025-09-04T16:24:49.456615252Z" level=info msg="StartContainer for \"fde05abd439d58ce78b8667c7bdc247d25f8d17167ce3e8de054f41ce9906147\""
Sep 4 16:24:49.457608 containerd[1592]: time="2025-09-04T16:24:49.457568272Z" level=info msg="connecting to shim fde05abd439d58ce78b8667c7bdc247d25f8d17167ce3e8de054f41ce9906147" address="unix:///run/containerd/s/ea57718aba5c2cf01f05984258936e84cdb4aac4c5901960abac77909e227d59" protocol=ttrpc version=3
Sep 4 16:24:49.480001 systemd[1]: Started cri-containerd-fde05abd439d58ce78b8667c7bdc247d25f8d17167ce3e8de054f41ce9906147.scope - libcontainer container fde05abd439d58ce78b8667c7bdc247d25f8d17167ce3e8de054f41ce9906147.
Sep 4 16:24:49.529988 containerd[1592]: time="2025-09-04T16:24:49.529886279Z" level=info msg="StartContainer for \"fde05abd439d58ce78b8667c7bdc247d25f8d17167ce3e8de054f41ce9906147\" returns successfully"
Sep 4 16:24:50.097774 kubelet[2755]: I0904 16:24:50.097706 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8495f75bd6-58hqg" podStartSLOduration=38.740948868 podStartE2EDuration="1m1.097683722s" podCreationTimestamp="2025-09-04 16:23:49 +0000 UTC" firstStartedPulling="2025-09-04 16:24:27.081642714 +0000 UTC m=+56.453950016" lastFinishedPulling="2025-09-04 16:24:49.438377568 +0000 UTC m=+78.810684870" observedRunningTime="2025-09-04 16:24:50.09721141 +0000 UTC m=+79.469518712" watchObservedRunningTime="2025-09-04 16:24:50.097683722 +0000 UTC m=+79.469991024"
Sep 4 16:24:50.176121 containerd[1592]: time="2025-09-04T16:24:50.176039771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cd3c1da5d4469fbe6e7874de2a86024a0a7f87fc48aa61d70c394c8680ea385\" id:\"d8725dca8a436b99ee0c5ac04e32538998d28b248e9c8167029f247e234c9145\" pid:5369 exit_status:1 exited_at:{seconds:1757003090 nanos:175678421}"
Sep 4 16:24:51.082533 kubelet[2755]: I0904 16:24:51.082494 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 16:24:51.762578 systemd[1]: Started sshd@17-10.0.0.77:22-10.0.0.1:54362.service - OpenSSH per-connection server daemon (10.0.0.1:54362).
Sep 4 16:24:51.853564 sshd[5384]: Accepted publickey for core from 10.0.0.1 port 54362 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:24:51.854016 sshd-session[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:24:51.859132 systemd-logind[1574]: New session 17 of user core.
Sep 4 16:24:51.864124 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 16:24:52.017994 sshd[5393]: Connection closed by 10.0.0.1 port 54362
Sep 4 16:24:52.018474 sshd-session[5384]: pam_unix(sshd:session): session closed for user core
Sep 4 16:24:52.025710 systemd[1]: sshd@17-10.0.0.77:22-10.0.0.1:54362.service: Deactivated successfully.
Sep 4 16:24:52.028125 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 16:24:52.030860 systemd-logind[1574]: Session 17 logged out. Waiting for processes to exit.
Sep 4 16:24:52.032078 systemd-logind[1574]: Removed session 17.
Sep 4 16:24:52.223450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3147755910.mount: Deactivated successfully.
Sep 4 16:24:52.719467 kubelet[2755]: E0904 16:24:52.719376 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:24:52.783037 containerd[1592]: time="2025-09-04T16:24:52.782969445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:24:52.784926 containerd[1592]: time="2025-09-04T16:24:52.784901129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 4 16:24:52.786087 containerd[1592]: time="2025-09-04T16:24:52.785999886Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:24:52.789185 containerd[1592]: time="2025-09-04T16:24:52.789137520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:24:52.790152 containerd[1592]: time="2025-09-04T16:24:52.789911616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.351307445s"
Sep 4 16:24:52.790152 containerd[1592]: time="2025-09-04T16:24:52.789969146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 4 16:24:52.791495 containerd[1592]: time="2025-09-04T16:24:52.791478686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 4 16:24:52.794915 containerd[1592]: time="2025-09-04T16:24:52.794890523Z" level=info msg="CreateContainer within sandbox \"72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 4 16:24:52.804822 containerd[1592]: time="2025-09-04T16:24:52.804072567Z" level=info msg="Container 84d4c52691db667d9cd031ae7eb2c6e75f75ed8edb6fa12e6d5f4a227c4be479: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:24:52.813824 containerd[1592]: time="2025-09-04T16:24:52.813790163Z" level=info msg="CreateContainer within sandbox \"72515b9d6ac2b9dc336cb966df179b04bd9c227655192f7c0b0ac66efbc3ae6a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"84d4c52691db667d9cd031ae7eb2c6e75f75ed8edb6fa12e6d5f4a227c4be479\""
Sep 4 16:24:52.814881 containerd[1592]: time="2025-09-04T16:24:52.814818644Z" level=info msg="StartContainer for \"84d4c52691db667d9cd031ae7eb2c6e75f75ed8edb6fa12e6d5f4a227c4be479\""
Sep 4 16:24:52.815828 containerd[1592]: time="2025-09-04T16:24:52.815798564Z" level=info msg="connecting to shim 84d4c52691db667d9cd031ae7eb2c6e75f75ed8edb6fa12e6d5f4a227c4be479" address="unix:///run/containerd/s/2fd7e8231fe5b9eda275f6c454ec3e4c19fb39e7d3265b3be29f4b25dd112b25" protocol=ttrpc version=3
Sep 4 16:24:52.847054 systemd[1]: Started cri-containerd-84d4c52691db667d9cd031ae7eb2c6e75f75ed8edb6fa12e6d5f4a227c4be479.scope - libcontainer container 84d4c52691db667d9cd031ae7eb2c6e75f75ed8edb6fa12e6d5f4a227c4be479.
Sep 4 16:24:52.915062 containerd[1592]: time="2025-09-04T16:24:52.914991035Z" level=info msg="StartContainer for \"84d4c52691db667d9cd031ae7eb2c6e75f75ed8edb6fa12e6d5f4a227c4be479\" returns successfully"
Sep 4 16:24:54.953027 containerd[1592]: time="2025-09-04T16:24:54.952966396Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5933388e8a5c5ecda2c1039e7e8d5c7c685e75217e0328e016d41826fd359e1\" id:\"d20d17de0d3c8bdba8677c18048a03001d07bd660436fbafd7ec473503289615\" pid:5460 exited_at:{seconds:1757003094 nanos:952628583}"
Sep 4 16:24:54.986566 kubelet[2755]: I0904 16:24:54.986256 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7cd8b6f4f4-fdk6g" podStartSLOduration=4.291540417 podStartE2EDuration="32.986232767s" podCreationTimestamp="2025-09-04 16:24:22 +0000 UTC" firstStartedPulling="2025-09-04 16:24:24.096631481 +0000 UTC m=+53.468938783" lastFinishedPulling="2025-09-04 16:24:52.791323831 +0000 UTC m=+82.163631133" observedRunningTime="2025-09-04 16:24:53.115404675 +0000 UTC m=+82.487711998" watchObservedRunningTime="2025-09-04 16:24:54.986232767 +0000 UTC m=+84.358540069"
Sep 4 16:24:56.692189 containerd[1592]: time="2025-09-04T16:24:56.692133929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:24:56.692784 containerd[1592]: time="2025-09-04T16:24:56.692743219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 4 16:24:56.693914 containerd[1592]: time="2025-09-04T16:24:56.693893139Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:24:56.695823 containerd[1592]: time="2025-09-04T16:24:56.695794910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 16:24:56.696423 containerd[1592]: time="2025-09-04T16:24:56.696363684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.904782743s"
Sep 4 16:24:56.696423 containerd[1592]: time="2025-09-04T16:24:56.696405163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 4 16:24:56.698672 containerd[1592]: time="2025-09-04T16:24:56.698636973Z" level=info msg="CreateContainer within sandbox \"4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 4 16:24:56.706440 containerd[1592]: time="2025-09-04T16:24:56.706401088Z" level=info msg="Container e6bfbb305d4040550dec8fc6533e6f222e4781ff17b9c5d00d88dc5d10cc5e7b: CDI devices from CRI Config.CDIDevices: []"
Sep 4 16:24:56.727517 containerd[1592]: time="2025-09-04T16:24:56.727386542Z" level=info msg="CreateContainer within sandbox \"4362b07a6429c6a7fde6fd6a1d98c38b6863372692b7f76c54e2386ec0ff67f0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e6bfbb305d4040550dec8fc6533e6f222e4781ff17b9c5d00d88dc5d10cc5e7b\""
Sep 4 16:24:56.728401 containerd[1592]: time="2025-09-04T16:24:56.728361989Z" level=info msg="StartContainer for \"e6bfbb305d4040550dec8fc6533e6f222e4781ff17b9c5d00d88dc5d10cc5e7b\""
Sep 4 16:24:56.730122 containerd[1592]: time="2025-09-04T16:24:56.730091312Z" level=info msg="connecting to shim e6bfbb305d4040550dec8fc6533e6f222e4781ff17b9c5d00d88dc5d10cc5e7b" address="unix:///run/containerd/s/6fb45b5063348389a3c57f4ed95646752bdf98618eb76fb2c4dbedace1283565" protocol=ttrpc version=3
Sep 4 16:24:56.760000 systemd[1]: Started cri-containerd-e6bfbb305d4040550dec8fc6533e6f222e4781ff17b9c5d00d88dc5d10cc5e7b.scope - libcontainer container e6bfbb305d4040550dec8fc6533e6f222e4781ff17b9c5d00d88dc5d10cc5e7b.
Sep 4 16:24:56.804068 containerd[1592]: time="2025-09-04T16:24:56.804017093Z" level=info msg="StartContainer for \"e6bfbb305d4040550dec8fc6533e6f222e4781ff17b9c5d00d88dc5d10cc5e7b\" returns successfully"
Sep 4 16:24:57.034691 systemd[1]: Started sshd@18-10.0.0.77:22-10.0.0.1:54364.service - OpenSSH per-connection server daemon (10.0.0.1:54364).
Sep 4 16:24:57.099879 sshd[5510]: Accepted publickey for core from 10.0.0.1 port 54364 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:24:57.101452 sshd-session[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:24:57.105801 systemd-logind[1574]: New session 18 of user core.
Sep 4 16:24:57.114008 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 16:24:57.143805 kubelet[2755]: I0904 16:24:57.143590 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6tl25" podStartSLOduration=35.640898893 podStartE2EDuration="1m6.143573802s" podCreationTimestamp="2025-09-04 16:23:51 +0000 UTC" firstStartedPulling="2025-09-04 16:24:26.194486534 +0000 UTC m=+55.566793826" lastFinishedPulling="2025-09-04 16:24:56.697161433 +0000 UTC m=+86.069468735" observedRunningTime="2025-09-04 16:24:57.142822222 +0000 UTC m=+86.515129524" watchObservedRunningTime="2025-09-04 16:24:57.143573802 +0000 UTC m=+86.515881104"
Sep 4 16:24:57.260753 sshd[5513]: Connection closed by 10.0.0.1 port 54364
Sep 4 16:24:57.261111 sshd-session[5510]: pam_unix(sshd:session): session closed for user core
Sep 4 16:24:57.274724 systemd[1]: sshd@18-10.0.0.77:22-10.0.0.1:54364.service: Deactivated successfully.
Sep 4 16:24:57.276797 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 16:24:57.277627 systemd-logind[1574]: Session 18 logged out. Waiting for processes to exit.
Sep 4 16:24:57.280504 systemd[1]: Started sshd@19-10.0.0.77:22-10.0.0.1:54380.service - OpenSSH per-connection server daemon (10.0.0.1:54380).
Sep 4 16:24:57.281677 systemd-logind[1574]: Removed session 18.
Sep 4 16:24:57.342356 sshd[5526]: Accepted publickey for core from 10.0.0.1 port 54380 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:24:57.344176 sshd-session[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:24:57.348748 systemd-logind[1574]: New session 19 of user core.
Sep 4 16:24:57.355014 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 16:24:57.574082 sshd[5529]: Connection closed by 10.0.0.1 port 54380
Sep 4 16:24:57.574335 sshd-session[5526]: pam_unix(sshd:session): session closed for user core
Sep 4 16:24:57.592644 systemd[1]: sshd@19-10.0.0.77:22-10.0.0.1:54380.service: Deactivated successfully.
Sep 4 16:24:57.594774 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 16:24:57.595663 systemd-logind[1574]: Session 19 logged out. Waiting for processes to exit.
Sep 4 16:24:57.598237 systemd[1]: Started sshd@20-10.0.0.77:22-10.0.0.1:54388.service - OpenSSH per-connection server daemon (10.0.0.1:54388).
Sep 4 16:24:57.599005 systemd-logind[1574]: Removed session 19.
Sep 4 16:24:57.655299 sshd[5540]: Accepted publickey for core from 10.0.0.1 port 54388 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:24:57.656610 sshd-session[5540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:24:57.660971 systemd-logind[1574]: New session 20 of user core.
Sep 4 16:24:57.674982 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 16:24:57.814619 kubelet[2755]: I0904 16:24:57.814569 2755 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 4 16:24:57.814619 kubelet[2755]: I0904 16:24:57.814613 2755 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 4 16:24:58.352004 sshd[5543]: Connection closed by 10.0.0.1 port 54388
Sep 4 16:24:58.354265 sshd-session[5540]: pam_unix(sshd:session): session closed for user core
Sep 4 16:24:58.371019 systemd[1]: sshd@20-10.0.0.77:22-10.0.0.1:54388.service: Deactivated successfully.
Sep 4 16:24:58.373889 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 16:24:58.376077 systemd-logind[1574]: Session 20 logged out. Waiting for processes to exit.
Sep 4 16:24:58.380087 systemd[1]: Started sshd@21-10.0.0.77:22-10.0.0.1:54396.service - OpenSSH per-connection server daemon (10.0.0.1:54396).
Sep 4 16:24:58.381194 systemd-logind[1574]: Removed session 20.
Sep 4 16:24:58.438130 sshd[5562]: Accepted publickey for core from 10.0.0.1 port 54396 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:24:58.439448 sshd-session[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:24:58.443596 systemd-logind[1574]: New session 21 of user core.
Sep 4 16:24:58.455987 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 16:24:58.705468 sshd[5565]: Connection closed by 10.0.0.1 port 54396
Sep 4 16:24:58.707567 sshd-session[5562]: pam_unix(sshd:session): session closed for user core
Sep 4 16:24:58.718297 systemd[1]: sshd@21-10.0.0.77:22-10.0.0.1:54396.service: Deactivated successfully.
Sep 4 16:24:58.722371 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 16:24:58.723919 systemd-logind[1574]: Session 21 logged out. Waiting for processes to exit.
Sep 4 16:24:58.727511 systemd[1]: Started sshd@22-10.0.0.77:22-10.0.0.1:54402.service - OpenSSH per-connection server daemon (10.0.0.1:54402).
Sep 4 16:24:58.728137 systemd-logind[1574]: Removed session 21.
Sep 4 16:24:58.790507 sshd[5577]: Accepted publickey for core from 10.0.0.1 port 54402 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:24:58.792319 sshd-session[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:24:58.797019 systemd-logind[1574]: New session 22 of user core.
Sep 4 16:24:58.807006 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 16:24:58.916378 sshd[5580]: Connection closed by 10.0.0.1 port 54402
Sep 4 16:24:58.916701 sshd-session[5577]: pam_unix(sshd:session): session closed for user core
Sep 4 16:24:58.921992 systemd[1]: sshd@22-10.0.0.77:22-10.0.0.1:54402.service: Deactivated successfully.
Sep 4 16:24:58.924097 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 16:24:58.924979 systemd-logind[1574]: Session 22 logged out. Waiting for processes to exit.
Sep 4 16:24:58.926430 systemd-logind[1574]: Removed session 22.
Sep 4 16:25:00.719574 kubelet[2755]: E0904 16:25:00.719512 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 16:25:03.936903 systemd[1]: Started sshd@23-10.0.0.77:22-10.0.0.1:35604.service - OpenSSH per-connection server daemon (10.0.0.1:35604).
Sep 4 16:25:03.992596 sshd[5597]: Accepted publickey for core from 10.0.0.1 port 35604 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:25:03.994117 sshd-session[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:25:03.998653 systemd-logind[1574]: New session 23 of user core.
Sep 4 16:25:04.009022 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 16:25:04.127538 sshd[5600]: Connection closed by 10.0.0.1 port 35604
Sep 4 16:25:04.127913 sshd-session[5597]: pam_unix(sshd:session): session closed for user core
Sep 4 16:25:04.133057 systemd[1]: sshd@23-10.0.0.77:22-10.0.0.1:35604.service: Deactivated successfully.
Sep 4 16:25:04.135177 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 16:25:04.136093 systemd-logind[1574]: Session 23 logged out. Waiting for processes to exit.
Sep 4 16:25:04.137210 systemd-logind[1574]: Removed session 23.
Sep 4 16:25:04.585431 kubelet[2755]: I0904 16:25:04.585367 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 16:25:05.949646 kubelet[2755]: I0904 16:25:05.949543 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 16:25:09.147044 systemd[1]: Started sshd@24-10.0.0.77:22-10.0.0.1:35614.service - OpenSSH per-connection server daemon (10.0.0.1:35614).
Sep 4 16:25:09.232331 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 35614 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:25:09.234246 sshd-session[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:25:09.238838 systemd-logind[1574]: New session 24 of user core.
Sep 4 16:25:09.247015 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 16:25:09.431321 sshd[5632]: Connection closed by 10.0.0.1 port 35614
Sep 4 16:25:09.431624 sshd-session[5629]: pam_unix(sshd:session): session closed for user core
Sep 4 16:25:09.436147 systemd[1]: sshd@24-10.0.0.77:22-10.0.0.1:35614.service: Deactivated successfully.
Sep 4 16:25:09.438242 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 16:25:09.440035 systemd-logind[1574]: Session 24 logged out. Waiting for processes to exit.
Sep 4 16:25:09.441480 systemd-logind[1574]: Removed session 24.
Sep 4 16:25:14.034786 containerd[1592]: time="2025-09-04T16:25:14.034715843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1526dd67b97dfae641039ec30a01220154afab122687803b42fb2b2679e93bf0\" id:\"d6f09caebcba5f4f1a09d4a6cb72c9028482270102122f2ab62abd33cb24c2b6\" pid:5658 exited_at:{seconds:1757003114 nanos:9084796}"
Sep 4 16:25:14.450758 systemd[1]: Started sshd@25-10.0.0.77:22-10.0.0.1:34332.service - OpenSSH per-connection server daemon (10.0.0.1:34332).
Sep 4 16:25:14.513081 sshd[5669]: Accepted publickey for core from 10.0.0.1 port 34332 ssh2: RSA SHA256:Gi3V+rcn3j++vbR/HcfmcMqdfV/BOCBT7R1vPF/QTTY
Sep 4 16:25:14.514405 sshd-session[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 16:25:14.518315 systemd-logind[1574]: New session 25 of user core.
Sep 4 16:25:14.528989 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 16:25:15.126206 sshd[5672]: Connection closed by 10.0.0.1 port 34332
Sep 4 16:25:15.126597 sshd-session[5669]: pam_unix(sshd:session): session closed for user core
Sep 4 16:25:15.131586 systemd[1]: sshd@25-10.0.0.77:22-10.0.0.1:34332.service: Deactivated successfully.
Sep 4 16:25:15.133811 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 16:25:15.134665 systemd-logind[1574]: Session 25 logged out. Waiting for processes to exit.
Sep 4 16:25:15.136366 systemd-logind[1574]: Removed session 25.
Sep 4 16:25:15.792031 containerd[1592]: time="2025-09-04T16:25:15.791983233Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4cd3c1da5d4469fbe6e7874de2a86024a0a7f87fc48aa61d70c394c8680ea385\" id:\"c539a10a2b3882fd4c1529cc05b8fefc3c9995b87b428caf76e485e9ef8688eb\" pid:5696 exited_at:{seconds:1757003115 nanos:791647276}"