Sep 5 00:36:55.842833 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:12:48 -00 2025
Sep 5 00:36:55.842855 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5ddbf8d117777441d6c5be3659126fb3de7a68afc9e620e02a4b6c5a60c1c503
Sep 5 00:36:55.842867 kernel: BIOS-provided physical RAM map:
Sep 5 00:36:55.842874 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 5 00:36:55.842891 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 5 00:36:55.842898 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 5 00:36:55.842906 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 5 00:36:55.842922 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 5 00:36:55.842931 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 5 00:36:55.842941 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 5 00:36:55.842947 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 5 00:36:55.842954 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 5 00:36:55.842961 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 5 00:36:55.842967 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 5 00:36:55.842980 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 5 00:36:55.842990 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 5 00:36:55.843000 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 5 00:36:55.843007 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 5 00:36:55.843014 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 5 00:36:55.843021 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 5 00:36:55.843028 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 5 00:36:55.843035 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 5 00:36:55.843042 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 5 00:36:55.843049 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 5 00:36:55.843056 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 5 00:36:55.843065 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 5 00:36:55.843072 kernel: NX (Execute Disable) protection: active
Sep 5 00:36:55.843079 kernel: APIC: Static calls initialized
Sep 5 00:36:55.843086 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 5 00:36:55.843094 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 5 00:36:55.843101 kernel: extended physical RAM map:
Sep 5 00:36:55.843108 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 5 00:36:55.843115 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 5 00:36:55.843122 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 5 00:36:55.843129 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 5 00:36:55.843136 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 5 00:36:55.843145 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 5 00:36:55.843152 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 5 00:36:55.843160 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 5 00:36:55.843167 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 5 00:36:55.843177 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 5 00:36:55.843185 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 5 00:36:55.843195 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 5 00:36:55.843202 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 5 00:36:55.843210 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 5 00:36:55.843217 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 5 00:36:55.843224 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 5 00:36:55.843232 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 5 00:36:55.843239 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 5 00:36:55.843247 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 5 00:36:55.843254 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 5 00:36:55.843261 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 5 00:36:55.843271 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 5 00:36:55.843278 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 5 00:36:55.843286 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 5 00:36:55.843293 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 5 00:36:55.843300 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 5 00:36:55.843307 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 5 00:36:55.843317 kernel: efi: EFI v2.7 by EDK II
Sep 5 00:36:55.843324 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 5 00:36:55.843332 kernel: random: crng init done
Sep 5 00:36:55.843341 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 5 00:36:55.843349 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 5 00:36:55.843361 kernel: secureboot: Secure boot disabled
Sep 5 00:36:55.843368 kernel: SMBIOS 2.8 present.
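A quick way to sanity-check the map above: the RAM the firmware actually hands to the kernel is the sum of the "usable" ranges. A minimal Python sketch, assuming this log has been saved to boot.log (that path is an assumption):

    # Sum the "usable" ranges from the BIOS-e820 lines; the ranges are
    # inclusive, so each one spans (end - start + 1) bytes. For this VM the
    # total comes to roughly 2.4 GiB, consistent with the "Memory:
    # 2422676K/2565800K available" line later in the log.
    import re

    usable = 0
    pattern = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.*)")
    with open("boot.log") as log:
        for line in log:
            match = pattern.search(line)
            if match and match.group(3).strip() == "usable":
                start, end = int(match.group(1), 16), int(match.group(2), 16)
                usable += end - start + 1
    print(f"usable RAM: {usable / 2**30:.2f} GiB")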
Sep 5 00:36:55.843376 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 5 00:36:55.843383 kernel: DMI: Memory slots populated: 1/1
Sep 5 00:36:55.843391 kernel: Hypervisor detected: KVM
Sep 5 00:36:55.843400 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 5 00:36:55.843408 kernel: kvm-clock: using sched offset of 6222220798 cycles
Sep 5 00:36:55.843416 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 5 00:36:55.843426 kernel: tsc: Detected 2794.748 MHz processor
Sep 5 00:36:55.843434 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 5 00:36:55.843443 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 5 00:36:55.843454 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 5 00:36:55.843462 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 5 00:36:55.843470 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 5 00:36:55.843477 kernel: Using GB pages for direct mapping
Sep 5 00:36:55.843485 kernel: ACPI: Early table checksum verification disabled
Sep 5 00:36:55.843493 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 5 00:36:55.843500 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 5 00:36:55.843508 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:55.843516 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:55.843525 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 5 00:36:55.843533 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:55.843541 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:55.843548 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:55.843556 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:36:55.843564 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 5 00:36:55.843571 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 5 00:36:55.843579 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 5 00:36:55.843586 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 5 00:36:55.843596 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 5 00:36:55.843604 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 5 00:36:55.843611 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 5 00:36:55.843619 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 5 00:36:55.843626 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 5 00:36:55.843634 kernel: No NUMA configuration found
Sep 5 00:36:55.843641 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 5 00:36:55.843649 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 5 00:36:55.843671 kernel: Zone ranges:
Sep 5 00:36:55.843681 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 5 00:36:55.843689 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 5 00:36:55.843696 kernel: Normal empty
Sep 5 00:36:55.843704 kernel: Device empty
Sep 5 00:36:55.843711 kernel: Movable zone start for each node
Sep 5 00:36:55.843719 kernel: Early memory node ranges
Sep 5 00:36:55.843727 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 5 00:36:55.843734 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 5 00:36:55.843744 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 5 00:36:55.843754 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 5 00:36:55.843761 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 5 00:36:55.843769 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 5 00:36:55.843777 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 5 00:36:55.843784 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 5 00:36:55.843792 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 5 00:36:55.843800 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 00:36:55.843816 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 5 00:36:55.843832 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 5 00:36:55.843840 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 00:36:55.843848 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 5 00:36:55.843856 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 5 00:36:55.843866 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 5 00:36:55.843874 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 5 00:36:55.843882 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 5 00:36:55.843890 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 5 00:36:55.843898 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 5 00:36:55.843908 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 5 00:36:55.843917 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 5 00:36:55.843925 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 5 00:36:55.843933 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 5 00:36:55.843941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 5 00:36:55.843948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 5 00:36:55.843956 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 5 00:36:55.843964 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 5 00:36:55.843972 kernel: TSC deadline timer available
Sep 5 00:36:55.843982 kernel: CPU topo: Max. logical packages: 1
Sep 5 00:36:55.843990 kernel: CPU topo: Max. logical dies: 1
Sep 5 00:36:55.843998 kernel: CPU topo: Max. dies per package: 1
Sep 5 00:36:55.844005 kernel: CPU topo: Max. threads per core: 1
Sep 5 00:36:55.844013 kernel: CPU topo: Num. cores per package: 4
Sep 5 00:36:55.844021 kernel: CPU topo: Num. threads per package: 4
Sep 5 00:36:55.844029 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 5 00:36:55.844037 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 5 00:36:55.844044 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 5 00:36:55.844055 kernel: kvm-guest: setup PV sched yield
Sep 5 00:36:55.844063 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 5 00:36:55.844070 kernel: Booting paravirtualized kernel on KVM
Sep 5 00:36:55.844079 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 5 00:36:55.844087 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 5 00:36:55.844095 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 5 00:36:55.844103 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 5 00:36:55.844111 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 5 00:36:55.844119 kernel: kvm-guest: PV spinlocks enabled
Sep 5 00:36:55.844129 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 5 00:36:55.844138 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5ddbf8d117777441d6c5be3659126fb3de7a68afc9e620e02a4b6c5a60c1c503
Sep 5 00:36:55.844149 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 00:36:55.844157 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 00:36:55.844165 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 00:36:55.844173 kernel: Fallback order for Node 0: 0
Sep 5 00:36:55.844181 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 5 00:36:55.844189 kernel: Policy zone: DMA32
Sep 5 00:36:55.844199 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 00:36:55.844207 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 00:36:55.844215 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 5 00:36:55.844222 kernel: ftrace: allocated 157 pages with 5 groups
Sep 5 00:36:55.844230 kernel: Dynamic Preempt: voluntary
Sep 5 00:36:55.844238 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 00:36:55.844247 kernel: rcu: RCU event tracing is enabled.
Sep 5 00:36:55.844255 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 00:36:55.844263 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 00:36:55.844274 kernel: Rude variant of Tasks RCU enabled.
Sep 5 00:36:55.844282 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 00:36:55.844290 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 00:36:55.844300 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 00:36:55.844308 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:36:55.844316 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:36:55.844324 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
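The "Kernel command line" entry above is what drives the dm-verity setup for /usr later in this log: verity.usrhash pins the expected root hash and verity.usr names the partition by PARTUUID. A minimal sketch for pulling such key=value parameters out of /proc/cmdline on a running system (the function name is illustrative, not anything Flatcar ships):

    # Parse /proc/cmdline into a dict; tokens without "=" (bare flags) are
    # skipped, and a value like "LABEL=ROOT" keeps its own "=" intact.
    def cmdline_params(cmdline: str) -> dict[str, str]:
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            if sep:
                params[key] = value
        return params

    with open("/proc/cmdline") as f:
        params = cmdline_params(f.read())
    print(params.get("verity.usrhash"))  # 5ddbf8d1... on this boot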
Sep 5 00:36:55.844332 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 5 00:36:55.844340 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 00:36:55.844350 kernel: Console: colour dummy device 80x25
Sep 5 00:36:55.844358 kernel: printk: legacy console [ttyS0] enabled
Sep 5 00:36:55.844366 kernel: ACPI: Core revision 20240827
Sep 5 00:36:55.844374 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 5 00:36:55.844382 kernel: APIC: Switch to symmetric I/O mode setup
Sep 5 00:36:55.844390 kernel: x2apic enabled
Sep 5 00:36:55.844398 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 5 00:36:55.844406 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 5 00:36:55.844414 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 5 00:36:55.844422 kernel: kvm-guest: setup PV IPIs
Sep 5 00:36:55.844432 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 5 00:36:55.844440 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 5 00:36:55.844448 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 5 00:36:55.844456 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 5 00:36:55.844464 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 5 00:36:55.844471 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 5 00:36:55.844479 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 5 00:36:55.844487 kernel: Spectre V2 : Mitigation: Retpolines
Sep 5 00:36:55.844497 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 5 00:36:55.844505 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 5 00:36:55.844513 kernel: active return thunk: retbleed_return_thunk
Sep 5 00:36:55.844521 kernel: RETBleed: Mitigation: untrained return thunk
Sep 5 00:36:55.844532 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 5 00:36:55.844540 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 5 00:36:55.844548 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 5 00:36:55.844557 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 5 00:36:55.844565 kernel: active return thunk: srso_return_thunk
Sep 5 00:36:55.844575 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 5 00:36:55.844583 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 5 00:36:55.844591 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 5 00:36:55.844599 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 5 00:36:55.844606 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 5 00:36:55.844615 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 5 00:36:55.844623 kernel: Freeing SMP alternatives memory: 32K
Sep 5 00:36:55.844630 kernel: pid_max: default: 32768 minimum: 301
Sep 5 00:36:55.844638 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 5 00:36:55.844648 kernel: landlock: Up and running.
Sep 5 00:36:55.844668 kernel: SELinux: Initializing.
Sep 5 00:36:55.844676 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:36:55.844684 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:36:55.844692 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 5 00:36:55.844700 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 5 00:36:55.844708 kernel: ... version: 0
Sep 5 00:36:55.844716 kernel: ... bit width: 48
Sep 5 00:36:55.844723 kernel: ... generic registers: 6
Sep 5 00:36:55.844734 kernel: ... value mask: 0000ffffffffffff
Sep 5 00:36:55.844742 kernel: ... max period: 00007fffffffffff
Sep 5 00:36:55.844750 kernel: ... fixed-purpose events: 0
Sep 5 00:36:55.844757 kernel: ... event mask: 000000000000003f
Sep 5 00:36:55.844765 kernel: signal: max sigframe size: 1776
Sep 5 00:36:55.844773 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 00:36:55.844781 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 00:36:55.844792 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 5 00:36:55.844800 kernel: smp: Bringing up secondary CPUs ...
Sep 5 00:36:55.844817 kernel: smpboot: x86: Booting SMP configuration:
Sep 5 00:36:55.844825 kernel: .... node #0, CPUs: #1 #2 #3
Sep 5 00:36:55.844832 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 00:36:55.844841 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 5 00:36:55.844849 kernel: Memory: 2422676K/2565800K available (14336K kernel code, 2428K rwdata, 9956K rodata, 54044K init, 2924K bss, 137196K reserved, 0K cma-reserved)
Sep 5 00:36:55.844857 kernel: devtmpfs: initialized
Sep 5 00:36:55.844865 kernel: x86/mm: Memory block size: 128MB
Sep 5 00:36:55.844873 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 5 00:36:55.844881 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 5 00:36:55.844891 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 5 00:36:55.844899 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 5 00:36:55.844907 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 5 00:36:55.844915 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 5 00:36:55.844923 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 00:36:55.844931 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 00:36:55.844939 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 00:36:55.844946 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 00:36:55.844957 kernel: audit: initializing netlink subsys (disabled)
Sep 5 00:36:55.844965 kernel: audit: type=2000 audit(1757032612.739:1): state=initialized audit_enabled=0 res=1
Sep 5 00:36:55.844973 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 00:36:55.844981 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 5 00:36:55.844988 kernel: cpuidle: using governor menu
Sep 5 00:36:55.844996 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 00:36:55.845004 kernel: dca service started, version 1.12.1
Sep 5 00:36:55.845012 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 5 00:36:55.845020 kernel: PCI: Using configuration type 1 for base access
Sep 5 00:36:55.845030 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 5 00:36:55.845038 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 00:36:55.845046 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 00:36:55.845054 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 00:36:55.845062 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 00:36:55.845070 kernel: ACPI: Added _OSI(Module Device)
Sep 5 00:36:55.845078 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 00:36:55.845086 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 00:36:55.845094 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 00:36:55.845104 kernel: ACPI: Interpreter enabled
Sep 5 00:36:55.845112 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 5 00:36:55.845120 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 5 00:36:55.845128 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 5 00:36:55.845136 kernel: PCI: Using E820 reservations for host bridge windows
Sep 5 00:36:55.845144 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 5 00:36:55.845152 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 00:36:55.845358 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 00:36:55.845490 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 5 00:36:55.845612 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 5 00:36:55.845623 kernel: PCI host bridge to bus 0000:00
Sep 5 00:36:55.845851 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 5 00:36:55.845969 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 5 00:36:55.846080 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 5 00:36:55.846195 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 5 00:36:55.846312 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 5 00:36:55.846439 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 5 00:36:55.846552 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 00:36:55.846728 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 5 00:36:55.846907 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 5 00:36:55.847031 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 5 00:36:55.847150 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 5 00:36:55.847275 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 5 00:36:55.847405 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 5 00:36:55.847550 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 5 00:36:55.847999 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 5 00:36:55.848134 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 5 00:36:55.848255 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 5 00:36:55.848395 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 5 00:36:55.848524 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 5 00:36:55.848645 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 5 00:36:55.849330 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 5 00:36:55.849482 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 5 00:36:55.849606 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 5 00:36:55.849751 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 5 00:36:55.849888 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 5 00:36:55.850009 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 5 00:36:55.850143 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 5 00:36:55.850264 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 5 00:36:55.850418 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 5 00:36:55.850543 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 5 00:36:55.850703 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 5 00:36:55.850862 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 5 00:36:55.850986 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 5 00:36:55.850997 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 5 00:36:55.851005 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 5 00:36:55.851013 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 5 00:36:55.851021 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 5 00:36:55.851029 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 5 00:36:55.851037 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 5 00:36:55.851048 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 5 00:36:55.851056 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 5 00:36:55.851064 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 5 00:36:55.851072 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 5 00:36:55.851080 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 5 00:36:55.851088 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 5 00:36:55.851095 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 5 00:36:55.851103 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 5 00:36:55.851111 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 5 00:36:55.851121 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 5 00:36:55.851129 kernel: iommu: Default domain type: Translated
Sep 5 00:36:55.851137 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 5 00:36:55.851145 kernel: efivars: Registered efivars operations
Sep 5 00:36:55.851153 kernel: PCI: Using ACPI for IRQ routing
Sep 5 00:36:55.851161 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 5 00:36:55.851169 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 5 00:36:55.851177 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 5 00:36:55.851184 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 5 00:36:55.851194 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 5 00:36:55.851202 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 5 00:36:55.851210 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 5 00:36:55.851218 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 5 00:36:55.851226 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 5 00:36:55.851348 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 5 00:36:55.851468 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 5 00:36:55.851587 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 5 00:36:55.851601 kernel: vgaarb: loaded
Sep 5 00:36:55.851609 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 5 00:36:55.851617 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 5 00:36:55.851625 kernel: clocksource: Switched to clocksource kvm-clock
Sep 5 00:36:55.851633 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 00:36:55.851641 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 00:36:55.851649 kernel: pnp: PnP ACPI init
Sep 5 00:36:55.851858 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 5 00:36:55.851876 kernel: pnp: PnP ACPI: found 6 devices
Sep 5 00:36:55.851885 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 5 00:36:55.851893 kernel: NET: Registered PF_INET protocol family
Sep 5 00:36:55.851902 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 00:36:55.851910 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 00:36:55.851918 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 00:36:55.851926 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 00:36:55.851935 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 00:36:55.851943 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 00:36:55.851953 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:36:55.851961 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:36:55.851970 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 00:36:55.851978 kernel: NET: Registered PF_XDP protocol family
Sep 5 00:36:55.852104 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 5 00:36:55.852227 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 5 00:36:55.852342 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 5 00:36:55.852452 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 5 00:36:55.852566 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 5 00:36:55.852694 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 5 00:36:55.852820 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 5 00:36:55.852932 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 5 00:36:55.852942 kernel: PCI: CLS 0 bytes, default 64
Sep 5 00:36:55.852951 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 5 00:36:55.852960 kernel: Initialise system trusted keyrings
Sep 5 00:36:55.852972 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 00:36:55.852980 kernel: Key type asymmetric registered
Sep 5 00:36:55.852989 kernel: Asymmetric key parser 'x509' registered
Sep 5 00:36:55.852997 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 00:36:55.853005 kernel: io scheduler mq-deadline registered
Sep 5 00:36:55.853013 kernel: io scheduler kyber registered
Sep 5 00:36:55.853022 kernel: io scheduler bfq registered
Sep 5 00:36:55.853032 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 5 00:36:55.853041 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 5 00:36:55.853049 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 5 00:36:55.853058 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 5 00:36:55.853066 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 00:36:55.853075 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 5 00:36:55.853083 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 5 00:36:55.853092 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 5 00:36:55.853100 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 5 00:36:55.853249 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 5 00:36:55.853263 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 5 00:36:55.853377 kernel: rtc_cmos 00:04: registered as rtc0
Sep 5 00:36:55.853491 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T00:36:55 UTC (1757032615)
Sep 5 00:36:55.853604 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 5 00:36:55.853615 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 5 00:36:55.853623 kernel: efifb: probing for efifb
Sep 5 00:36:55.853632 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 5 00:36:55.853643 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 5 00:36:55.853666 kernel: efifb: scrolling: redraw
Sep 5 00:36:55.853687 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 5 00:36:55.853696 kernel: Console: switching to colour frame buffer device 160x50
Sep 5 00:36:55.853704 kernel: fb0: EFI VGA frame buffer device
Sep 5 00:36:55.853713 kernel: pstore: Using crash dump compression: deflate
Sep 5 00:36:55.853721 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 5 00:36:55.853730 kernel: NET: Registered PF_INET6 protocol family
Sep 5 00:36:55.853738 kernel: Segment Routing with IPv6
Sep 5 00:36:55.853749 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 00:36:55.853757 kernel: NET: Registered PF_PACKET protocol family
Sep 5 00:36:55.853766 kernel: Key type dns_resolver registered
Sep 5 00:36:55.853773 kernel: IPI shorthand broadcast: enabled
Sep 5 00:36:55.853782 kernel: sched_clock: Marking stable (3835004707, 266081040)->(4129855463, -28769716)
Sep 5 00:36:55.853790 kernel: registered taskstats version 1
Sep 5 00:36:55.853798 kernel: Loading compiled-in X.509 certificates
Sep 5 00:36:55.853815 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 55c9ce6358d6eed45ca94030a2308729ee6a249f'
Sep 5 00:36:55.853823 kernel: Demotion targets for Node 0: null
Sep 5 00:36:55.853834 kernel: Key type .fscrypt registered
Sep 5 00:36:55.853842 kernel: Key type fscrypt-provisioning registered
Sep 5 00:36:55.853851 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 00:36:55.853859 kernel: ima: Allocated hash algorithm: sha1
Sep 5 00:36:55.853867 kernel: ima: No architecture policies found
Sep 5 00:36:55.853876 kernel: clk: Disabling unused clocks
Sep 5 00:36:55.853884 kernel: Warning: unable to open an initial console.
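The rtc_cmos entry above quotes both the wall-clock time and the Unix epoch it corresponds to, so the pair can be checked directly (plain Python, no assumptions beyond the two values printed in the log):

    # Verify that epoch 1757032615 really is 2025-09-05T00:36:55 UTC.
    from datetime import datetime, timezone

    stamp = datetime.fromtimestamp(1757032615, tz=timezone.utc)
    assert stamp.isoformat() == "2025-09-05T00:36:55+00:00"
    print(stamp)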
Sep 5 00:36:55.853893 kernel: Freeing unused kernel image (initmem) memory: 54044K
Sep 5 00:36:55.853904 kernel: Write protecting the kernel read-only data: 24576k
Sep 5 00:36:55.853912 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Sep 5 00:36:55.853920 kernel: Run /init as init process
Sep 5 00:36:55.853928 kernel: with arguments:
Sep 5 00:36:55.853936 kernel: /init
Sep 5 00:36:55.853944 kernel: with environment:
Sep 5 00:36:55.853952 kernel: HOME=/
Sep 5 00:36:55.853960 kernel: TERM=linux
Sep 5 00:36:55.853969 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 00:36:55.853981 systemd[1]: Successfully made /usr/ read-only.
Sep 5 00:36:55.853995 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 00:36:55.854004 systemd[1]: Detected virtualization kvm.
Sep 5 00:36:55.854012 systemd[1]: Detected architecture x86-64.
Sep 5 00:36:55.854021 systemd[1]: Running in initrd.
Sep 5 00:36:55.854029 systemd[1]: No hostname configured, using default hostname.
Sep 5 00:36:55.854038 systemd[1]: Hostname set to <localhost>.
Sep 5 00:36:55.854046 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:36:55.854057 systemd[1]: Queued start job for default target initrd.target.
Sep 5 00:36:55.854065 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:36:55.854074 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:36:55.854083 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 00:36:55.854092 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:36:55.854101 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 00:36:55.854110 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 00:36:55.854122 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 00:36:55.854131 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 00:36:55.854140 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:36:55.854148 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:36:55.854157 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:36:55.854165 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:36:55.854173 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:36:55.854182 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:36:55.854193 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:36:55.854201 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:36:55.854210 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 00:36:55.854219 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 5 00:36:55.854228 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
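The "Initializing machine ID from VM UUID" step uses the SMBIOS product UUID that QEMU exposes to the guest. A sketch of where to find the two values for comparison (standard sysfs and /etc paths; reading product_uuid usually requires root, and the exact formatting systemd applies is not shown here):

    # Compare the VM's SMBIOS product UUID with the machine ID systemd
    # derived from it on first boot.
    from pathlib import Path

    vm_uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id = Path("/etc/machine-id").read_text().strip()
    print(vm_uuid, machine_id)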
Sep 5 00:36:55.854236 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:36:55.854245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:36:55.854254 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:36:55.854264 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 00:36:55.854306 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:36:55.854316 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 00:36:55.854328 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 5 00:36:55.854340 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 00:36:55.854351 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:36:55.854363 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:36:55.854374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:36:55.854386 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 00:36:55.854397 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:36:55.854407 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 00:36:55.854416 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 00:36:55.854455 systemd-journald[220]: Collecting audit messages is disabled.
Sep 5 00:36:55.854478 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:36:55.854488 systemd-journald[220]: Journal started
Sep 5 00:36:55.854512 systemd-journald[220]: Runtime Journal (/run/log/journal/1182dcad1b084ef0a045fe68cbc5e8dc) is 6M, max 48.4M, 42.4M free.
Sep 5 00:36:55.840804 systemd-modules-load[223]: Inserted module 'overlay'
Sep 5 00:36:55.858692 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:36:55.860231 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:36:55.865932 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:36:55.868853 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:36:55.873075 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:36:55.875409 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 00:36:55.877502 systemd-modules-load[223]: Inserted module 'br_netfilter'
Sep 5 00:36:55.878790 kernel: Bridge firewalling registered
Sep 5 00:36:55.881245 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:36:55.882823 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:36:55.892417 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 5 00:36:55.893742 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:36:55.897923 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:36:55.900506 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:36:55.901616 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 00:36:55.911861 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:36:55.916197 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:36:55.929645 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5ddbf8d117777441d6c5be3659126fb3de7a68afc9e620e02a4b6c5a60c1c503
Sep 5 00:36:55.977323 systemd-resolved[266]: Positive Trust Anchors:
Sep 5 00:36:55.977342 systemd-resolved[266]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:36:55.977372 systemd-resolved[266]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:36:55.980409 systemd-resolved[266]: Defaulting to hostname 'linux'.
Sep 5 00:36:55.981980 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:36:56.003533 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:36:56.091703 kernel: SCSI subsystem initialized
Sep 5 00:36:56.102691 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 00:36:56.114698 kernel: iscsi: registered transport (tcp)
Sep 5 00:36:56.138694 kernel: iscsi: registered transport (qla4xxx)
Sep 5 00:36:56.138758 kernel: QLogic iSCSI HBA Driver
Sep 5 00:36:56.161286 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 00:36:56.191863 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 00:36:56.193855 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 00:36:56.252036 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:36:56.254851 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 00:36:56.315702 kernel: raid6: avx2x4 gen() 27038 MB/s
Sep 5 00:36:56.332693 kernel: raid6: avx2x2 gen() 24808 MB/s
Sep 5 00:36:56.350003 kernel: raid6: avx2x1 gen() 16555 MB/s
Sep 5 00:36:56.350052 kernel: raid6: using algorithm avx2x4 gen() 27038 MB/s
Sep 5 00:36:56.367806 kernel: raid6: .... xor() 8113 MB/s, rmw enabled
Sep 5 00:36:56.367855 kernel: raid6: using avx2x2 recovery algorithm
Sep 5 00:36:56.388695 kernel: xor: automatically using best checksumming function avx
Sep 5 00:36:56.553699 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 00:36:56.563135 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:36:56.566264 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:36:56.596276 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Sep 5 00:36:56.601914 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:36:56.605771 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 00:36:56.637849 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation
Sep 5 00:36:56.672172 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:36:56.675201 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:36:56.754972 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:36:56.759994 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 00:36:56.819688 kernel: cryptd: max_cpu_qlen set to 1000
Sep 5 00:36:56.819756 kernel: libata version 3.00 loaded.
Sep 5 00:36:56.819769 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 5 00:36:56.825766 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 5 00:36:56.832707 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 00:36:56.836479 kernel: AES CTR mode by8 optimization enabled
Sep 5 00:36:56.836926 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:36:56.842715 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 00:36:56.842733 kernel: GPT:9289727 != 19775487
Sep 5 00:36:56.842744 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 00:36:56.842755 kernel: GPT:9289727 != 19775487
Sep 5 00:36:56.842765 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 00:36:56.842775 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:36:56.842793 kernel: ahci 0000:00:1f.2: version 3.0
Sep 5 00:36:56.837058 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:36:56.847519 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 5 00:36:56.847534 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 5 00:36:56.847712 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 5 00:36:56.848557 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 5 00:36:56.850209 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:36:56.854586 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:36:56.857640 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 5 00:36:56.863253 kernel: scsi host0: ahci
Sep 5 00:36:56.865692 kernel: scsi host1: ahci
Sep 5 00:36:56.868276 kernel: scsi host2: ahci
Sep 5 00:36:56.868488 kernel: scsi host3: ahci
Sep 5 00:36:56.869803 kernel: scsi host4: ahci
Sep 5 00:36:56.873109 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:36:56.873522 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:36:56.877499 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
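The GPT warnings above are typical for a Flatcar image copied onto a larger virtual disk: the backup GPT header still sits at the last LBA of the original image rather than of the 10.1 GB disk, which is why the kernel prints "9289727 != 19775487". The disk-uuid step further down in this log rewrites both headers. The arithmetic, as a small Python check:

    # The image's backup header is at LBA 9289727; on this disk it should be
    # at LBA 19775487. Converting both to sizes shows the mismatch is just
    # "image smaller than disk".
    SECTOR = 512
    image_last_lba = 9289727
    disk_last_lba = 19775487
    print(f"{(image_last_lba + 1) * SECTOR / 2**30:.2f} GiB image")  # ~4.43
    print(f"{(disk_last_lba + 1) * SECTOR / 2**30:.2f} GiB disk")    # ~9.43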
Sep 5 00:36:56.890493 kernel: scsi host5: ahci
Sep 5 00:36:56.890707 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Sep 5 00:36:56.890725 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Sep 5 00:36:56.890736 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Sep 5 00:36:56.890746 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Sep 5 00:36:56.890756 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Sep 5 00:36:56.890767 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Sep 5 00:36:56.905443 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 00:36:56.913877 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 5 00:36:56.929272 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 00:36:56.937361 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 00:36:56.938593 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 00:36:56.942672 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 00:36:56.945986 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:36:56.968280 disk-uuid[634]: Primary Header is updated.
Sep 5 00:36:56.968280 disk-uuid[634]: Secondary Entries is updated.
Sep 5 00:36:56.968280 disk-uuid[634]: Secondary Header is updated.
Sep 5 00:36:56.972698 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:36:56.980678 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:36:57.064329 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:36:57.202043 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 5 00:36:57.202119 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 5 00:36:57.202144 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 5 00:36:57.203685 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 5 00:36:57.203718 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 5 00:36:57.204681 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 5 00:36:57.205690 kernel: ata3.00: LPM support broken, forcing max_power
Sep 5 00:36:57.205703 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 5 00:36:57.206774 kernel: ata3.00: applying bridge limits
Sep 5 00:36:57.207961 kernel: ata3.00: LPM support broken, forcing max_power
Sep 5 00:36:57.207973 kernel: ata3.00: configured for UDMA/100
Sep 5 00:36:57.208689 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 5 00:36:57.269712 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 5 00:36:57.269982 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 5 00:36:57.295689 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 5 00:36:57.736213 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:36:57.738266 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:36:57.739603 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:36:57.742502 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:36:57.745742 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 00:36:57.778895 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:36:57.979694 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:36:57.980875 disk-uuid[637]: The operation has completed successfully.
Sep 5 00:36:58.026912 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 00:36:58.027067 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 00:36:58.073443 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 00:36:58.106333 sh[667]: Success
Sep 5 00:36:58.127709 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 00:36:58.127896 kernel: device-mapper: uevent: version 1.0.3
Sep 5 00:36:58.127918 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 5 00:36:58.139703 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 5 00:36:58.174963 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 00:36:58.180556 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 00:36:58.208076 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 00:36:58.214712 kernel: BTRFS: device fsid bbfaff22-5589-4cab-94aa-ce3e6be0b7e8 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (679)
Sep 5 00:36:58.216978 kernel: BTRFS info (device dm-0): first mount of filesystem bbfaff22-5589-4cab-94aa-ce3e6be0b7e8
Sep 5 00:36:58.217018 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:36:58.222850 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 00:36:58.222915 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 5 00:36:58.224367 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 00:36:58.225967 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 5 00:36:58.227424 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 00:36:58.228606 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 00:36:58.232036 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 00:36:58.269695 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710)
Sep 5 00:36:58.269783 kernel: BTRFS info (device vda6): first mount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb
Sep 5 00:36:58.271431 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:36:58.275422 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 00:36:58.275500 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 00:36:58.281726 kernel: BTRFS info (device vda6): last unmount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb
Sep 5 00:36:58.283171 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 00:36:58.285750 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 00:36:58.389112 ignition[753]: Ignition 2.21.0
Sep 5 00:36:58.389128 ignition[753]: Stage: fetch-offline
Sep 5 00:36:58.389173 ignition[753]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:36:58.389187 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:36:58.389305 ignition[753]: parsed url from cmdline: ""
Sep 5 00:36:58.389311 ignition[753]: no config URL provided
Sep 5 00:36:58.389319 ignition[753]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 00:36:58.389331 ignition[753]: no config at "/usr/lib/ignition/user.ign"
Sep 5 00:36:58.389359 ignition[753]: op(1): [started] loading QEMU firmware config module
Sep 5 00:36:58.389366 ignition[753]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 00:36:58.398406 ignition[753]: op(1): [finished] loading QEMU firmware config module
Sep 5 00:36:58.400244 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:36:58.406079 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:36:58.442695 ignition[753]: parsing config with SHA512: a1935a3ec9ba765a6e67d926cd1d3fdc0aa0abcb36a3893eda1aa23d5545cbea021892308fa48d1f7a3065413cbe5ea7d2547a13bab89b3050cf456fe5a37170
Sep 5 00:36:58.446542 unknown[753]: fetched base config from "system"
Sep 5 00:36:58.446557 unknown[753]: fetched user config from "qemu"
Sep 5 00:36:58.447106 ignition[753]: fetch-offline: fetch-offline passed
Sep 5 00:36:58.450796 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:36:58.447192 ignition[753]: Ignition finished successfully
Sep 5 00:36:58.453903 systemd-networkd[856]: lo: Link UP
Sep 5 00:36:58.453908 systemd-networkd[856]: lo: Gained carrier
Sep 5 00:36:58.455642 systemd-networkd[856]: Enumeration completed
Sep 5 00:36:58.455817 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:36:58.456091 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:36:58.456096 systemd-networkd[856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:36:58.457972 systemd-networkd[856]: eth0: Link UP
Sep 5 00:36:58.457986 systemd[1]: Reached target network.target - Network.
Sep 5 00:36:58.458141 systemd-networkd[856]: eth0: Gained carrier
Sep 5 00:36:58.458159 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:36:58.459811 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 00:36:58.460932 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 00:36:58.475707 systemd-networkd[856]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:36:58.496520 ignition[860]: Ignition 2.21.0
Sep 5 00:36:58.496539 ignition[860]: Stage: kargs
Sep 5 00:36:58.497088 ignition[860]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:36:58.497100 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:36:58.501131 ignition[860]: kargs: kargs passed
Sep 5 00:36:58.501216 ignition[860]: Ignition finished successfully
Sep 5 00:36:58.507089 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 00:36:58.508485 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 00:36:58.538245 ignition[869]: Ignition 2.21.0
Sep 5 00:36:58.538260 ignition[869]: Stage: disks
Sep 5 00:36:58.538389 ignition[869]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:36:58.538400 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:36:58.540016 ignition[869]: disks: disks passed
Sep 5 00:36:58.540087 ignition[869]: Ignition finished successfully
Sep 5 00:36:58.543723 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 00:36:58.545339 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 00:36:58.547322 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 00:36:58.548432 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:36:58.549161 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 00:36:58.549506 systemd[1]: Reached target basic.target - Basic System.
Sep 5 00:36:58.556187 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 00:36:58.589555 systemd-resolved[266]: Detected conflict on linux IN A 10.0.0.120
Sep 5 00:36:58.589577 systemd-resolved[266]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Sep 5 00:36:58.597461 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 5 00:36:58.606082 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 00:36:58.609520 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 00:36:58.723693 kernel: EXT4-fs (vda9): mounted filesystem a99dab41-6cdd-4037-a941-eeee48403b9e r/w with ordered data mode. Quota mode: none.
Sep 5 00:36:58.725011 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 00:36:58.725643 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:36:58.728423 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:36:58.731745 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 00:36:58.733652 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 00:36:58.733737 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 00:36:58.735564 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:36:58.747825 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 00:36:58.750500 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 00:36:58.754681 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886)
Sep 5 00:36:58.754712 kernel: BTRFS info (device vda6): first mount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb
Sep 5 00:36:58.756243 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:36:58.759053 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 00:36:58.759074 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 00:36:58.761477 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
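The Ignition entries above follow a fixed shape: each run announces "Stage: <name>" and, on success, logs "<name>: <name> passed" followed by "Ignition finished successfully". A minimal Python sketch (an illustration, not part of the captured log; it assumes only the two message shapes visible here) that tallies stage outcomes from such a journal dump:

import re

def ignition_stages(journal_text):
    # Maps stage name -> 'started' or 'passed', based on the message
    # shapes seen in this log ("Stage: disks", "disks: disks passed").
    stages = {}
    for line in journal_text.splitlines():
        m = re.search(r'ignition\[\d+\]: (?:INFO : )?Stage: ([\w-]+)', line)
        if m:
            stages.setdefault(m.group(1), 'started')
        m = re.search(r'ignition\[\d+\]: (?:INFO : )?([\w-]+): \1 passed', line)
        if m:
            stages[m.group(1)] = 'passed'
    return stages

# For the excerpt up to this point this would yield:
# {'fetch-offline': 'passed', 'kargs': 'passed', 'disks': 'passed'}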
Sep 5 00:36:58.799152 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 00:36:58.803637 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory
Sep 5 00:36:58.808802 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 00:36:58.813191 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 00:36:58.915604 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 00:36:58.917897 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 00:36:58.920078 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 00:36:58.946684 kernel: BTRFS info (device vda6): last unmount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb
Sep 5 00:36:58.967817 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 00:36:58.987204 ignition[1000]: INFO : Ignition 2.21.0
Sep 5 00:36:58.987204 ignition[1000]: INFO : Stage: mount
Sep 5 00:36:58.989054 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:36:58.989054 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:36:58.991198 ignition[1000]: INFO : mount: mount passed
Sep 5 00:36:58.991198 ignition[1000]: INFO : Ignition finished successfully
Sep 5 00:36:58.992324 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 00:36:58.994835 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 00:36:59.214953 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 00:36:59.217013 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:36:59.246697 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012)
Sep 5 00:36:59.246766 kernel: BTRFS info (device vda6): first mount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb
Sep 5 00:36:59.248686 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:36:59.251683 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 00:36:59.251721 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 00:36:59.253330 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:36:59.283684 ignition[1029]: INFO : Ignition 2.21.0
Sep 5 00:36:59.283684 ignition[1029]: INFO : Stage: files
Sep 5 00:36:59.285723 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:36:59.285723 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:36:59.289544 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 00:36:59.291424 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 00:36:59.291424 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 00:36:59.295296 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 00:36:59.296777 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 00:36:59.298495 unknown[1029]: wrote ssh authorized keys file for user: core
Sep 5 00:36:59.299855 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 00:36:59.301700 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 5 00:36:59.303878 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 5 00:36:59.351784 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 00:36:59.510673 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 5 00:36:59.510673 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 00:36:59.514565 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 00:36:59.514565 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:36:59.514565 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:36:59.514565 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:36:59.514565 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:36:59.514565 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:36:59.514565 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:36:59.642924 systemd-networkd[856]: eth0: Gained IPv6LL
Sep 5 00:36:59.724444 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:36:59.726334 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:36:59.728037 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 5 00:36:59.746833 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 5 00:36:59.746833 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 5 00:36:59.751825 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 5 00:37:01.571690 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 5 00:37:03.248078 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 5 00:37:03.248078 ignition[1029]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 5 00:37:03.252996 ignition[1029]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:37:03.302341 ignition[1029]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:37:03.302341 ignition[1029]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 5 00:37:03.302341 ignition[1029]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 5 00:37:03.307730 ignition[1029]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:37:03.307730 ignition[1029]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:37:03.307730 ignition[1029]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 5 00:37:03.307730 ignition[1029]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:37:03.339836 ignition[1029]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:37:03.345603 ignition[1029]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:37:03.350897 ignition[1029]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:37:03.350897 ignition[1029]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 00:37:03.350897 ignition[1029]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 00:37:03.350897 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:37:03.350897 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:37:03.350897 ignition[1029]: INFO : files: files passed
Sep 5 00:37:03.350897 ignition[1029]: INFO : Ignition finished successfully
Sep 5 00:37:03.350110 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 00:37:03.355473 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 00:37:03.360397 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 00:37:03.380112 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 00:37:03.380261 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 00:37:03.383524 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 5 00:37:03.387779 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:37:03.389679 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:37:03.392026 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:37:03.395911 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:37:03.396901 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 00:37:03.401717 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 00:37:03.465410 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 00:37:03.465608 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 00:37:03.468692 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 00:37:03.470769 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 00:37:03.473093 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 00:37:03.476119 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 00:37:03.521696 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:37:03.525082 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 00:37:03.693949 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:37:03.695537 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:37:03.698123 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 00:37:03.700482 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 00:37:03.700715 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:37:03.704034 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 00:37:03.706080 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 00:37:03.706437 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 00:37:03.707011 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:37:03.707358 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 00:37:03.707930 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 5 00:37:03.708295 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 00:37:03.708699 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:37:03.709292 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 00:37:03.709680 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 00:37:03.710147 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 00:37:03.710525 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 00:37:03.710729 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:37:03.731505 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:37:03.733761 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:37:03.736202 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 00:37:03.736423 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:37:03.738813 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 00:37:03.738967 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:37:03.742550 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 00:37:03.742748 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:37:03.744947 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 00:37:03.746762 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 00:37:03.747927 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:37:03.750144 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 00:37:03.751860 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 00:37:03.753901 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 00:37:03.754024 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:37:03.756727 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 00:37:03.756818 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:37:03.759491 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 00:37:03.759633 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:37:03.761633 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 00:37:03.761782 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 00:37:03.765474 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 00:37:03.767152 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 00:37:03.767289 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:37:03.769468 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 00:37:03.771622 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 00:37:03.771912 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:37:03.773873 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 00:37:03.774010 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:37:03.784542 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 00:37:03.784742 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 00:37:03.805170 ignition[1084]: INFO : Ignition 2.21.0
Sep 5 00:37:03.805170 ignition[1084]: INFO : Stage: umount
Sep 5 00:37:03.808833 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:37:03.808833 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:37:03.811694 ignition[1084]: INFO : umount: umount passed
Sep 5 00:37:03.811694 ignition[1084]: INFO : Ignition finished successfully
Sep 5 00:37:03.809562 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 00:37:03.814240 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 00:37:03.814423 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 00:37:03.815630 systemd[1]: Stopped target network.target - Network.
Sep 5 00:37:03.819719 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 00:37:03.819835 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 00:37:03.820845 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 00:37:03.820912 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 00:37:03.822735 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 00:37:03.822818 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 00:37:03.823143 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 00:37:03.823211 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 00:37:03.827387 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 00:37:03.828263 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 00:37:03.830608 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 00:37:03.830899 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 5 00:37:03.832783 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 00:37:03.832924 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 5 00:37:03.836379 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 00:37:03.836637 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 00:37:03.842305 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 5 00:37:03.843866 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 00:37:03.843992 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:37:03.849824 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 5 00:37:03.850205 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 00:37:03.850366 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 00:37:03.854546 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 5 00:37:03.855242 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 5 00:37:03.856241 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 00:37:03.856320 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:37:03.862476 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 00:37:03.862565 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 00:37:03.862640 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:37:03.865808 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 00:37:03.865866 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:37:03.867933 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 00:37:03.867984 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:37:03.869245 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:37:03.870497 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 5 00:37:03.890595 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 00:37:03.896052 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:37:03.897954 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 00:37:03.898015 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:37:03.900344 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 00:37:03.900389 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:37:03.901346 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 00:37:03.901401 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:37:03.905330 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 00:37:03.905381 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:37:03.906372 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:37:03.906433 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:37:03.915274 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 00:37:03.918538 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 5 00:37:03.918624 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 00:37:03.922283 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 00:37:03.922338 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:37:03.925891 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:37:03.925950 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:37:03.929891 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 00:37:03.932935 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 00:37:03.942562 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 00:37:03.942729 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 00:37:03.945309 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 00:37:03.948164 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 00:37:03.971791 systemd[1]: Switching root.
Sep 5 00:37:04.020218 systemd-journald[220]: Journal stopped
Sep 5 00:37:05.759432 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 5 00:37:05.759508 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 00:37:05.759535 kernel: SELinux: policy capability open_perms=1
Sep 5 00:37:05.759553 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 00:37:05.759564 kernel: SELinux: policy capability always_check_network=0
Sep 5 00:37:05.759580 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 00:37:05.759596 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 00:37:05.759608 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 00:37:05.759619 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 00:37:05.759630 kernel: SELinux: policy capability userspace_initial_context=0
Sep 5 00:37:05.759642 kernel: audit: type=1403 audit(1757032624.843:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 00:37:05.759673 systemd[1]: Successfully loaded SELinux policy in 72.141ms.
Sep 5 00:37:05.759696 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.071ms.
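With "Switching root" the initrd hands over to the real root filesystem: the initrd journal is stopped and a new journald instance takes over, which is why the SELinux policy-load entries carry later timestamps. A small sketch (assuming the "Mon D HH:MM:SS.ffffff" timestamp format used throughout this log) for measuring how long the initrd phase of such an excerpt took:

from datetime import datetime

def ts(entry):
    # The first three whitespace-separated fields form the timestamp,
    # e.g. "Sep 5 00:37:03.971791".
    return datetime.strptime(' '.join(entry.split()[:3]), '%b %d %H:%M:%S.%f')

start = ts('Sep 5 00:36:56.890493 kernel: scsi host5: ahci')
end = ts('Sep 5 00:37:03.971791 systemd[1]: Switching root.')
print(end - start)  # ~7.08 s from the top of this excerpt to the pivot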
Sep 5 00:37:05.759710 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 00:37:05.759728 systemd[1]: Detected virtualization kvm.
Sep 5 00:37:05.759740 systemd[1]: Detected architecture x86-64.
Sep 5 00:37:05.759752 systemd[1]: Detected first boot.
Sep 5 00:37:05.759764 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:37:05.759776 zram_generator::config[1129]: No configuration found.
Sep 5 00:37:05.759790 kernel: Guest personality initialized and is inactive
Sep 5 00:37:05.759802 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 5 00:37:05.759816 kernel: Initialized host personality
Sep 5 00:37:05.759832 kernel: NET: Registered PF_VSOCK protocol family
Sep 5 00:37:05.759844 systemd[1]: Populated /etc with preset unit settings.
Sep 5 00:37:05.759857 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 5 00:37:05.759869 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 5 00:37:05.759884 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 5 00:37:05.759899 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 5 00:37:05.759912 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 00:37:05.759927 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 00:37:05.759939 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 00:37:05.759956 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 00:37:05.759969 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 00:37:05.759981 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 00:37:05.759994 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 00:37:05.760009 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 00:37:05.760021 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:37:05.760033 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:37:05.760045 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 00:37:05.760064 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 00:37:05.760077 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 00:37:05.760090 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:37:05.760102 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 5 00:37:05.760114 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:37:05.760126 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:37:05.760139 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 5 00:37:05.760156 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 5 00:37:05.760173 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:37:05.760188 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 00:37:05.760201 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:37:05.760213 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:37:05.760228 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:37:05.760240 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:37:05.760252 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 00:37:05.760264 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 00:37:05.760279 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 5 00:37:05.760296 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:37:05.760308 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:37:05.760321 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:37:05.760333 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 00:37:05.760345 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 00:37:05.760357 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 00:37:05.760369 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 00:37:05.760381 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:37:05.760393 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 00:37:05.760410 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 00:37:05.760422 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 5 00:37:05.760434 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 00:37:05.760447 systemd[1]: Reached target machines.target - Containers.
Sep 5 00:37:05.760462 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 5 00:37:05.760474 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:37:05.760487 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:37:05.760502 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 5 00:37:05.760526 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:37:05.760539 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 00:37:05.760551 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:37:05.760563 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 5 00:37:05.760578 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:37:05.760591 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 00:37:05.760604 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 5 00:37:05.760616 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 5 00:37:05.760627 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 5 00:37:05.760644 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 5 00:37:05.760671 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 00:37:05.760683 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:37:05.760696 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:37:05.760707 kernel: loop: module loaded
Sep 5 00:37:05.760721 kernel: fuse: init (API version 7.41)
Sep 5 00:37:05.760733 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 00:37:05.760746 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 5 00:37:05.760761 kernel: ACPI: bus type drm_connector registered
Sep 5 00:37:05.760787 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 5 00:37:05.760909 systemd-journald[1200]: Collecting audit messages is disabled.
Sep 5 00:37:05.760943 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:37:05.760961 systemd-journald[1200]: Journal started
Sep 5 00:37:05.760984 systemd-journald[1200]: Runtime Journal (/run/log/journal/1182dcad1b084ef0a045fe68cbc5e8dc) is 6M, max 48.4M, 42.4M free.
Sep 5 00:37:05.763221 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 5 00:37:05.763253 systemd[1]: Stopped verity-setup.service.
Sep 5 00:37:05.492030 systemd[1]: Queued start job for default target multi-user.target.
Sep 5 00:37:05.517615 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 5 00:37:05.518217 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 5 00:37:05.768677 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:37:05.773678 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:37:05.775174 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 5 00:37:05.777784 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 5 00:37:05.779064 systemd[1]: Mounted media.mount - External Media Directory.
Sep 5 00:37:05.780231 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 5 00:37:05.781529 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 5 00:37:05.784696 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 5 00:37:05.786248 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 5 00:37:05.787950 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:37:05.789604 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 00:37:05.790040 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 5 00:37:05.791606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:37:05.791977 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:37:05.793511 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 00:37:05.793942 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 00:37:05.795370 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:37:05.795636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:37:05.797767 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 5 00:37:05.798018 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 5 00:37:05.799449 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:37:05.799937 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:37:05.801463 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:37:05.803118 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 00:37:05.804726 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 5 00:37:05.806325 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 5 00:37:05.822850 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 00:37:05.825938 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 5 00:37:05.828471 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 5 00:37:05.829772 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 00:37:05.829804 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:37:05.832684 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 5 00:37:05.839181 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 5 00:37:05.840599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:37:05.842569 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 5 00:37:05.847063 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 5 00:37:05.848557 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 00:37:05.853578 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 5 00:37:05.855255 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 00:37:05.861181 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:37:05.866712 systemd-journald[1200]: Time spent on flushing to /var/log/journal/1182dcad1b084ef0a045fe68cbc5e8dc is 30.990ms for 1069 entries.
Sep 5 00:37:05.866712 systemd-journald[1200]: System Journal (/var/log/journal/1182dcad1b084ef0a045fe68cbc5e8dc) is 8M, max 195.6M, 187.6M free.
Sep 5 00:37:05.971410 systemd-journald[1200]: Received client request to flush runtime journal.
Sep 5 00:37:05.971469 kernel: loop0: detected capacity change from 0 to 128016
Sep 5 00:37:05.867734 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 5 00:37:05.870901 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 5 00:37:05.875993 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 5 00:37:05.877961 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 5 00:37:05.895975 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 5 00:37:05.897859 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 5 00:37:05.902920 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 5 00:37:05.968973 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:37:05.974100 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:37:05.985149 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 5 00:37:05.992699 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 5 00:37:06.001792 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 5 00:37:06.014583 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 5 00:37:06.018027 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:37:06.022724 kernel: loop1: detected capacity change from 0 to 229808
Sep 5 00:37:06.045790 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Sep 5 00:37:06.046222 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Sep 5 00:37:06.051334 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:37:06.057678 kernel: loop2: detected capacity change from 0 to 111000
Sep 5 00:37:06.099007 kernel: loop3: detected capacity change from 0 to 128016
Sep 5 00:37:06.196691 kernel: loop4: detected capacity change from 0 to 229808
Sep 5 00:37:06.206680 kernel: loop5: detected capacity change from 0 to 111000
Sep 5 00:37:06.213700 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 5 00:37:06.214458 (sd-merge)[1272]: Merged extensions into '/usr'.
Sep 5 00:37:06.277511 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 5 00:37:06.277529 systemd[1]: Reloading...
Sep 5 00:37:06.393704 zram_generator::config[1296]: No configuration found.
Sep 5 00:37:06.664738 ldconfig[1243]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 5 00:37:06.774537 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 5 00:37:06.774905 systemd[1]: Reloading finished in 496 ms.
Sep 5 00:37:06.806742 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 5 00:37:06.808338 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 5 00:37:06.863922 systemd[1]: Starting ensure-sysext.service...
Sep 5 00:37:06.866701 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:37:06.879378 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)...
Sep 5 00:37:06.879395 systemd[1]: Reloading...
Sep 5 00:37:06.891904 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 5 00:37:06.892535 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 5 00:37:06.892973 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 5 00:37:06.893287 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 5 00:37:06.894216 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 5 00:37:06.894594 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Sep 5 00:37:06.894748 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Sep 5 00:37:06.899594 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 00:37:06.899820 systemd-tmpfiles[1336]: Skipping /boot
Sep 5 00:37:06.912379 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 00:37:06.912478 systemd-tmpfiles[1336]: Skipping /boot
Sep 5 00:37:06.954707 zram_generator::config[1363]: No configuration found.
Sep 5 00:37:07.185515 systemd[1]: Reloading finished in 305 ms.
Sep 5 00:37:07.212688 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 5 00:37:07.234052 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:37:07.244417 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 5 00:37:07.247438 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 5 00:37:07.250841 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 5 00:37:07.269173 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:37:07.274381 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:37:07.277947 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 5 00:37:07.283248 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:37:07.283458 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:37:07.285537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:37:07.289897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:37:07.299785 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:37:07.301127 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:37:07.301238 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 00:37:07.304176 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 5 00:37:07.305438 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:37:07.314134 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 5 00:37:07.319273 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:37:07.319820 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:37:07.323297 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:37:07.323815 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:37:07.326920 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:37:07.329930 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:37:07.340534 systemd-udevd[1406]: Using default interface naming scheme 'v255'.
Sep 5 00:37:07.344081 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 5 00:37:07.347852 augenrules[1435]: No rules
Sep 5 00:37:07.351250 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 00:37:07.351577 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 5 00:37:07.355411 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:37:07.356092 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:37:07.358531 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:37:07.361722 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 00:37:07.370936 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:37:07.377630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:37:07.380185 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:37:07.380649 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 00:37:07.386036 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 5 00:37:07.420898 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:37:07.446297 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:37:07.448485 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 5 00:37:07.451626 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 5 00:37:07.454017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:37:07.454890 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:37:07.456718 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 00:37:07.456971 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 00:37:07.458614 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:37:07.459159 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:37:07.461093 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:37:07.461310 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:37:07.463375 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 5 00:37:07.474185 systemd[1]: Finished ensure-sysext.service.
Sep 5 00:37:07.501715 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:37:07.503140 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 00:37:07.503231 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 00:37:07.506908 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 5 00:37:07.509089 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 00:37:07.541385 systemd-resolved[1405]: Positive Trust Anchors:
Sep 5 00:37:07.541437 systemd-resolved[1405]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:37:07.541484 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:37:07.607947 systemd-resolved[1405]: Defaulting to hostname 'linux'.
Sep 5 00:37:07.635309 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:37:07.637260 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 5 00:37:07.638554 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:37:07.710011 systemd-networkd[1486]: lo: Link UP
Sep 5 00:37:07.710023 systemd-networkd[1486]: lo: Gained carrier
Sep 5 00:37:07.713750 systemd-networkd[1486]: Enumeration completed
Sep 5 00:37:07.714526 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:37:07.715156 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:37:07.715231 systemd-networkd[1486]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:37:07.715923 systemd-networkd[1486]: eth0: Link UP
Sep 5 00:37:07.716332 systemd[1]: Reached target network.target - Network.
Sep 5 00:37:07.716340 systemd-networkd[1486]: eth0: Gained carrier
Sep 5 00:37:07.716358 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:37:07.719896 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 5 00:37:07.726684 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 5 00:37:07.729698 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 5 00:37:07.731777 systemd-networkd[1486]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:37:07.732592 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection.
Sep 5 00:37:08.802626 systemd-timesyncd[1487]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 5 00:37:08.802831 systemd-timesyncd[1487]: Initial clock synchronization to Fri 2025-09-05 00:37:08.802538 UTC.
Sep 5 00:37:08.807385 systemd-resolved[1405]: Clock change detected. Flushing caches.
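The jump from 00:37:07.73 to 00:37:08.80 between the two adjacent timesyncd entries is not a logging gap: systemd-timesyncd stepped the clock at initial synchronization, and systemd-resolved flushed its caches in response. A small sketch (same timestamp-format assumption as above) reading the step size straight off the neighbouring timestamps:

from datetime import datetime

fmt = '%b %d %H:%M:%S.%f'
before = datetime.strptime('Sep 5 00:37:07.732592', fmt)  # last pre-sync entry
after = datetime.strptime('Sep 5 00:37:08.802626', fmt)   # first post-sync entry
print(after - before)  # ~1.07 s clock step applied by systemd-timesyncd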
Sep 5 00:37:08.809213 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 00:37:08.824938 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 5 00:37:08.826490 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 5 00:37:08.866374 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:37:08.874505 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:37:08.875960 kernel: ACPI: button: Power Button [PWRF] Sep 5 00:37:08.889937 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 5 00:37:08.890291 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 5 00:37:08.890477 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 5 00:37:08.877017 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 00:37:08.878400 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 00:37:08.882007 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 5 00:37:08.883523 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 00:37:08.885046 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 00:37:08.885077 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:37:08.886155 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 00:37:08.887368 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 00:37:08.888580 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 00:37:08.890024 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:37:08.894260 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 00:37:08.898441 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 00:37:08.903121 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 5 00:37:08.904617 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 5 00:37:08.905960 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 5 00:37:08.910157 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 00:37:08.912053 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 5 00:37:08.917142 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 00:37:08.918935 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 00:37:08.921001 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:37:08.923100 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:37:08.924069 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:37:08.924096 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:37:08.925993 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 00:37:08.934197 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
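dbus.socket, docker.socket and the three sshd sockets above are all set up before any of the corresponding daemons run: systemd opens the listeners itself and starts the service only on the first connection, which is how sockets.target can be reached this early in the boot. A sketch for observing that on a running host:

    # Every socket systemd holds open, and the unit each one activates
    systemctl list-sockets
    # docker.socket can be listening while docker.service has never started
    systemctl status docker.socket docker.service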
Sep 5 00:37:08.937401 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:37:08.942075 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 00:37:08.944932 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 00:37:08.946981 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 00:37:08.952111 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 5 00:37:08.990181 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 00:37:08.992031 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 00:37:08.995345 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 00:37:09.005078 jq[1525]: false Sep 5 00:37:09.000743 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 00:37:09.007969 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 00:37:09.013979 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing passwd entry cache Sep 5 00:37:09.010427 oslogin_cache_refresh[1527]: Refreshing passwd entry cache Sep 5 00:37:09.010234 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 00:37:09.010813 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 00:37:09.017132 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 00:37:09.025277 extend-filesystems[1526]: Found /dev/vda6 Sep 5 00:37:09.022314 oslogin_cache_refresh[1527]: Failure getting users, quitting Sep 5 00:37:09.032925 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting users, quitting Sep 5 00:37:09.032925 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 5 00:37:09.032925 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing group entry cache Sep 5 00:37:09.032925 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting groups, quitting Sep 5 00:37:09.032925 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 5 00:37:09.028055 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 00:37:09.022343 oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 5 00:37:09.022420 oslogin_cache_refresh[1527]: Refreshing group entry cache Sep 5 00:37:09.029561 oslogin_cache_refresh[1527]: Failure getting groups, quitting Sep 5 00:37:09.029573 oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 5 00:37:09.036564 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 5 00:37:09.045423 update_engine[1537]: I20250905 00:37:09.045305 1537 main.cc:92] Flatcar Update Engine starting Sep 5 00:37:09.048258 extend-filesystems[1526]: Found /dev/vda9 Sep 5 00:37:09.052260 extend-filesystems[1526]: Checking size of /dev/vda9 Sep 5 00:37:09.050626 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:37:09.055761 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 00:37:09.056349 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 00:37:09.057237 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 5 00:37:09.057632 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 5 00:37:09.061405 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 00:37:09.073125 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 00:37:09.082792 jq[1538]: true Sep 5 00:37:09.105467 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 00:37:09.113301 extend-filesystems[1526]: Resized partition /dev/vda9 Sep 5 00:37:09.119971 extend-filesystems[1567]: resize2fs 1.47.2 (1-Jan-2025) Sep 5 00:37:09.130957 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 00:37:09.133895 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 00:37:09.140183 (ntainerd)[1557]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 00:37:09.141773 jq[1554]: true Sep 5 00:37:09.170770 tar[1548]: linux-amd64/LICENSE Sep 5 00:37:09.170770 tar[1548]: linux-amd64/helm Sep 5 00:37:09.200336 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 00:37:09.214048 dbus-daemon[1523]: [system] SELinux support is enabled Sep 5 00:37:09.225333 kernel: kvm_amd: TSC scaling supported Sep 5 00:37:09.225369 kernel: kvm_amd: Nested Virtualization enabled Sep 5 00:37:09.225396 kernel: kvm_amd: Nested Paging enabled Sep 5 00:37:09.225409 kernel: kvm_amd: LBR virtualization supported Sep 5 00:37:09.216561 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 00:37:09.225894 extend-filesystems[1567]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 00:37:09.225894 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 00:37:09.225894 extend-filesystems[1567]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 00:37:09.234044 extend-filesystems[1526]: Resized filesystem in /dev/vda9 Sep 5 00:37:09.228266 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 00:37:09.236224 update_engine[1537]: I20250905 00:37:09.227783 1537 update_check_scheduler.cc:74] Next update check in 7m16s Sep 5 00:37:09.228548 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 00:37:09.229412 systemd-logind[1536]: Watching system buttons on /dev/input/event2 (Power Button) Sep 5 00:37:09.229438 systemd-logind[1536]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 5 00:37:09.230155 systemd-logind[1536]: New seat seat0. Sep 5 00:37:09.234450 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
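extend-filesystems grows the root filesystem online here: resize2fs takes /dev/vda9 from 553472 to 1864699 4k blocks (roughly 2.1 GiB to 7.1 GiB) while it is mounted at /. A manual equivalent, assuming the cloud-utils growpart tool for the partition step (an assumption; Flatcar's unit handles that part internally):

    # Extend partition 9 of /dev/vda to the end of the disk...
    growpart /dev/vda 9
    # ...then grow the mounted ext4 filesystem online to fill it
    resize2fs /dev/vda9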
Sep 5 00:37:09.236137 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 00:37:09.236179 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 00:37:09.238438 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 00:37:09.239967 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 00:37:09.241021 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 00:37:09.258340 dbus-daemon[1523]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 5 00:37:09.258701 systemd[1]: Started update-engine.service - Update Engine. Sep 5 00:37:09.298484 bash[1589]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:37:09.319260 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 5 00:37:09.319414 kernel: kvm_amd: Virtual GIF supported Sep 5 00:37:09.300284 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 00:37:09.303387 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 00:37:09.320848 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 5 00:37:09.549017 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:37:09.553891 kernel: EDAC MC: Ver: 3.0.0 Sep 5 00:37:09.567269 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:37:09.663484 sshd_keygen[1552]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 00:37:09.694079 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 00:37:09.698546 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 00:37:09.766480 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 00:37:09.766860 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 00:37:09.770532 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
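sshd-keygen populates /etc/ssh with fresh RSA, ECDSA and ED25519 host keys on first boot, the same effect as ssh-keygen's batch mode. A sketch:

    # Create any missing host keys of the default types
    ssh-keygen -A
    # Print the fingerprints that connecting clients will be asked to trust
    for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done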
Sep 5 00:37:09.772423 containerd[1557]: time="2025-09-05T00:37:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 5 00:37:09.773123 containerd[1557]: time="2025-09-05T00:37:09.773105692Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 5 00:37:09.788890 containerd[1557]: time="2025-09-05T00:37:09.788069906Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.861µs" Sep 5 00:37:09.788890 containerd[1557]: time="2025-09-05T00:37:09.788114469Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 5 00:37:09.788890 containerd[1557]: time="2025-09-05T00:37:09.788136500Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 5 00:37:09.788890 containerd[1557]: time="2025-09-05T00:37:09.788331166Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 5 00:37:09.788890 containerd[1557]: time="2025-09-05T00:37:09.788345442Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 5 00:37:09.788890 containerd[1557]: time="2025-09-05T00:37:09.788370569Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 5 00:37:09.788890 containerd[1557]: time="2025-09-05T00:37:09.788433487Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 5 00:37:09.788890 containerd[1557]: time="2025-09-05T00:37:09.788445029Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 00:37:09.788890 containerd[1557]: time="2025-09-05T00:37:09.788729282Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 5 00:37:09.788890 containerd[1557]: time="2025-09-05T00:37:09.788741846Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 00:37:09.788890 containerd[1557]: time="2025-09-05T00:37:09.788757114Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 5 00:37:09.788890 containerd[1557]: time="2025-09-05T00:37:09.788765340Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 5 00:37:09.789226 containerd[1557]: time="2025-09-05T00:37:09.788860528Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 5 00:37:09.789514 containerd[1557]: time="2025-09-05T00:37:09.789495629Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 5 00:37:09.789593 containerd[1557]: time="2025-09-05T00:37:09.789578124Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Sep 5 00:37:09.789667 containerd[1557]: time="2025-09-05T00:37:09.789652644Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 5 00:37:09.789763 containerd[1557]: time="2025-09-05T00:37:09.789741791Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 5 00:37:09.790156 containerd[1557]: time="2025-09-05T00:37:09.790132183Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 5 00:37:09.790299 containerd[1557]: time="2025-09-05T00:37:09.790282826Z" level=info msg="metadata content store policy set" policy=shared Sep 5 00:37:09.798197 containerd[1557]: time="2025-09-05T00:37:09.798159187Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 5 00:37:09.798346 containerd[1557]: time="2025-09-05T00:37:09.798330307Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 5 00:37:09.798512 containerd[1557]: time="2025-09-05T00:37:09.798495537Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 5 00:37:09.798572 containerd[1557]: time="2025-09-05T00:37:09.798559487Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 5 00:37:09.798611 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:37:09.798715 containerd[1557]: time="2025-09-05T00:37:09.798614180Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 5 00:37:09.798715 containerd[1557]: time="2025-09-05T00:37:09.798626313Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 5 00:37:09.798715 containerd[1557]: time="2025-09-05T00:37:09.798637123Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 5 00:37:09.798715 containerd[1557]: time="2025-09-05T00:37:09.798659685Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 5 00:37:09.798715 containerd[1557]: time="2025-09-05T00:37:09.798671247Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 5 00:37:09.798715 containerd[1557]: time="2025-09-05T00:37:09.798690333Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 5 00:37:09.798715 containerd[1557]: time="2025-09-05T00:37:09.798702185Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 5 00:37:09.798715 containerd[1557]: time="2025-09-05T00:37:09.798714728Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 5 00:37:09.798963 containerd[1557]: time="2025-09-05T00:37:09.798893844Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 5 00:37:09.798963 containerd[1557]: time="2025-09-05T00:37:09.798916407Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 5 00:37:09.798963 containerd[1557]: time="2025-09-05T00:37:09.798929852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 5 00:37:09.798963 containerd[1557]: 
time="2025-09-05T00:37:09.798940101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 5 00:37:09.798963 containerd[1557]: time="2025-09-05T00:37:09.798949889Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 5 00:37:09.798963 containerd[1557]: time="2025-09-05T00:37:09.798960800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 5 00:37:09.799132 containerd[1557]: time="2025-09-05T00:37:09.798972281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 5 00:37:09.799132 containerd[1557]: time="2025-09-05T00:37:09.798982891Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 5 00:37:09.799132 containerd[1557]: time="2025-09-05T00:37:09.798993682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 5 00:37:09.799132 containerd[1557]: time="2025-09-05T00:37:09.799004231Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 5 00:37:09.799132 containerd[1557]: time="2025-09-05T00:37:09.799015893Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 5 00:37:09.799132 containerd[1557]: time="2025-09-05T00:37:09.799114729Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 5 00:37:09.799132 containerd[1557]: time="2025-09-05T00:37:09.799129416Z" level=info msg="Start snapshots syncer" Sep 5 00:37:09.799351 containerd[1557]: time="2025-09-05T00:37:09.799150405Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 5 00:37:09.799531 containerd[1557]: time="2025-09-05T00:37:09.799398781Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 5 00:37:09.801218 containerd[1557]: time="2025-09-05T00:37:09.801194198Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 5 00:37:09.801393 containerd[1557]: time="2025-09-05T00:37:09.801375829Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 5 00:37:09.801586 containerd[1557]: time="2025-09-05T00:37:09.801565605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 5 00:37:09.801675 containerd[1557]: time="2025-09-05T00:37:09.801660703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 5 00:37:09.801883 containerd[1557]: time="2025-09-05T00:37:09.801803271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 5 00:37:09.801883 containerd[1557]: time="2025-09-05T00:37:09.801832285Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 5 00:37:09.801883 containerd[1557]: time="2025-09-05T00:37:09.801854256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 5 00:37:09.801980 containerd[1557]: time="2025-09-05T00:37:09.801962930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 5 00:37:09.802043 containerd[1557]: time="2025-09-05T00:37:09.802027952Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 5 00:37:09.802138 containerd[1557]: time="2025-09-05T00:37:09.802120746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 5 00:37:09.802193 containerd[1557]: 
time="2025-09-05T00:37:09.802181249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 5 00:37:09.802250 containerd[1557]: time="2025-09-05T00:37:09.802237274Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 5 00:37:09.802378 containerd[1557]: time="2025-09-05T00:37:09.802336861Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 5 00:37:09.802378 containerd[1557]: time="2025-09-05T00:37:09.802359263Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 5 00:37:09.802620 containerd[1557]: time="2025-09-05T00:37:09.802440255Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 5 00:37:09.802620 containerd[1557]: time="2025-09-05T00:37:09.802459742Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 5 00:37:09.802620 containerd[1557]: time="2025-09-05T00:37:09.802468157Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 5 00:37:09.802620 containerd[1557]: time="2025-09-05T00:37:09.802481172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 5 00:37:09.802620 containerd[1557]: time="2025-09-05T00:37:09.802495358Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 5 00:37:09.802620 containerd[1557]: time="2025-09-05T00:37:09.802513983Z" level=info msg="runtime interface created" Sep 5 00:37:09.802620 containerd[1557]: time="2025-09-05T00:37:09.802521998Z" level=info msg="created NRI interface" Sep 5 00:37:09.802620 containerd[1557]: time="2025-09-05T00:37:09.802529743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 5 00:37:09.802620 containerd[1557]: time="2025-09-05T00:37:09.802545533Z" level=info msg="Connect containerd service" Sep 5 00:37:09.802620 containerd[1557]: time="2025-09-05T00:37:09.802574968Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 00:37:09.805168 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:37:09.811729 containerd[1557]: time="2025-09-05T00:37:09.808351010Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:37:09.808074 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 5 00:37:09.809619 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 00:37:10.032566 tar[1548]: linux-amd64/README.md Sep 5 00:37:10.054352 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Sep 5 00:37:10.101731 containerd[1557]: time="2025-09-05T00:37:10.101649589Z" level=info msg="Start subscribing containerd event" Sep 5 00:37:10.101918 containerd[1557]: time="2025-09-05T00:37:10.101774414Z" level=info msg="Start recovering state" Sep 5 00:37:10.102021 containerd[1557]: time="2025-09-05T00:37:10.101988765Z" level=info msg="Start event monitor" Sep 5 00:37:10.102082 containerd[1557]: time="2025-09-05T00:37:10.102039581Z" level=info msg="Start cni network conf syncer for default" Sep 5 00:37:10.102082 containerd[1557]: time="2025-09-05T00:37:10.102075949Z" level=info msg="Start streaming server" Sep 5 00:37:10.102164 containerd[1557]: time="2025-09-05T00:37:10.102102198Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 5 00:37:10.102164 containerd[1557]: time="2025-09-05T00:37:10.102119701Z" level=info msg="runtime interface starting up..." Sep 5 00:37:10.102164 containerd[1557]: time="2025-09-05T00:37:10.102129940Z" level=info msg="starting plugins..." Sep 5 00:37:10.102164 containerd[1557]: time="2025-09-05T00:37:10.102161610Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 5 00:37:10.102370 containerd[1557]: time="2025-09-05T00:37:10.102256498Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 00:37:10.102370 containerd[1557]: time="2025-09-05T00:37:10.102349742Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 00:37:10.102532 containerd[1557]: time="2025-09-05T00:37:10.102507278Z" level=info msg="containerd successfully booted in 0.330657s" Sep 5 00:37:10.102716 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 00:37:10.247239 systemd-networkd[1486]: eth0: Gained IPv6LL Sep 5 00:37:10.252579 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:37:10.254884 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 00:37:10.259021 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 00:37:10.262543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:37:10.266032 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:37:10.330566 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 00:37:10.330948 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 00:37:10.333380 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 00:37:10.336408 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 00:37:10.566721 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:37:10.569827 systemd[1]: Started sshd@0-10.0.0.120:22-10.0.0.1:42916.service - OpenSSH per-connection server daemon (10.0.0.1:42916). Sep 5 00:37:10.721250 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 42916 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:37:10.723438 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:10.730977 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 00:37:10.733471 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:37:10.786661 systemd-logind[1536]: New session 1 of user core. Sep 5 00:37:10.803140 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
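network-online.target is reached here only because systemd-networkd-wait-online saw eth0 fully configured (it gained an IPv6 link-local address a couple of seconds after the DHCPv4 lease), and that target gates coreos-metadata and the kubelet start. The link state networkd is acting on can be dumped at any time (a sketch):

    # Addresses, DHCP lease and online state for the NIC
    networkctl status eth0
    # What "online" means for this boot (by default: all managed links configured)
    systemctl cat systemd-networkd-wait-online.service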
Sep 5 00:37:10.810543 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 00:37:10.845227 (systemd)[1669]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:37:10.848695 systemd-logind[1536]: New session c1 of user core. Sep 5 00:37:11.073543 systemd[1669]: Queued start job for default target default.target. Sep 5 00:37:11.085594 systemd[1669]: Created slice app.slice - User Application Slice. Sep 5 00:37:11.085638 systemd[1669]: Reached target paths.target - Paths. Sep 5 00:37:11.085703 systemd[1669]: Reached target timers.target - Timers. Sep 5 00:37:11.087800 systemd[1669]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:37:11.103754 systemd[1669]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:37:11.103975 systemd[1669]: Reached target sockets.target - Sockets. Sep 5 00:37:11.104037 systemd[1669]: Reached target basic.target - Basic System. Sep 5 00:37:11.104093 systemd[1669]: Reached target default.target - Main User Target. Sep 5 00:37:11.104137 systemd[1669]: Startup finished in 220ms. Sep 5 00:37:11.104632 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 00:37:11.108015 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:37:11.172464 systemd[1]: Started sshd@1-10.0.0.120:22-10.0.0.1:42920.service - OpenSSH per-connection server daemon (10.0.0.1:42920). Sep 5 00:37:11.236440 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 42920 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:37:11.238438 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:11.243809 systemd-logind[1536]: New session 2 of user core. Sep 5 00:37:11.255222 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 00:37:11.313375 sshd[1683]: Connection closed by 10.0.0.1 port 42920 Sep 5 00:37:11.314026 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:11.323811 systemd[1]: sshd@1-10.0.0.120:22-10.0.0.1:42920.service: Deactivated successfully. Sep 5 00:37:11.327314 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 00:37:11.329412 systemd-logind[1536]: Session 2 logged out. Waiting for processes to exit. Sep 5 00:37:11.334062 systemd[1]: Started sshd@2-10.0.0.120:22-10.0.0.1:42926.service - OpenSSH per-connection server daemon (10.0.0.1:42926). Sep 5 00:37:11.336822 systemd-logind[1536]: Removed session 2. Sep 5 00:37:11.394367 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 42926 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:37:11.396136 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:11.401434 systemd-logind[1536]: New session 3 of user core. Sep 5 00:37:11.411151 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 00:37:11.467894 sshd[1692]: Connection closed by 10.0.0.1 port 42926 Sep 5 00:37:11.468260 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:11.472630 systemd[1]: sshd@2-10.0.0.120:22-10.0.0.1:42926.service: Deactivated successfully. Sep 5 00:37:11.474672 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:37:11.475720 systemd-logind[1536]: Session 3 logged out. Waiting for processes to exit. Sep 5 00:37:11.477543 systemd-logind[1536]: Removed session 3. Sep 5 00:37:11.536588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
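This first login shows the usual sshd/logind handshake: sshd authenticates core (uid 500), logind spawns the per-user manager user@500.service (session c1 is that manager's own session), and only then is session-1.scope created; the later logins in this log reuse the same manager. A sketch for inspecting it:

    loginctl list-sessions
    loginctl user-status core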
Sep 5 00:37:11.538712 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 00:37:11.540942 systemd[1]: Startup finished in 3.893s (kernel) + 9.194s (initrd) + 5.699s (userspace) = 18.787s. Sep 5 00:37:11.543432 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:37:12.629760 kubelet[1702]: E0905 00:37:12.629671 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:37:12.633842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:37:12.634059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:37:12.634457 systemd[1]: kubelet.service: Consumed 2.083s CPU time, 268.6M memory peak. Sep 5 00:37:21.480735 systemd[1]: Started sshd@3-10.0.0.120:22-10.0.0.1:44286.service - OpenSSH per-connection server daemon (10.0.0.1:44286). Sep 5 00:37:21.548491 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 44286 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:37:21.550486 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:21.555754 systemd-logind[1536]: New session 4 of user core. Sep 5 00:37:21.567175 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:37:21.622924 sshd[1718]: Connection closed by 10.0.0.1 port 44286 Sep 5 00:37:21.623353 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:21.641263 systemd[1]: sshd@3-10.0.0.120:22-10.0.0.1:44286.service: Deactivated successfully. Sep 5 00:37:21.643461 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 00:37:21.644431 systemd-logind[1536]: Session 4 logged out. Waiting for processes to exit. Sep 5 00:37:21.647611 systemd[1]: Started sshd@4-10.0.0.120:22-10.0.0.1:44294.service - OpenSSH per-connection server daemon (10.0.0.1:44294). Sep 5 00:37:21.648428 systemd-logind[1536]: Removed session 4. Sep 5 00:37:21.704231 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 44294 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:37:21.706026 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:21.711908 systemd-logind[1536]: New session 5 of user core. Sep 5 00:37:21.731050 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 00:37:21.781354 sshd[1727]: Connection closed by 10.0.0.1 port 44294 Sep 5 00:37:21.781781 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:21.797384 systemd[1]: sshd@4-10.0.0.120:22-10.0.0.1:44294.service: Deactivated successfully. Sep 5 00:37:21.799962 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 00:37:21.800852 systemd-logind[1536]: Session 5 logged out. Waiting for processes to exit. Sep 5 00:37:21.804564 systemd[1]: Started sshd@5-10.0.0.120:22-10.0.0.1:44302.service - OpenSSH per-connection server daemon (10.0.0.1:44302). Sep 5 00:37:21.805414 systemd-logind[1536]: Removed session 5. 
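The kubelet crash above is the expected pre-provisioning state: /var/lib/kubelet/config.yaml is normally written during kubeadm init/join, so until provisioning runs the unit exits and systemd's restart logic keeps retrying it (the "Scheduled restart job" lines later in this log). The missing file is a KubeletConfiguration object; a minimal sketch of its shape, with illustrative values of the kind kubeadm would generate:

    cat <<'EOF'
    # /var/lib/kubelet/config.yaml (illustrative shape only)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF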
Sep 5 00:37:21.857101 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 44302 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:37:21.858908 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:21.864423 systemd-logind[1536]: New session 6 of user core. Sep 5 00:37:21.875129 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 00:37:21.932781 sshd[1736]: Connection closed by 10.0.0.1 port 44302 Sep 5 00:37:21.933270 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:21.949131 systemd[1]: sshd@5-10.0.0.120:22-10.0.0.1:44302.service: Deactivated successfully. Sep 5 00:37:21.951453 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:37:21.952384 systemd-logind[1536]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:37:21.955833 systemd[1]: Started sshd@6-10.0.0.120:22-10.0.0.1:44316.service - OpenSSH per-connection server daemon (10.0.0.1:44316). Sep 5 00:37:21.956746 systemd-logind[1536]: Removed session 6. Sep 5 00:37:22.019160 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 44316 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:37:22.021063 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:22.026361 systemd-logind[1536]: New session 7 of user core. Sep 5 00:37:22.036041 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 00:37:22.098251 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 00:37:22.098673 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:37:22.117482 sudo[1747]: pam_unix(sudo:session): session closed for user root Sep 5 00:37:22.120070 sshd[1746]: Connection closed by 10.0.0.1 port 44316 Sep 5 00:37:22.121102 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:22.135835 systemd[1]: sshd@6-10.0.0.120:22-10.0.0.1:44316.service: Deactivated successfully. Sep 5 00:37:22.138234 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 00:37:22.139250 systemd-logind[1536]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:37:22.142679 systemd[1]: Started sshd@7-10.0.0.120:22-10.0.0.1:44318.service - OpenSSH per-connection server daemon (10.0.0.1:44318). Sep 5 00:37:22.143479 systemd-logind[1536]: Removed session 7. Sep 5 00:37:22.199847 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 44318 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:37:22.201979 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:22.208498 systemd-logind[1536]: New session 8 of user core. Sep 5 00:37:22.226150 systemd[1]: Started session-8.scope - Session 8 of User core. 
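Session 7's single sudo command flips SELinux into enforcing mode at runtime; setenforce does not persist across reboots, so it is typically paired with a persistent setting elsewhere. Quick checks:

    # Current mode (should print 'Enforcing' after the command above)
    getenforce
    sestatus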
Sep 5 00:37:22.283262 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 00:37:22.283704 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:37:22.294318 sudo[1758]: pam_unix(sudo:session): session closed for user root Sep 5 00:37:22.303592 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 5 00:37:22.304080 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:37:22.318458 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 00:37:22.376102 augenrules[1780]: No rules Sep 5 00:37:22.377845 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:37:22.378217 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 00:37:22.379499 sudo[1757]: pam_unix(sudo:session): session closed for user root Sep 5 00:37:22.381238 sshd[1756]: Connection closed by 10.0.0.1 port 44318 Sep 5 00:37:22.381602 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Sep 5 00:37:22.391684 systemd[1]: sshd@7-10.0.0.120:22-10.0.0.1:44318.service: Deactivated successfully. Sep 5 00:37:22.393833 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:37:22.394801 systemd-logind[1536]: Session 8 logged out. Waiting for processes to exit. Sep 5 00:37:22.398098 systemd[1]: Started sshd@8-10.0.0.120:22-10.0.0.1:44328.service - OpenSSH per-connection server daemon (10.0.0.1:44328). Sep 5 00:37:22.398754 systemd-logind[1536]: Removed session 8. Sep 5 00:37:22.470981 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 44328 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:37:22.472492 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:37:22.477771 systemd-logind[1536]: New session 9 of user core. Sep 5 00:37:22.486111 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 00:37:22.542167 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:37:22.542645 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:37:22.741151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 00:37:22.743223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:37:23.153139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:37:23.180415 (kubelet)[1820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:37:23.302891 kubelet[1820]: E0905 00:37:23.302821 1820 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:37:23.310805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:37:23.311052 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:37:23.311467 systemd[1]: kubelet.service: Consumed 372ms CPU time, 111.1M memory peak. Sep 5 00:37:23.520992 systemd[1]: Starting docker.service - Docker Application Container Engine... 
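Session 8 rebuilds the audit rule set: the two stock rules files are removed and audit-rules is restarted, after which augenrules correctly reports "No rules", mirroring the same round-trip the boot performed at 00:37:07. The reload can be driven by hand (a sketch):

    # Recompile /etc/audit/rules.d/*.rules and load the result into the kernel
    augenrules --load
    # Show what the kernel actually has loaded (empty here)
    auditctl -l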
Sep 5 00:37:23.550982 (dockerd)[1832]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:37:24.087222 dockerd[1832]: time="2025-09-05T00:37:24.087132692Z" level=info msg="Starting up" Sep 5 00:37:24.088131 dockerd[1832]: time="2025-09-05T00:37:24.088093955Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 5 00:37:24.112968 dockerd[1832]: time="2025-09-05T00:37:24.112897270Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 5 00:37:24.514651 dockerd[1832]: time="2025-09-05T00:37:24.514560773Z" level=info msg="Loading containers: start." Sep 5 00:37:24.539919 kernel: Initializing XFRM netlink socket Sep 5 00:37:24.868611 systemd-networkd[1486]: docker0: Link UP Sep 5 00:37:24.898198 dockerd[1832]: time="2025-09-05T00:37:24.898115575Z" level=info msg="Loading containers: done." Sep 5 00:37:24.921804 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1512900539-merged.mount: Deactivated successfully. Sep 5 00:37:24.924362 dockerd[1832]: time="2025-09-05T00:37:24.924271086Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:37:24.924540 dockerd[1832]: time="2025-09-05T00:37:24.924406159Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 5 00:37:24.924582 dockerd[1832]: time="2025-09-05T00:37:24.924561420Z" level=info msg="Initializing buildkit" Sep 5 00:37:24.969013 dockerd[1832]: time="2025-09-05T00:37:24.968956677Z" level=info msg="Completed buildkit initialization" Sep 5 00:37:24.977017 dockerd[1832]: time="2025-09-05T00:37:24.976953584Z" level=info msg="Daemon has completed initialization" Sep 5 00:37:24.977145 dockerd[1832]: time="2025-09-05T00:37:24.977072527Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:37:24.977277 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 00:37:26.329229 containerd[1557]: time="2025-09-05T00:37:26.329096184Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 5 00:37:27.828343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4191766779.mount: Deactivated successfully. 
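From here the log interleaves dockerd's bring-up (the XFRM netlink socket and the docker0 bridge) with containerd pulling the v1.33.4 control-plane images. Since the kubelet is still crash-looping, the pulls are presumably driven by install.sh, for example via crictl or kubeadm's image pre-pull; that driver is an assumption, but an equivalent command would be:

    # Pre-pull an image over the CRI socket (assumed to be what install.sh does)
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/kube-apiserver:v1.33.4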
Sep 5 00:37:29.448364 containerd[1557]: time="2025-09-05T00:37:29.448278334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:29.449104 containerd[1557]: time="2025-09-05T00:37:29.449013994Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 5 00:37:29.450400 containerd[1557]: time="2025-09-05T00:37:29.450342776Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:29.453254 containerd[1557]: time="2025-09-05T00:37:29.453203390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:29.454493 containerd[1557]: time="2025-09-05T00:37:29.454464716Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 3.125301717s" Sep 5 00:37:29.454562 containerd[1557]: time="2025-09-05T00:37:29.454502677Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 5 00:37:29.455640 containerd[1557]: time="2025-09-05T00:37:29.455589104Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 5 00:37:31.280572 containerd[1557]: time="2025-09-05T00:37:31.280475047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:31.281452 containerd[1557]: time="2025-09-05T00:37:31.281420610Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 5 00:37:31.284401 containerd[1557]: time="2025-09-05T00:37:31.283544293Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:31.288325 containerd[1557]: time="2025-09-05T00:37:31.288249066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:31.289392 containerd[1557]: time="2025-09-05T00:37:31.289321948Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 1.833685685s" Sep 5 00:37:31.289392 containerd[1557]: time="2025-09-05T00:37:31.289368616Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 5 00:37:31.290215 containerd[1557]: 
time="2025-09-05T00:37:31.290164939Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 5 00:37:32.886940 containerd[1557]: time="2025-09-05T00:37:32.886852248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:32.887769 containerd[1557]: time="2025-09-05T00:37:32.887711950Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 5 00:37:32.888812 containerd[1557]: time="2025-09-05T00:37:32.888773801Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:32.891695 containerd[1557]: time="2025-09-05T00:37:32.891655686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:32.892823 containerd[1557]: time="2025-09-05T00:37:32.892769164Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 1.602545655s" Sep 5 00:37:32.892823 containerd[1557]: time="2025-09-05T00:37:32.892812235Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 5 00:37:32.893744 containerd[1557]: time="2025-09-05T00:37:32.893673861Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 5 00:37:33.491141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 00:37:33.493002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:37:33.767344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:37:33.771812 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:37:34.098678 kubelet[2121]: E0905 00:37:34.098485 2121 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:37:34.103564 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:37:34.103834 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:37:34.104338 systemd[1]: kubelet.service: Consumed 268ms CPU time, 110.8M memory peak. Sep 5 00:37:34.680207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2231359724.mount: Deactivated successfully. 
Sep 5 00:37:35.296054 containerd[1557]: time="2025-09-05T00:37:35.295963561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:35.299003 containerd[1557]: time="2025-09-05T00:37:35.298941406Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 5 00:37:35.301097 containerd[1557]: time="2025-09-05T00:37:35.301015986Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:35.303835 containerd[1557]: time="2025-09-05T00:37:35.303771674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:35.304405 containerd[1557]: time="2025-09-05T00:37:35.304351121Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 2.410623559s" Sep 5 00:37:35.304405 containerd[1557]: time="2025-09-05T00:37:35.304396185Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 5 00:37:35.304979 containerd[1557]: time="2025-09-05T00:37:35.304949724Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 5 00:37:35.954784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4152753974.mount: Deactivated successfully. 
Sep 5 00:37:37.230060 containerd[1557]: time="2025-09-05T00:37:37.229967008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:37.230918 containerd[1557]: time="2025-09-05T00:37:37.230833343Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 5 00:37:37.232403 containerd[1557]: time="2025-09-05T00:37:37.232310783Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:37.235930 containerd[1557]: time="2025-09-05T00:37:37.235828681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:37.237165 containerd[1557]: time="2025-09-05T00:37:37.237117157Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.931963832s" Sep 5 00:37:37.237165 containerd[1557]: time="2025-09-05T00:37:37.237154467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 5 00:37:37.237681 containerd[1557]: time="2025-09-05T00:37:37.237607907Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:37:37.872499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3975466511.mount: Deactivated successfully. 
Sep 5 00:37:37.880332 containerd[1557]: time="2025-09-05T00:37:37.880230013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:37:37.881319 containerd[1557]: time="2025-09-05T00:37:37.881270094Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 5 00:37:37.882759 containerd[1557]: time="2025-09-05T00:37:37.882624073Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:37:37.885467 containerd[1557]: time="2025-09-05T00:37:37.885416029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:37:37.886504 containerd[1557]: time="2025-09-05T00:37:37.886457061Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 648.795894ms" Sep 5 00:37:37.886504 containerd[1557]: time="2025-09-05T00:37:37.886506324Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 5 00:37:37.887225 containerd[1557]: time="2025-09-05T00:37:37.887147807Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 5 00:37:38.560531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1607752736.mount: Deactivated successfully. 
Sep 5 00:37:40.989158 containerd[1557]: time="2025-09-05T00:37:40.989055110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:40.990115 containerd[1557]: time="2025-09-05T00:37:40.990073550Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 5 00:37:40.991817 containerd[1557]: time="2025-09-05T00:37:40.991700611Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:40.995322 containerd[1557]: time="2025-09-05T00:37:40.995254376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:37:40.997013 containerd[1557]: time="2025-09-05T00:37:40.996950427Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.109745452s" Sep 5 00:37:40.997013 containerd[1557]: time="2025-09-05T00:37:40.996991574Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 5 00:37:44.241306 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 5 00:37:44.243096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:37:44.255738 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 00:37:44.255882 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 00:37:44.256202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:37:44.258650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:37:44.287093 systemd[1]: Reload requested from client PID 2281 ('systemctl') (unit session-9.scope)... Sep 5 00:37:44.287115 systemd[1]: Reloading... Sep 5 00:37:44.385900 zram_generator::config[2327]: No configuration found. Sep 5 00:37:44.719110 systemd[1]: Reloading finished in 431 ms. Sep 5 00:37:44.804930 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 00:37:44.805046 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 00:37:44.805372 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:37:44.805424 systemd[1]: kubelet.service: Consumed 168ms CPU time, 98.2M memory peak. Sep 5 00:37:44.807408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:37:45.012190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:37:45.031266 (kubelet)[2372]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:37:45.127182 kubelet[2372]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
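Between the etcd pull and the kubelet flag warnings, the entries above show systemd handling a "Reload requested ... Reloading ... Reloading finished" cycle and a stop/start of kubelet.service. A hedged equivalent of that systemctl daemon-reload plus restart sequence using the go-systemd v22 D-Bus bindings (assuming the caller can reach the system bus):

    package main

    import (
    	"context"
    	"log"

    	"github.com/coreos/go-systemd/v22/dbus"
    )

    func main() {
    	ctx := context.Background()
    	conn, err := dbus.NewWithContext(ctx) // talk to systemd over D-Bus
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// Equivalent of `systemctl daemon-reload`; systemd logs the
    	// "Reloading..." / "Reloading finished" pair seen above.
    	if err := conn.ReloadContext(ctx); err != nil {
    		log.Fatal(err)
    	}

    	// Equivalent of `systemctl restart kubelet.service`; the channel
    	// receives the job result ("done", "failed", ...).
    	done := make(chan string, 1)
    	if _, err := conn.RestartUnitContext(ctx, "kubelet.service", "replace", done); err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("restart job: %s", <-done)
    }

The "Control process exited, code=killed, status=15/TERM" lines are the expected shape of such a restart: systemd delivers SIGTERM (signal 15) to the old kubelet before starting its replacement, which is why the unit briefly reports "Failed with result 'signal'".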
Sep 5 00:37:45.127182 kubelet[2372]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:37:45.127182 kubelet[2372]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:37:45.127182 kubelet[2372]: I0905 00:37:45.126675 2372 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:37:45.549091 kubelet[2372]: I0905 00:37:45.548852 2372 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 00:37:45.549091 kubelet[2372]: I0905 00:37:45.549075 2372 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:37:45.549380 kubelet[2372]: I0905 00:37:45.549358 2372 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 00:37:45.578974 kubelet[2372]: I0905 00:37:45.578721 2372 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:37:45.579578 kubelet[2372]: E0905 00:37:45.579548 2372 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 5 00:37:45.585896 kubelet[2372]: I0905 00:37:45.585817 2372 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 00:37:45.591773 kubelet[2372]: I0905 00:37:45.591743 2372 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:37:45.592129 kubelet[2372]: I0905 00:37:45.592081 2372 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:37:45.592285 kubelet[2372]: I0905 00:37:45.592117 2372 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:37:45.592394 kubelet[2372]: I0905 00:37:45.592289 2372 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:37:45.592394 kubelet[2372]: I0905 00:37:45.592299 2372 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 00:37:45.593253 kubelet[2372]: I0905 00:37:45.593224 2372 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:37:45.595881 kubelet[2372]: I0905 00:37:45.595842 2372 kubelet.go:480] "Attempting to sync node with API server" Sep 5 00:37:45.595923 kubelet[2372]: I0905 00:37:45.595900 2372 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:37:45.595954 kubelet[2372]: I0905 00:37:45.595940 2372 kubelet.go:386] "Adding apiserver pod source" Sep 5 00:37:45.597608 kubelet[2372]: I0905 00:37:45.597582 2372 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:37:45.605762 kubelet[2372]: E0905 00:37:45.605550 2372 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:37:45.605762 kubelet[2372]: I0905 00:37:45.605755 2372 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 5 00:37:45.606246 kubelet[2372]: E0905 00:37:45.606205 2372 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 00:37:45.606343 kubelet[2372]: I0905 00:37:45.606296 2372 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 00:37:45.607171 kubelet[2372]: W0905 00:37:45.607141 2372 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:37:45.613514 kubelet[2372]: I0905 00:37:45.613478 2372 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:37:45.613569 kubelet[2372]: I0905 00:37:45.613558 2372 server.go:1289] "Started kubelet" Sep 5 00:37:45.615394 kubelet[2372]: I0905 00:37:45.615239 2372 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:37:45.615760 kubelet[2372]: I0905 00:37:45.615694 2372 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:37:45.616449 kubelet[2372]: I0905 00:37:45.616407 2372 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:37:45.616758 kubelet[2372]: I0905 00:37:45.616718 2372 server.go:317] "Adding debug handlers to kubelet server" Sep 5 00:37:45.620400 kubelet[2372]: I0905 00:37:45.618754 2372 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:37:45.620400 kubelet[2372]: I0905 00:37:45.619053 2372 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:37:45.620400 kubelet[2372]: E0905 00:37:45.619571 2372 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:37:45.620400 kubelet[2372]: I0905 00:37:45.619628 2372 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:37:45.620400 kubelet[2372]: E0905 00:37:45.618451 2372 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.120:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.120:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623be674a6a582 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:37:45.613510018 +0000 UTC m=+0.577551431,LastTimestamp:2025-09-05 00:37:45.613510018 +0000 UTC m=+0.577551431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:37:45.620400 kubelet[2372]: I0905 00:37:45.619729 2372 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:37:45.620400 kubelet[2372]: I0905 00:37:45.619861 2372 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:37:45.620400 kubelet[2372]: E0905 00:37:45.620270 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: 
connection refused" interval="200ms" Sep 5 00:37:45.620657 kubelet[2372]: E0905 00:37:45.620356 2372 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:37:45.621459 kubelet[2372]: E0905 00:37:45.621438 2372 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:37:45.621678 kubelet[2372]: I0905 00:37:45.621650 2372 factory.go:223] Registration of the systemd container factory successfully Sep 5 00:37:45.621819 kubelet[2372]: I0905 00:37:45.621730 2372 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:37:45.623021 kubelet[2372]: I0905 00:37:45.622996 2372 factory.go:223] Registration of the containerd container factory successfully Sep 5 00:37:45.627902 kubelet[2372]: I0905 00:37:45.627858 2372 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 00:37:45.641618 kubelet[2372]: I0905 00:37:45.641594 2372 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:37:45.641618 kubelet[2372]: I0905 00:37:45.641611 2372 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:37:45.641705 kubelet[2372]: I0905 00:37:45.641629 2372 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:37:45.646694 kubelet[2372]: I0905 00:37:45.646655 2372 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 00:37:45.646739 kubelet[2372]: I0905 00:37:45.646713 2372 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 00:37:45.646739 kubelet[2372]: I0905 00:37:45.646734 2372 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 5 00:37:45.646801 kubelet[2372]: I0905 00:37:45.646742 2372 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 00:37:45.646801 kubelet[2372]: E0905 00:37:45.646786 2372 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:37:45.647507 kubelet[2372]: E0905 00:37:45.647465 2372 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:37:45.720247 kubelet[2372]: E0905 00:37:45.720173 2372 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:37:45.747564 kubelet[2372]: E0905 00:37:45.747489 2372 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:37:45.820977 kubelet[2372]: E0905 00:37:45.820768 2372 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:37:45.821329 kubelet[2372]: E0905 00:37:45.821282 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="400ms" Sep 5 00:37:45.921657 kubelet[2372]: E0905 00:37:45.921595 2372 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:37:45.948027 kubelet[2372]: E0905 00:37:45.947967 2372 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:37:46.022575 kubelet[2372]: E0905 00:37:46.022529 2372 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:37:46.123162 kubelet[2372]: E0905 00:37:46.123029 2372 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:37:46.222263 kubelet[2372]: E0905 00:37:46.222195 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="800ms" Sep 5 00:37:46.223148 kubelet[2372]: E0905 00:37:46.223122 2372 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:37:46.323743 kubelet[2372]: E0905 00:37:46.323672 2372 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:37:46.349027 kubelet[2372]: E0905 00:37:46.348961 2372 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:37:46.424587 kubelet[2372]: E0905 00:37:46.424445 2372 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:37:46.472351 kubelet[2372]: E0905 00:37:46.472298 2372 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:37:46.525140 kubelet[2372]: E0905 00:37:46.525064 2372 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:37:46.554752 kubelet[2372]: E0905 00:37:46.554693 2372 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:37:46.569655 kubelet[2372]: E0905 00:37:46.569618 2372 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:37:46.625990 kubelet[2372]: E0905 00:37:46.625927 2372 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:37:46.681081 kubelet[2372]: I0905 00:37:46.680856 2372 policy_none.go:49] "None policy: Start" Sep 5 00:37:46.681081 kubelet[2372]: I0905 00:37:46.680978 2372 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:37:46.681081 kubelet[2372]: I0905 00:37:46.680999 2372 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:37:46.727024 kubelet[2372]: E0905 00:37:46.726964 2372 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:37:46.827523 kubelet[2372]: E0905 00:37:46.827463 2372 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:37:46.852510 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:37:46.876029 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 00:37:46.880520 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 00:37:46.898282 kubelet[2372]: E0905 00:37:46.898217 2372 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 00:37:46.898482 kubelet[2372]: I0905 00:37:46.898468 2372 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:37:46.898524 kubelet[2372]: I0905 00:37:46.898483 2372 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:37:46.898815 kubelet[2372]: I0905 00:37:46.898787 2372 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:37:46.900791 kubelet[2372]: E0905 00:37:46.900757 2372 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 00:37:46.901097 kubelet[2372]: E0905 00:37:46.901060 2372 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:37:46.952149 kubelet[2372]: E0905 00:37:46.951984 2372 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 00:37:47.000786 kubelet[2372]: I0905 00:37:47.000730 2372 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:37:47.001284 kubelet[2372]: E0905 00:37:47.001221 2372 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Sep 5 00:37:47.022883 kubelet[2372]: E0905 00:37:47.022811 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="1.6s" Sep 5 00:37:47.161681 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 5 00:37:47.181815 kubelet[2372]: E0905 00:37:47.181760 2372 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:37:47.186276 systemd[1]: Created slice kubepods-burstable-podedf364ba12bc390b8f9896f818e9b41c.slice - libcontainer container kubepods-burstable-podedf364ba12bc390b8f9896f818e9b41c.slice. Sep 5 00:37:47.188658 kubelet[2372]: E0905 00:37:47.188597 2372 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:37:47.190499 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. 
Sep 5 00:37:47.192322 kubelet[2372]: E0905 00:37:47.192289 2372 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:37:47.203746 kubelet[2372]: I0905 00:37:47.203611 2372 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:37:47.204087 kubelet[2372]: E0905 00:37:47.204053 2372 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Sep 5 00:37:47.229773 kubelet[2372]: I0905 00:37:47.229700 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edf364ba12bc390b8f9896f818e9b41c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"edf364ba12bc390b8f9896f818e9b41c\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:37:47.230261 kubelet[2372]: I0905 00:37:47.229885 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edf364ba12bc390b8f9896f818e9b41c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"edf364ba12bc390b8f9896f818e9b41c\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:37:47.230261 kubelet[2372]: I0905 00:37:47.229955 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:37:47.230261 kubelet[2372]: I0905 00:37:47.229989 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:37:47.230261 kubelet[2372]: I0905 00:37:47.230015 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edf364ba12bc390b8f9896f818e9b41c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"edf364ba12bc390b8f9896f818e9b41c\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:37:47.230261 kubelet[2372]: I0905 00:37:47.230038 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:37:47.230443 kubelet[2372]: I0905 00:37:47.230058 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:37:47.230443 kubelet[2372]: I0905 00:37:47.230085 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:37:47.230443 kubelet[2372]: I0905 00:37:47.230154 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:37:47.483306 kubelet[2372]: E0905 00:37:47.483150 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:47.484244 containerd[1557]: time="2025-09-05T00:37:47.484191286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 5 00:37:47.489386 kubelet[2372]: E0905 00:37:47.489362 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:47.490008 containerd[1557]: time="2025-09-05T00:37:47.489945771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:edf364ba12bc390b8f9896f818e9b41c,Namespace:kube-system,Attempt:0,}" Sep 5 00:37:47.493070 kubelet[2372]: E0905 00:37:47.493047 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:47.493435 containerd[1557]: time="2025-09-05T00:37:47.493408282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 5 00:37:47.534894 containerd[1557]: time="2025-09-05T00:37:47.534327977Z" level=info msg="connecting to shim 03e1e594ad48a4ff6197f30ee7cee39b91dc4ec68cd9353ad5bd3dc613eb6b28" address="unix:///run/containerd/s/e6e8c6fd9bc35b5814f84e2fb3a20d41ed9ca03438e17b123fd16707607c7ffa" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:37:47.549691 containerd[1557]: time="2025-09-05T00:37:47.549639538Z" level=info msg="connecting to shim 98f90efdb7daafcd94e3f6f2f55b26cab8561b42d9dd8dbc50a414af8ea0e3b4" address="unix:///run/containerd/s/446ac4cf4047166e85cd57710b22eb36e2b980559b9b7972290c31494b9b6ce9" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:37:47.551128 containerd[1557]: time="2025-09-05T00:37:47.551069132Z" level=info msg="connecting to shim 1650bc329da7b63b32cfa63d40bb62f4e4a254eea89aaafd5f09afc4b3d0bac3" address="unix:///run/containerd/s/0d46654082897959fd2b8fa30fc96b7dd662e3a65c2901d03918318e52a5b5c6" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:37:47.603692 systemd[1]: Started cri-containerd-03e1e594ad48a4ff6197f30ee7cee39b91dc4ec68cd9353ad5bd3dc613eb6b28.scope - libcontainer container 03e1e594ad48a4ff6197f30ee7cee39b91dc4ec68cd9353ad5bd3dc613eb6b28. 
Sep 5 00:37:47.605629 kubelet[2372]: I0905 00:37:47.605598 2372 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:37:47.606019 kubelet[2372]: E0905 00:37:47.605989 2372 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Sep 5 00:37:47.608999 systemd[1]: Started cri-containerd-1650bc329da7b63b32cfa63d40bb62f4e4a254eea89aaafd5f09afc4b3d0bac3.scope - libcontainer container 1650bc329da7b63b32cfa63d40bb62f4e4a254eea89aaafd5f09afc4b3d0bac3. Sep 5 00:37:47.617765 systemd[1]: Started cri-containerd-98f90efdb7daafcd94e3f6f2f55b26cab8561b42d9dd8dbc50a414af8ea0e3b4.scope - libcontainer container 98f90efdb7daafcd94e3f6f2f55b26cab8561b42d9dd8dbc50a414af8ea0e3b4. Sep 5 00:37:47.740730 kubelet[2372]: E0905 00:37:47.740695 2372 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 5 00:37:47.751148 containerd[1557]: time="2025-09-05T00:37:47.751110798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:edf364ba12bc390b8f9896f818e9b41c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1650bc329da7b63b32cfa63d40bb62f4e4a254eea89aaafd5f09afc4b3d0bac3\"" Sep 5 00:37:47.752346 kubelet[2372]: E0905 00:37:47.752318 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:47.885732 containerd[1557]: time="2025-09-05T00:37:47.885614708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"98f90efdb7daafcd94e3f6f2f55b26cab8561b42d9dd8dbc50a414af8ea0e3b4\"" Sep 5 00:37:47.886559 containerd[1557]: time="2025-09-05T00:37:47.886522114Z" level=info msg="CreateContainer within sandbox \"1650bc329da7b63b32cfa63d40bb62f4e4a254eea89aaafd5f09afc4b3d0bac3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:37:47.886741 kubelet[2372]: E0905 00:37:47.886697 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:47.892597 containerd[1557]: time="2025-09-05T00:37:47.892533871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"03e1e594ad48a4ff6197f30ee7cee39b91dc4ec68cd9353ad5bd3dc613eb6b28\"" Sep 5 00:37:47.893378 kubelet[2372]: E0905 00:37:47.893352 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:47.897758 containerd[1557]: time="2025-09-05T00:37:47.897707455Z" level=info msg="CreateContainer within sandbox \"98f90efdb7daafcd94e3f6f2f55b26cab8561b42d9dd8dbc50a414af8ea0e3b4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:37:47.905760 containerd[1557]: time="2025-09-05T00:37:47.905432848Z" 
level=info msg="CreateContainer within sandbox \"03e1e594ad48a4ff6197f30ee7cee39b91dc4ec68cd9353ad5bd3dc613eb6b28\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:37:47.919840 containerd[1557]: time="2025-09-05T00:37:47.919745830Z" level=info msg="Container c56c866e9db90242b971d484ff2d11d43764ef39f9b0aca584ed43c818a83a59: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:37:47.922356 containerd[1557]: time="2025-09-05T00:37:47.922300045Z" level=info msg="Container 2aa9261e552f7f300100d40f85650c2e3d7bc5f0037d2302c2ca2e37f9cce6db: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:37:47.928572 containerd[1557]: time="2025-09-05T00:37:47.928520991Z" level=info msg="Container 32a8c054081861e28d35718b83d5dc4d005cb4b71ff2cb971dc47d30046c8a39: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:37:47.931839 containerd[1557]: time="2025-09-05T00:37:47.931786536Z" level=info msg="CreateContainer within sandbox \"1650bc329da7b63b32cfa63d40bb62f4e4a254eea89aaafd5f09afc4b3d0bac3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c56c866e9db90242b971d484ff2d11d43764ef39f9b0aca584ed43c818a83a59\"" Sep 5 00:37:47.933103 containerd[1557]: time="2025-09-05T00:37:47.932823529Z" level=info msg="StartContainer for \"c56c866e9db90242b971d484ff2d11d43764ef39f9b0aca584ed43c818a83a59\"" Sep 5 00:37:47.934900 containerd[1557]: time="2025-09-05T00:37:47.934833423Z" level=info msg="connecting to shim c56c866e9db90242b971d484ff2d11d43764ef39f9b0aca584ed43c818a83a59" address="unix:///run/containerd/s/0d46654082897959fd2b8fa30fc96b7dd662e3a65c2901d03918318e52a5b5c6" protocol=ttrpc version=3 Sep 5 00:37:47.940938 containerd[1557]: time="2025-09-05T00:37:47.940905305Z" level=info msg="CreateContainer within sandbox \"98f90efdb7daafcd94e3f6f2f55b26cab8561b42d9dd8dbc50a414af8ea0e3b4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2aa9261e552f7f300100d40f85650c2e3d7bc5f0037d2302c2ca2e37f9cce6db\"" Sep 5 00:37:47.941584 containerd[1557]: time="2025-09-05T00:37:47.941554266Z" level=info msg="StartContainer for \"2aa9261e552f7f300100d40f85650c2e3d7bc5f0037d2302c2ca2e37f9cce6db\"" Sep 5 00:37:47.942668 containerd[1557]: time="2025-09-05T00:37:47.942634742Z" level=info msg="connecting to shim 2aa9261e552f7f300100d40f85650c2e3d7bc5f0037d2302c2ca2e37f9cce6db" address="unix:///run/containerd/s/446ac4cf4047166e85cd57710b22eb36e2b980559b9b7972290c31494b9b6ce9" protocol=ttrpc version=3 Sep 5 00:37:47.943479 containerd[1557]: time="2025-09-05T00:37:47.943415195Z" level=info msg="CreateContainer within sandbox \"03e1e594ad48a4ff6197f30ee7cee39b91dc4ec68cd9353ad5bd3dc613eb6b28\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"32a8c054081861e28d35718b83d5dc4d005cb4b71ff2cb971dc47d30046c8a39\"" Sep 5 00:37:47.944099 containerd[1557]: time="2025-09-05T00:37:47.944062954Z" level=info msg="StartContainer for \"32a8c054081861e28d35718b83d5dc4d005cb4b71ff2cb971dc47d30046c8a39\"" Sep 5 00:37:47.945353 containerd[1557]: time="2025-09-05T00:37:47.945320097Z" level=info msg="connecting to shim 32a8c054081861e28d35718b83d5dc4d005cb4b71ff2cb971dc47d30046c8a39" address="unix:///run/containerd/s/e6e8c6fd9bc35b5814f84e2fb3a20d41ed9ca03438e17b123fd16707607c7ffa" protocol=ttrpc version=3 Sep 5 00:37:47.959041 systemd[1]: Started cri-containerd-c56c866e9db90242b971d484ff2d11d43764ef39f9b0aca584ed43c818a83a59.scope - libcontainer container c56c866e9db90242b971d484ff2d11d43764ef39f9b0aca584ed43c818a83a59. 
Sep 5 00:37:47.964515 systemd[1]: Started cri-containerd-2aa9261e552f7f300100d40f85650c2e3d7bc5f0037d2302c2ca2e37f9cce6db.scope - libcontainer container 2aa9261e552f7f300100d40f85650c2e3d7bc5f0037d2302c2ca2e37f9cce6db. Sep 5 00:37:48.102084 systemd[1]: Started cri-containerd-32a8c054081861e28d35718b83d5dc4d005cb4b71ff2cb971dc47d30046c8a39.scope - libcontainer container 32a8c054081861e28d35718b83d5dc4d005cb4b71ff2cb971dc47d30046c8a39. Sep 5 00:37:48.162205 containerd[1557]: time="2025-09-05T00:37:48.162120483Z" level=info msg="StartContainer for \"c56c866e9db90242b971d484ff2d11d43764ef39f9b0aca584ed43c818a83a59\" returns successfully" Sep 5 00:37:48.163833 containerd[1557]: time="2025-09-05T00:37:48.163779411Z" level=info msg="StartContainer for \"2aa9261e552f7f300100d40f85650c2e3d7bc5f0037d2302c2ca2e37f9cce6db\" returns successfully" Sep 5 00:37:48.180998 containerd[1557]: time="2025-09-05T00:37:48.180912466Z" level=info msg="StartContainer for \"32a8c054081861e28d35718b83d5dc4d005cb4b71ff2cb971dc47d30046c8a39\" returns successfully" Sep 5 00:37:48.409696 kubelet[2372]: I0905 00:37:48.409066 2372 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:37:48.662388 kubelet[2372]: E0905 00:37:48.662232 2372 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:37:48.662828 kubelet[2372]: E0905 00:37:48.662559 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:48.663516 kubelet[2372]: E0905 00:37:48.663473 2372 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:37:48.663716 kubelet[2372]: E0905 00:37:48.663632 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:48.666662 kubelet[2372]: E0905 00:37:48.666635 2372 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:37:48.666817 kubelet[2372]: E0905 00:37:48.666793 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:49.672949 kubelet[2372]: E0905 00:37:49.672904 2372 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:37:49.673402 kubelet[2372]: E0905 00:37:49.673077 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:49.673402 kubelet[2372]: E0905 00:37:49.673355 2372 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:37:49.674888 kubelet[2372]: E0905 00:37:49.673463 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:49.674888 kubelet[2372]: E0905 00:37:49.673558 2372 kubelet.go:3305] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:37:49.674888 kubelet[2372]: E0905 00:37:49.673650 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:50.182712 kubelet[2372]: E0905 00:37:50.182659 2372 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 00:37:50.356928 kubelet[2372]: I0905 00:37:50.356840 2372 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:37:50.356928 kubelet[2372]: E0905 00:37:50.356931 2372 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 5 00:37:50.420275 kubelet[2372]: I0905 00:37:50.420194 2372 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:37:50.477101 kubelet[2372]: E0905 00:37:50.476948 2372 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:37:50.477101 kubelet[2372]: I0905 00:37:50.476989 2372 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:37:50.478799 kubelet[2372]: E0905 00:37:50.478769 2372 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 5 00:37:50.478799 kubelet[2372]: I0905 00:37:50.478789 2372 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:37:50.480390 kubelet[2372]: E0905 00:37:50.480349 2372 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 00:37:50.604833 kubelet[2372]: I0905 00:37:50.604756 2372 apiserver.go:52] "Watching apiserver" Sep 5 00:37:50.620630 kubelet[2372]: I0905 00:37:50.620562 2372 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:37:50.673833 kubelet[2372]: I0905 00:37:50.673786 2372 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:37:50.683379 kubelet[2372]: E0905 00:37:50.683303 2372 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 00:37:50.683656 kubelet[2372]: E0905 00:37:50.683613 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:51.334009 kubelet[2372]: I0905 00:37:51.333963 2372 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:37:51.339460 kubelet[2372]: E0905 00:37:51.339427 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:51.675352 
kubelet[2372]: E0905 00:37:51.675210 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:52.466914 systemd[1]: Reload requested from client PID 2662 ('systemctl') (unit session-9.scope)... Sep 5 00:37:52.466932 systemd[1]: Reloading... Sep 5 00:37:52.559905 zram_generator::config[2705]: No configuration found. Sep 5 00:37:53.061999 systemd[1]: Reloading finished in 594 ms. Sep 5 00:37:53.096274 kubelet[2372]: I0905 00:37:53.096155 2372 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:37:53.096330 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:37:53.120326 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:37:53.120684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:37:53.120742 systemd[1]: kubelet.service: Consumed 1.317s CPU time, 132.1M memory peak. Sep 5 00:37:53.122790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:37:53.352637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:37:53.365377 (kubelet)[2750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:37:53.402932 kubelet[2750]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:37:53.402932 kubelet[2750]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:37:53.402932 kubelet[2750]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:37:53.403357 kubelet[2750]: I0905 00:37:53.402973 2750 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:37:53.409580 kubelet[2750]: I0905 00:37:53.409537 2750 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 00:37:53.409580 kubelet[2750]: I0905 00:37:53.409562 2750 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:37:53.410317 kubelet[2750]: I0905 00:37:53.410277 2750 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 00:37:53.412399 kubelet[2750]: I0905 00:37:53.412355 2750 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 5 00:37:53.415386 kubelet[2750]: I0905 00:37:53.415345 2750 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:37:53.419371 kubelet[2750]: I0905 00:37:53.419322 2750 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 00:37:53.426022 kubelet[2750]: I0905 00:37:53.425980 2750 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:37:53.426326 kubelet[2750]: I0905 00:37:53.426260 2750 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:37:53.426489 kubelet[2750]: I0905 00:37:53.426303 2750 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:37:53.426489 kubelet[2750]: I0905 00:37:53.426481 2750 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:37:53.426489 kubelet[2750]: I0905 00:37:53.426490 2750 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 00:37:53.426662 kubelet[2750]: I0905 00:37:53.426552 2750 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:37:53.426745 kubelet[2750]: I0905 00:37:53.426724 2750 kubelet.go:480] "Attempting to sync node with API server" Sep 5 00:37:53.426787 kubelet[2750]: I0905 00:37:53.426762 2750 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:37:53.426971 kubelet[2750]: I0905 00:37:53.426939 2750 kubelet.go:386] "Adding apiserver pod source" Sep 5 00:37:53.426971 kubelet[2750]: I0905 00:37:53.426961 2750 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:37:53.428471 kubelet[2750]: I0905 00:37:53.428436 2750 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 5 00:37:53.430244 kubelet[2750]: I0905 00:37:53.430208 2750 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 00:37:53.437889 kubelet[2750]: I0905 00:37:53.437452 2750 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:37:53.437889 kubelet[2750]: I0905 00:37:53.437591 2750 server.go:1289] "Started kubelet" Sep 5 00:37:53.438319 kubelet[2750]: I0905 00:37:53.438276 2750 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:37:53.438709 kubelet[2750]: I0905 00:37:53.438442 2750 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:37:53.440733 kubelet[2750]: I0905 00:37:53.439175 2750 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:37:53.442282 kubelet[2750]: I0905 00:37:53.442173 2750 server.go:317] "Adding debug handlers to kubelet server" Sep 5 00:37:53.444533 kubelet[2750]: I0905 00:37:53.443227 2750 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:37:53.444533 kubelet[2750]: I0905 00:37:53.444435 2750 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:37:53.449173 kubelet[2750]: E0905 00:37:53.449145 2750 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:37:53.449173 kubelet[2750]: I0905 00:37:53.449178 2750 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:37:53.449428 kubelet[2750]: I0905 00:37:53.449412 2750 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:37:53.449590 kubelet[2750]: I0905 00:37:53.449569 2750 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:37:53.452442 kubelet[2750]: I0905 00:37:53.451505 2750 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:37:53.454901 kubelet[2750]: I0905 00:37:53.453987 2750 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 00:37:53.454901 kubelet[2750]: I0905 00:37:53.454556 2750 factory.go:223] Registration of the containerd container factory successfully Sep 5 00:37:53.454901 kubelet[2750]: I0905 00:37:53.454569 2750 factory.go:223] Registration of the systemd container factory successfully Sep 5 00:37:53.464571 kubelet[2750]: I0905 00:37:53.464386 2750 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 00:37:53.464571 kubelet[2750]: I0905 00:37:53.464425 2750 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 00:37:53.464571 kubelet[2750]: I0905 00:37:53.464450 2750 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 5 00:37:53.464571 kubelet[2750]: I0905 00:37:53.464459 2750 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 00:37:53.464571 kubelet[2750]: E0905 00:37:53.464527 2750 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:37:53.496307 kubelet[2750]: I0905 00:37:53.496277 2750 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:37:53.497577 kubelet[2750]: I0905 00:37:53.496452 2750 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:37:53.497577 kubelet[2750]: I0905 00:37:53.496474 2750 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:37:53.497577 kubelet[2750]: I0905 00:37:53.496609 2750 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 00:37:53.497577 kubelet[2750]: I0905 00:37:53.496622 2750 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 00:37:53.497577 kubelet[2750]: I0905 00:37:53.496639 2750 policy_none.go:49] "None policy: Start" Sep 5 00:37:53.497577 kubelet[2750]: I0905 00:37:53.496649 2750 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:37:53.497577 kubelet[2750]: I0905 00:37:53.496659 2750 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:37:53.497577 kubelet[2750]: I0905 00:37:53.496746 2750 state_mem.go:75] "Updated machine memory state" Sep 5 00:37:53.501558 kubelet[2750]: E0905 00:37:53.501527 2750 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 00:37:53.501769 kubelet[2750]: I0905 00:37:53.501753 2750 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:37:53.501808 kubelet[2750]: I0905 00:37:53.501772 2750 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:37:53.502497 kubelet[2750]: I0905 00:37:53.502473 2750 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:37:53.503423 kubelet[2750]: E0905 00:37:53.503365 2750 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 00:37:53.566089 kubelet[2750]: I0905 00:37:53.565900 2750 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:37:53.566089 kubelet[2750]: I0905 00:37:53.565927 2750 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:37:53.566684 kubelet[2750]: I0905 00:37:53.566572 2750 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:37:53.573557 kubelet[2750]: E0905 00:37:53.573528 2750 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 00:37:53.607503 kubelet[2750]: I0905 00:37:53.607349 2750 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:37:53.615686 kubelet[2750]: I0905 00:37:53.615644 2750 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 5 00:37:53.615912 kubelet[2750]: I0905 00:37:53.615748 2750 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:37:53.650230 kubelet[2750]: I0905 00:37:53.650163 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edf364ba12bc390b8f9896f818e9b41c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"edf364ba12bc390b8f9896f818e9b41c\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:37:53.650230 kubelet[2750]: I0905 00:37:53.650212 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:37:53.650230 kubelet[2750]: I0905 00:37:53.650236 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:37:53.650488 kubelet[2750]: I0905 00:37:53.650261 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:37:53.650488 kubelet[2750]: I0905 00:37:53.650280 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edf364ba12bc390b8f9896f818e9b41c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"edf364ba12bc390b8f9896f818e9b41c\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:37:53.650488 kubelet[2750]: I0905 00:37:53.650298 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 5 00:37:53.650488 kubelet[2750]: I0905 00:37:53.650317 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:37:53.650488 kubelet[2750]: I0905 00:37:53.650361 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:37:53.650647 kubelet[2750]: I0905 00:37:53.650380 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edf364ba12bc390b8f9896f818e9b41c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"edf364ba12bc390b8f9896f818e9b41c\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:37:53.873048 kubelet[2750]: E0905 00:37:53.872919 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:53.873806 kubelet[2750]: E0905 00:37:53.873781 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:53.873950 kubelet[2750]: E0905 00:37:53.873824 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:54.429127 kubelet[2750]: I0905 00:37:54.429084 2750 apiserver.go:52] "Watching apiserver" Sep 5 00:37:54.450205 kubelet[2750]: I0905 00:37:54.450172 2750 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:37:54.480938 kubelet[2750]: I0905 00:37:54.479235 2750 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:37:54.480938 kubelet[2750]: E0905 00:37:54.479277 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:54.480938 kubelet[2750]: I0905 00:37:54.479365 2750 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:37:54.577655 kubelet[2750]: E0905 00:37:54.577568 2750 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 5 00:37:54.577655 kubelet[2750]: E0905 00:37:54.577590 2750 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 00:37:54.577935 kubelet[2750]: E0905 00:37:54.577842 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:54.578195 kubelet[2750]: E0905 00:37:54.578169 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:54.727765 kubelet[2750]: I0905 00:37:54.727539 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.727416563 podStartE2EDuration="1.727416563s" podCreationTimestamp="2025-09-05 00:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:37:54.727337943 +0000 UTC m=+1.357372201" watchObservedRunningTime="2025-09-05 00:37:54.727416563 +0000 UTC m=+1.357450811" Sep 5 00:37:54.728036 kubelet[2750]: I0905 00:37:54.727764 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.727750046 podStartE2EDuration="3.727750046s" podCreationTimestamp="2025-09-05 00:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:37:54.651137145 +0000 UTC m=+1.281171383" watchObservedRunningTime="2025-09-05 00:37:54.727750046 +0000 UTC m=+1.357784274" Sep 5 00:37:54.754958 update_engine[1537]: I20250905 00:37:54.754805 1537 update_attempter.cc:509] Updating boot flags... Sep 5 00:37:54.761728 kubelet[2750]: I0905 00:37:54.761622 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.761585016 podStartE2EDuration="1.761585016s" podCreationTimestamp="2025-09-05 00:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:37:54.743521224 +0000 UTC m=+1.373555462" watchObservedRunningTime="2025-09-05 00:37:54.761585016 +0000 UTC m=+1.391619435" Sep 5 00:37:55.481893 kubelet[2750]: E0905 00:37:55.481215 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:55.481893 kubelet[2750]: E0905 00:37:55.481825 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:56.483442 kubelet[2750]: E0905 00:37:56.483396 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:57.032383 kubelet[2750]: E0905 00:37:57.032310 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:59.043993 kubelet[2750]: E0905 00:37:59.043928 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:37:59.366897 kubelet[2750]: I0905 00:37:59.366766 2750 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 00:37:59.367275 containerd[1557]: time="2025-09-05T00:37:59.367231040Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
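The pod_startup_latency_tracker entries above report podStartSLOduration as the gap between podCreationTimestamp and watchObservedRunningTime (the pull timestamps are the zero value because these static-pod images were already present). A quick check of the kube-apiserver numbers, truncated to microseconds since that is Python datetime's resolution; the field relationship is inferred from the logged values:

    from datetime import datetime, timezone

    created  = datetime(2025, 9, 5, 0, 37, 53, tzinfo=timezone.utc)          # podCreationTimestamp
    observed = datetime(2025, 9, 5, 0, 37, 54, 727416, tzinfo=timezone.utc)  # watchObservedRunningTime
    print((observed - created).total_seconds())  # 1.727416 ~= logged 1.727416563s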
Sep 5 00:37:59.367604 kubelet[2750]: I0905 00:37:59.367435 2750 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 00:37:59.488071 kubelet[2750]: E0905 00:37:59.488012 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:00.490173 kubelet[2750]: E0905 00:38:00.490132 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:00.665487 kubelet[2750]: E0905 00:38:00.665435 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:00.885074 systemd[1]: Created slice kubepods-besteffort-podc21687d9_f490_49a7_9d60_ef8c2433f2b3.slice - libcontainer container kubepods-besteffort-podc21687d9_f490_49a7_9d60_ef8c2433f2b3.slice. Sep 5 00:38:00.894485 kubelet[2750]: I0905 00:38:00.894376 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c21687d9-f490-49a7-9d60-ef8c2433f2b3-kube-proxy\") pod \"kube-proxy-jlzmh\" (UID: \"c21687d9-f490-49a7-9d60-ef8c2433f2b3\") " pod="kube-system/kube-proxy-jlzmh" Sep 5 00:38:00.894912 kubelet[2750]: I0905 00:38:00.894758 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c21687d9-f490-49a7-9d60-ef8c2433f2b3-xtables-lock\") pod \"kube-proxy-jlzmh\" (UID: \"c21687d9-f490-49a7-9d60-ef8c2433f2b3\") " pod="kube-system/kube-proxy-jlzmh" Sep 5 00:38:00.895079 kubelet[2750]: I0905 00:38:00.895020 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c21687d9-f490-49a7-9d60-ef8c2433f2b3-lib-modules\") pod \"kube-proxy-jlzmh\" (UID: \"c21687d9-f490-49a7-9d60-ef8c2433f2b3\") " pod="kube-system/kube-proxy-jlzmh" Sep 5 00:38:00.895231 kubelet[2750]: I0905 00:38:00.895174 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmvwd\" (UniqueName: \"kubernetes.io/projected/c21687d9-f490-49a7-9d60-ef8c2433f2b3-kube-api-access-rmvwd\") pod \"kube-proxy-jlzmh\" (UID: \"c21687d9-f490-49a7-9d60-ef8c2433f2b3\") " pod="kube-system/kube-proxy-jlzmh" Sep 5 00:38:01.194485 kubelet[2750]: E0905 00:38:01.194351 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:01.195263 containerd[1557]: time="2025-09-05T00:38:01.195192734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jlzmh,Uid:c21687d9-f490-49a7-9d60-ef8c2433f2b3,Namespace:kube-system,Attempt:0,}" Sep 5 00:38:01.389910 systemd[1]: Created slice kubepods-besteffort-pod90f02c05_c39e_4adb_8fca_a7ff0f9d2535.slice - libcontainer container kubepods-besteffort-pod90f02c05_c39e_4adb_8fca_a7ff0f9d2535.slice. 
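Once the node object carries a pod CIDR, kubelet pushes it to the runtime over CRI, as the two "Updating ... CIDR" entries above show. A sketch of what the logged /24 per-node range provides, using only the standard library (how many of these addresses the CNI plugin actually hands to pods is plugin-specific):

    import ipaddress

    node_cidr = ipaddress.ip_network("192.168.0.0/24")  # newPodCIDR from the log
    print(node_cidr.num_addresses)          # 256 addresses in the block
    print(len(list(node_cidr.hosts())))     # 254 assignable host addresses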
Sep 5 00:38:01.397168 containerd[1557]: time="2025-09-05T00:38:01.397114788Z" level=info msg="connecting to shim 7b12e4010d18448227bebd3b49bccdf908e14bed2d8e732884fddf6b62c2cc0e" address="unix:///run/containerd/s/f0f34c5b6a4a79e9f9ae2daff3f578a356a722ac8476cab8fa4c1f8c08767a73" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:01.399239 kubelet[2750]: I0905 00:38:01.399205 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/90f02c05-c39e-4adb-8fca-a7ff0f9d2535-var-lib-calico\") pod \"tigera-operator-755d956888-j5lpg\" (UID: \"90f02c05-c39e-4adb-8fca-a7ff0f9d2535\") " pod="tigera-operator/tigera-operator-755d956888-j5lpg" Sep 5 00:38:01.399474 kubelet[2750]: I0905 00:38:01.399441 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs5j8\" (UniqueName: \"kubernetes.io/projected/90f02c05-c39e-4adb-8fca-a7ff0f9d2535-kube-api-access-vs5j8\") pod \"tigera-operator-755d956888-j5lpg\" (UID: \"90f02c05-c39e-4adb-8fca-a7ff0f9d2535\") " pod="tigera-operator/tigera-operator-755d956888-j5lpg" Sep 5 00:38:01.422125 systemd[1]: Started cri-containerd-7b12e4010d18448227bebd3b49bccdf908e14bed2d8e732884fddf6b62c2cc0e.scope - libcontainer container 7b12e4010d18448227bebd3b49bccdf908e14bed2d8e732884fddf6b62c2cc0e. Sep 5 00:38:01.454360 containerd[1557]: time="2025-09-05T00:38:01.454212744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jlzmh,Uid:c21687d9-f490-49a7-9d60-ef8c2433f2b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b12e4010d18448227bebd3b49bccdf908e14bed2d8e732884fddf6b62c2cc0e\"" Sep 5 00:38:01.455346 kubelet[2750]: E0905 00:38:01.455302 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:01.463509 containerd[1557]: time="2025-09-05T00:38:01.463456257Z" level=info msg="CreateContainer within sandbox \"7b12e4010d18448227bebd3b49bccdf908e14bed2d8e732884fddf6b62c2cc0e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 00:38:01.476027 containerd[1557]: time="2025-09-05T00:38:01.475938083Z" level=info msg="Container ffffe8948119c789e6fb7d84915e857f33bc5aaf966ace9ea6ac79d09872dff7: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:01.480616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount522601146.mount: Deactivated successfully. 
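The mount unit name in the last entry follows systemd's path escaping: '/' separators become '-' and a literal '-' inside a component is hex-escaped as \x2d. A simplified sketch of that rule (the real systemd-escape handles more corner cases, such as a leading dot):

    def systemd_escape_path(path: str) -> str:
        parts = path.strip("/").split("/")
        return "-".join(
            "".join(c if (c.isalnum() or c in "_.") else "\\x%02x" % ord(c) for c in part)
            for part in parts)

    print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount522601146") + ".mount")
    # var-lib-containerd-tmpmounts-containerd\x2dmount522601146.mount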
Sep 5 00:38:01.488263 containerd[1557]: time="2025-09-05T00:38:01.488203529Z" level=info msg="CreateContainer within sandbox \"7b12e4010d18448227bebd3b49bccdf908e14bed2d8e732884fddf6b62c2cc0e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ffffe8948119c789e6fb7d84915e857f33bc5aaf966ace9ea6ac79d09872dff7\"" Sep 5 00:38:01.489147 containerd[1557]: time="2025-09-05T00:38:01.488814734Z" level=info msg="StartContainer for \"ffffe8948119c789e6fb7d84915e857f33bc5aaf966ace9ea6ac79d09872dff7\"" Sep 5 00:38:01.493154 containerd[1557]: time="2025-09-05T00:38:01.493016476Z" level=info msg="connecting to shim ffffe8948119c789e6fb7d84915e857f33bc5aaf966ace9ea6ac79d09872dff7" address="unix:///run/containerd/s/f0f34c5b6a4a79e9f9ae2daff3f578a356a722ac8476cab8fa4c1f8c08767a73" protocol=ttrpc version=3 Sep 5 00:38:01.505565 kubelet[2750]: E0905 00:38:01.505514 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:01.554158 systemd[1]: Started cri-containerd-ffffe8948119c789e6fb7d84915e857f33bc5aaf966ace9ea6ac79d09872dff7.scope - libcontainer container ffffe8948119c789e6fb7d84915e857f33bc5aaf966ace9ea6ac79d09872dff7. Sep 5 00:38:01.607401 containerd[1557]: time="2025-09-05T00:38:01.607347843Z" level=info msg="StartContainer for \"ffffe8948119c789e6fb7d84915e857f33bc5aaf966ace9ea6ac79d09872dff7\" returns successfully" Sep 5 00:38:01.697186 containerd[1557]: time="2025-09-05T00:38:01.697121728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-j5lpg,Uid:90f02c05-c39e-4adb-8fca-a7ff0f9d2535,Namespace:tigera-operator,Attempt:0,}" Sep 5 00:38:01.750143 containerd[1557]: time="2025-09-05T00:38:01.750064448Z" level=info msg="connecting to shim 13cb2063cdcba8b653d3bc09929470fa8205a88a5edda2ee742f7e9c81e5d5b7" address="unix:///run/containerd/s/e6324a64cc93bbac05be924a1c91b3c0804dc30f875950dab604f178f10e19a8" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:01.789039 systemd[1]: Started cri-containerd-13cb2063cdcba8b653d3bc09929470fa8205a88a5edda2ee742f7e9c81e5d5b7.scope - libcontainer container 13cb2063cdcba8b653d3bc09929470fa8205a88a5edda2ee742f7e9c81e5d5b7. 
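Each "connecting to shim" entry names a unix socket under /run/containerd/s/ that the client reaches over ttrpc. A reachability probe for such an address, assuming nothing beyond the standard library (a plain connect() only proves the socket is being served, not that ttrpc works over it):

    import os, socket, stat

    def shim_socket_alive(address: str) -> bool:
        path = address.removeprefix("unix://")
        try:
            if not stat.S_ISSOCK(os.stat(path).st_mode):
                return False
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(path)
            return True
        except OSError:
            return False

    print(shim_socket_alive("unix:///run/containerd/s/f0f34c5b6a4a79e9f9ae2daff3f578a356a722ac8476cab8fa4c1f8c08767a73"))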
Sep 5 00:38:01.842278 containerd[1557]: time="2025-09-05T00:38:01.842229453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-j5lpg,Uid:90f02c05-c39e-4adb-8fca-a7ff0f9d2535,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"13cb2063cdcba8b653d3bc09929470fa8205a88a5edda2ee742f7e9c81e5d5b7\"" Sep 5 00:38:01.844580 containerd[1557]: time="2025-09-05T00:38:01.844526295Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 00:38:02.507740 kubelet[2750]: E0905 00:38:02.507650 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:02.518520 kubelet[2750]: I0905 00:38:02.518440 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jlzmh" podStartSLOduration=2.518420278 podStartE2EDuration="2.518420278s" podCreationTimestamp="2025-09-05 00:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:38:02.518153433 +0000 UTC m=+9.148187681" watchObservedRunningTime="2025-09-05 00:38:02.518420278 +0000 UTC m=+9.148454516" Sep 5 00:38:03.509819 kubelet[2750]: E0905 00:38:03.509782 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:03.527427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount898406299.mount: Deactivated successfully. Sep 5 00:38:04.596217 containerd[1557]: time="2025-09-05T00:38:04.596156316Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:04.597033 containerd[1557]: time="2025-09-05T00:38:04.597006822Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 5 00:38:04.598334 containerd[1557]: time="2025-09-05T00:38:04.598277671Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:04.600780 containerd[1557]: time="2025-09-05T00:38:04.600717577Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:04.601591 containerd[1557]: time="2025-09-05T00:38:04.601561880Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.756984069s" Sep 5 00:38:04.601643 containerd[1557]: time="2025-09-05T00:38:04.601595103Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 5 00:38:04.606348 containerd[1557]: time="2025-09-05T00:38:04.606308270Z" level=info msg="CreateContainer within sandbox \"13cb2063cdcba8b653d3bc09929470fa8205a88a5edda2ee742f7e9c81e5d5b7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 00:38:04.615435 containerd[1557]: 
time="2025-09-05T00:38:04.615381337Z" level=info msg="Container 0daf144796f5cf866057f7dda271c99f46132c78724228a04b9c39963aa5a2d5: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:04.619134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3211128470.mount: Deactivated successfully. Sep 5 00:38:04.625982 containerd[1557]: time="2025-09-05T00:38:04.625935170Z" level=info msg="CreateContainer within sandbox \"13cb2063cdcba8b653d3bc09929470fa8205a88a5edda2ee742f7e9c81e5d5b7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0daf144796f5cf866057f7dda271c99f46132c78724228a04b9c39963aa5a2d5\"" Sep 5 00:38:04.626436 containerd[1557]: time="2025-09-05T00:38:04.626411750Z" level=info msg="StartContainer for \"0daf144796f5cf866057f7dda271c99f46132c78724228a04b9c39963aa5a2d5\"" Sep 5 00:38:04.627273 containerd[1557]: time="2025-09-05T00:38:04.627248068Z" level=info msg="connecting to shim 0daf144796f5cf866057f7dda271c99f46132c78724228a04b9c39963aa5a2d5" address="unix:///run/containerd/s/e6324a64cc93bbac05be924a1c91b3c0804dc30f875950dab604f178f10e19a8" protocol=ttrpc version=3 Sep 5 00:38:04.677030 systemd[1]: Started cri-containerd-0daf144796f5cf866057f7dda271c99f46132c78724228a04b9c39963aa5a2d5.scope - libcontainer container 0daf144796f5cf866057f7dda271c99f46132c78724228a04b9c39963aa5a2d5. Sep 5 00:38:04.711211 containerd[1557]: time="2025-09-05T00:38:04.711163076Z" level=info msg="StartContainer for \"0daf144796f5cf866057f7dda271c99f46132c78724228a04b9c39963aa5a2d5\" returns successfully" Sep 5 00:38:05.525616 kubelet[2750]: I0905 00:38:05.525540 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-j5lpg" podStartSLOduration=2.766932887 podStartE2EDuration="5.525516848s" podCreationTimestamp="2025-09-05 00:38:00 +0000 UTC" firstStartedPulling="2025-09-05 00:38:01.843611054 +0000 UTC m=+8.473645293" lastFinishedPulling="2025-09-05 00:38:04.602195016 +0000 UTC m=+11.232229254" observedRunningTime="2025-09-05 00:38:05.525386292 +0000 UTC m=+12.155420540" watchObservedRunningTime="2025-09-05 00:38:05.525516848 +0000 UTC m=+12.155551086" Sep 5 00:38:07.109310 kubelet[2750]: E0905 00:38:07.109151 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:07.519797 kubelet[2750]: E0905 00:38:07.519751 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:11.083383 sudo[1793]: pam_unix(sudo:session): session closed for user root Sep 5 00:38:11.085144 sshd[1792]: Connection closed by 10.0.0.1 port 44328 Sep 5 00:38:11.099201 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:11.121762 systemd[1]: sshd@8-10.0.0.120:22-10.0.0.1:44328.service: Deactivated successfully. Sep 5 00:38:11.124857 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:38:11.125192 systemd[1]: session-9.scope: Consumed 6.566s CPU time, 229.8M memory peak. Sep 5 00:38:11.127286 systemd-logind[1536]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:38:11.129495 systemd-logind[1536]: Removed session 9. Sep 5 00:38:17.674332 systemd[1]: Created slice kubepods-besteffort-pode5e6383c_00a6_4d2c_8f9c_69853ef73af7.slice - libcontainer container kubepods-besteffort-pode5e6383c_00a6_4d2c_8f9c_69853ef73af7.slice. 
Sep 5 00:38:17.710175 kubelet[2750]: I0905 00:38:17.710070 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5e6383c-00a6-4d2c-8f9c-69853ef73af7-tigera-ca-bundle\") pod \"calico-typha-f4c756d-vxwzz\" (UID: \"e5e6383c-00a6-4d2c-8f9c-69853ef73af7\") " pod="calico-system/calico-typha-f4c756d-vxwzz" Sep 5 00:38:17.710175 kubelet[2750]: I0905 00:38:17.710178 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e5e6383c-00a6-4d2c-8f9c-69853ef73af7-typha-certs\") pod \"calico-typha-f4c756d-vxwzz\" (UID: \"e5e6383c-00a6-4d2c-8f9c-69853ef73af7\") " pod="calico-system/calico-typha-f4c756d-vxwzz" Sep 5 00:38:17.710972 kubelet[2750]: I0905 00:38:17.710201 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgdsv\" (UniqueName: \"kubernetes.io/projected/e5e6383c-00a6-4d2c-8f9c-69853ef73af7-kube-api-access-sgdsv\") pod \"calico-typha-f4c756d-vxwzz\" (UID: \"e5e6383c-00a6-4d2c-8f9c-69853ef73af7\") " pod="calico-system/calico-typha-f4c756d-vxwzz" Sep 5 00:38:17.743339 systemd[1]: Created slice kubepods-besteffort-pod7ea80397_a5dc_4227_b7aa_6c6d7a4aa60b.slice - libcontainer container kubepods-besteffort-pod7ea80397_a5dc_4227_b7aa_6c6d7a4aa60b.slice. Sep 5 00:38:17.811340 kubelet[2750]: I0905 00:38:17.811280 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b-var-run-calico\") pod \"calico-node-tdz7w\" (UID: \"7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b\") " pod="calico-system/calico-node-tdz7w" Sep 5 00:38:17.811901 kubelet[2750]: I0905 00:38:17.811828 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b-var-lib-calico\") pod \"calico-node-tdz7w\" (UID: \"7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b\") " pod="calico-system/calico-node-tdz7w" Sep 5 00:38:17.811970 kubelet[2750]: I0905 00:38:17.811922 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b-policysync\") pod \"calico-node-tdz7w\" (UID: \"7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b\") " pod="calico-system/calico-node-tdz7w" Sep 5 00:38:17.811970 kubelet[2750]: I0905 00:38:17.811938 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b-cni-net-dir\") pod \"calico-node-tdz7w\" (UID: \"7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b\") " pod="calico-system/calico-node-tdz7w" Sep 5 00:38:17.812497 kubelet[2750]: I0905 00:38:17.812381 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b-node-certs\") pod \"calico-node-tdz7w\" (UID: \"7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b\") " pod="calico-system/calico-node-tdz7w" Sep 5 00:38:17.812497 kubelet[2750]: I0905 00:38:17.812407 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b-cni-bin-dir\") pod \"calico-node-tdz7w\" (UID: \"7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b\") " pod="calico-system/calico-node-tdz7w" Sep 5 00:38:17.812497 kubelet[2750]: I0905 00:38:17.812420 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b-cni-log-dir\") pod \"calico-node-tdz7w\" (UID: \"7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b\") " pod="calico-system/calico-node-tdz7w" Sep 5 00:38:17.812497 kubelet[2750]: I0905 00:38:17.812433 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b-flexvol-driver-host\") pod \"calico-node-tdz7w\" (UID: \"7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b\") " pod="calico-system/calico-node-tdz7w" Sep 5 00:38:17.812497 kubelet[2750]: I0905 00:38:17.812458 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b-lib-modules\") pod \"calico-node-tdz7w\" (UID: \"7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b\") " pod="calico-system/calico-node-tdz7w" Sep 5 00:38:17.812659 kubelet[2750]: I0905 00:38:17.812471 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b-tigera-ca-bundle\") pod \"calico-node-tdz7w\" (UID: \"7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b\") " pod="calico-system/calico-node-tdz7w" Sep 5 00:38:17.812659 kubelet[2750]: I0905 00:38:17.812501 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b-xtables-lock\") pod \"calico-node-tdz7w\" (UID: \"7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b\") " pod="calico-system/calico-node-tdz7w" Sep 5 00:38:17.812659 kubelet[2750]: I0905 00:38:17.812519 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjcsb\" (UniqueName: \"kubernetes.io/projected/7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b-kube-api-access-hjcsb\") pod \"calico-node-tdz7w\" (UID: \"7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b\") " pod="calico-system/calico-node-tdz7w" Sep 5 00:38:17.866459 kubelet[2750]: E0905 00:38:17.866390 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhkp8" podUID="3d2da156-f5da-43b6-8661-d14dd051f3ef" Sep 5 00:38:17.913023 kubelet[2750]: I0905 00:38:17.912978 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3d2da156-f5da-43b6-8661-d14dd051f3ef-socket-dir\") pod \"csi-node-driver-vhkp8\" (UID: \"3d2da156-f5da-43b6-8661-d14dd051f3ef\") " pod="calico-system/csi-node-driver-vhkp8" Sep 5 00:38:17.913023 kubelet[2750]: I0905 00:38:17.913027 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sswkf\" (UniqueName: 
\"kubernetes.io/projected/3d2da156-f5da-43b6-8661-d14dd051f3ef-kube-api-access-sswkf\") pod \"csi-node-driver-vhkp8\" (UID: \"3d2da156-f5da-43b6-8661-d14dd051f3ef\") " pod="calico-system/csi-node-driver-vhkp8" Sep 5 00:38:17.913231 kubelet[2750]: I0905 00:38:17.913082 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3d2da156-f5da-43b6-8661-d14dd051f3ef-varrun\") pod \"csi-node-driver-vhkp8\" (UID: \"3d2da156-f5da-43b6-8661-d14dd051f3ef\") " pod="calico-system/csi-node-driver-vhkp8" Sep 5 00:38:17.913410 kubelet[2750]: I0905 00:38:17.913394 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3d2da156-f5da-43b6-8661-d14dd051f3ef-registration-dir\") pod \"csi-node-driver-vhkp8\" (UID: \"3d2da156-f5da-43b6-8661-d14dd051f3ef\") " pod="calico-system/csi-node-driver-vhkp8" Sep 5 00:38:17.913478 kubelet[2750]: I0905 00:38:17.913427 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d2da156-f5da-43b6-8661-d14dd051f3ef-kubelet-dir\") pod \"csi-node-driver-vhkp8\" (UID: \"3d2da156-f5da-43b6-8661-d14dd051f3ef\") " pod="calico-system/csi-node-driver-vhkp8" Sep 5 00:38:17.917890 kubelet[2750]: E0905 00:38:17.917835 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:17.917946 kubelet[2750]: W0905 00:38:17.917906 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:17.944991 kubelet[2750]: E0905 00:38:17.944323 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:17.944991 kubelet[2750]: E0905 00:38:17.944698 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:17.944991 kubelet[2750]: W0905 00:38:17.944712 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:17.944991 kubelet[2750]: E0905 00:38:17.944728 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:17.960472 kubelet[2750]: E0905 00:38:17.960423 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:17.960472 kubelet[2750]: W0905 00:38:17.960465 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:17.960642 kubelet[2750]: E0905 00:38:17.960490 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:38:17.981228 kubelet[2750]: E0905 00:38:17.981178 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:17.982270 containerd[1557]: time="2025-09-05T00:38:17.982206474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f4c756d-vxwzz,Uid:e5e6383c-00a6-4d2c-8f9c-69853ef73af7,Namespace:calico-system,Attempt:0,}" Sep 5 00:38:18.015276 kubelet[2750]: E0905 00:38:18.015215 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.015276 kubelet[2750]: W0905 00:38:18.015259 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.015276 kubelet[2750]: E0905 00:38:18.015290 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.016317 kubelet[2750]: E0905 00:38:18.015725 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.016317 kubelet[2750]: W0905 00:38:18.015742 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.016317 kubelet[2750]: E0905 00:38:18.015752 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.019890 kubelet[2750]: E0905 00:38:18.018350 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.019890 kubelet[2750]: W0905 00:38:18.018367 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.019890 kubelet[2750]: E0905 00:38:18.018378 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.019890 kubelet[2750]: E0905 00:38:18.018675 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.019890 kubelet[2750]: W0905 00:38:18.018685 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.019890 kubelet[2750]: E0905 00:38:18.018694 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:38:18.019890 kubelet[2750]: E0905 00:38:18.018933 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.019890 kubelet[2750]: W0905 00:38:18.018941 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.019890 kubelet[2750]: E0905 00:38:18.018951 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.019890 kubelet[2750]: E0905 00:38:18.019180 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.020169 kubelet[2750]: W0905 00:38:18.019189 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.020169 kubelet[2750]: E0905 00:38:18.019202 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.020169 kubelet[2750]: E0905 00:38:18.019915 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.020169 kubelet[2750]: W0905 00:38:18.019926 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.020169 kubelet[2750]: E0905 00:38:18.019936 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.025094 kubelet[2750]: E0905 00:38:18.025052 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.025094 kubelet[2750]: W0905 00:38:18.025082 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.025094 kubelet[2750]: E0905 00:38:18.025100 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.025528 kubelet[2750]: E0905 00:38:18.025495 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.025563 kubelet[2750]: W0905 00:38:18.025527 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.025563 kubelet[2750]: E0905 00:38:18.025558 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:38:18.026962 kubelet[2750]: E0905 00:38:18.026938 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.026962 kubelet[2750]: W0905 00:38:18.026958 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.027039 kubelet[2750]: E0905 00:38:18.026972 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.027355 kubelet[2750]: E0905 00:38:18.027331 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.027355 kubelet[2750]: W0905 00:38:18.027346 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.027355 kubelet[2750]: E0905 00:38:18.027356 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.028022 kubelet[2750]: E0905 00:38:18.028004 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.028022 kubelet[2750]: W0905 00:38:18.028018 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.028098 kubelet[2750]: E0905 00:38:18.028028 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.029164 kubelet[2750]: E0905 00:38:18.029147 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.029164 kubelet[2750]: W0905 00:38:18.029160 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.029227 kubelet[2750]: E0905 00:38:18.029169 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.030801 kubelet[2750]: E0905 00:38:18.030781 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.030801 kubelet[2750]: W0905 00:38:18.030797 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.030856 kubelet[2750]: E0905 00:38:18.030808 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:38:18.031660 kubelet[2750]: E0905 00:38:18.031644 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.031660 kubelet[2750]: W0905 00:38:18.031656 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.031720 kubelet[2750]: E0905 00:38:18.031666 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.033397 kubelet[2750]: E0905 00:38:18.033375 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.033397 kubelet[2750]: W0905 00:38:18.033390 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.033467 kubelet[2750]: E0905 00:38:18.033400 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.033989 kubelet[2750]: E0905 00:38:18.033975 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.033989 kubelet[2750]: W0905 00:38:18.033988 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.034069 kubelet[2750]: E0905 00:38:18.033998 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.034212 kubelet[2750]: E0905 00:38:18.034197 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.034212 kubelet[2750]: W0905 00:38:18.034208 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.034267 kubelet[2750]: E0905 00:38:18.034216 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.034578 kubelet[2750]: E0905 00:38:18.034551 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.034578 kubelet[2750]: W0905 00:38:18.034564 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.034578 kubelet[2750]: E0905 00:38:18.034576 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:38:18.034854 kubelet[2750]: E0905 00:38:18.034836 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.034854 kubelet[2750]: W0905 00:38:18.034848 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.034854 kubelet[2750]: E0905 00:38:18.034857 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.035155 kubelet[2750]: E0905 00:38:18.035129 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.035155 kubelet[2750]: W0905 00:38:18.035141 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.035155 kubelet[2750]: E0905 00:38:18.035150 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.035431 kubelet[2750]: E0905 00:38:18.035408 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.035431 kubelet[2750]: W0905 00:38:18.035421 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.035431 kubelet[2750]: E0905 00:38:18.035430 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.035735 kubelet[2750]: E0905 00:38:18.035674 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.035735 kubelet[2750]: W0905 00:38:18.035684 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.035735 kubelet[2750]: E0905 00:38:18.035692 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.035973 kubelet[2750]: E0905 00:38:18.035918 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.035973 kubelet[2750]: W0905 00:38:18.035927 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.035973 kubelet[2750]: E0905 00:38:18.035935 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:38:18.036150 kubelet[2750]: E0905 00:38:18.036127 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.036150 kubelet[2750]: W0905 00:38:18.036138 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.036150 kubelet[2750]: E0905 00:38:18.036146 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.042621 containerd[1557]: time="2025-09-05T00:38:18.042540050Z" level=info msg="connecting to shim 573cb47356f9ec8ca79374a794e03391611e9da1a1970d55f5adf51d90665dcc" address="unix:///run/containerd/s/98734928f3573f09da72ee6776d6ec10881f1f655b71a84a93ef78bcd9fdaa18" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:18.047889 containerd[1557]: time="2025-09-05T00:38:18.047482603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tdz7w,Uid:7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b,Namespace:calico-system,Attempt:0,}" Sep 5 00:38:18.050364 kubelet[2750]: E0905 00:38:18.050324 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:18.050364 kubelet[2750]: W0905 00:38:18.050355 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:18.050475 kubelet[2750]: E0905 00:38:18.050379 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:18.074298 containerd[1557]: time="2025-09-05T00:38:18.074229316Z" level=info msg="connecting to shim 0a13d0de9f8b6a2bda471f8e4c6f15df3e6981cc55fd4f0dd612b6dc00b2bd02" address="unix:///run/containerd/s/a3164b3e044229cb16d85d5029b01270d4d81f01f13c0736db91b92fa0be101e" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:18.076306 systemd[1]: Started cri-containerd-573cb47356f9ec8ca79374a794e03391611e9da1a1970d55f5adf51d90665dcc.scope - libcontainer container 573cb47356f9ec8ca79374a794e03391611e9da1a1970d55f5adf51d90665dcc. Sep 5 00:38:18.105080 systemd[1]: Started cri-containerd-0a13d0de9f8b6a2bda471f8e4c6f15df3e6981cc55fd4f0dd612b6dc00b2bd02.scope - libcontainer container 0a13d0de9f8b6a2bda471f8e4c6f15df3e6981cc55fd4f0dd612b6dc00b2bd02. 
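The FlexVolume error pairs that dominate this stretch come from kubelet probing every directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ by executing "<driver> init" and unmarshalling the JSON the driver prints. With the nodeagent~uds/uds binary absent, the call yields empty output, and decoding "" then fails, which produces both logged errors at once. A sketch of that probe (Python's JSON error text differs from Go's "unexpected end of JSON input"):

    import json, subprocess

    def probe_flexvolume(driver_path: str) -> dict:
        try:
            out = subprocess.run([driver_path, "init"],
                                 capture_output=True, text=True).stdout
        except FileNotFoundError:
            out = ""  # mirrors the log: executable not found, output ""
        try:
            return json.loads(out)
        except json.JSONDecodeError as e:
            return {"status": "Failure", "message": f"unmarshal failed: {e}"}

    print(probe_flexvolume(
        "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))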
Sep 5 00:38:18.137960 containerd[1557]: time="2025-09-05T00:38:18.137810352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f4c756d-vxwzz,Uid:e5e6383c-00a6-4d2c-8f9c-69853ef73af7,Namespace:calico-system,Attempt:0,} returns sandbox id \"573cb47356f9ec8ca79374a794e03391611e9da1a1970d55f5adf51d90665dcc\"" Sep 5 00:38:18.139255 kubelet[2750]: E0905 00:38:18.139222 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:18.142376 containerd[1557]: time="2025-09-05T00:38:18.142333196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 00:38:18.143932 containerd[1557]: time="2025-09-05T00:38:18.143880847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tdz7w,Uid:7ea80397-a5dc-4227-b7aa-6c6d7a4aa60b,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a13d0de9f8b6a2bda471f8e4c6f15df3e6981cc55fd4f0dd612b6dc00b2bd02\"" Sep 5 00:38:19.468730 kubelet[2750]: E0905 00:38:19.468674 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhkp8" podUID="3d2da156-f5da-43b6-8661-d14dd051f3ef" Sep 5 00:38:21.465480 kubelet[2750]: E0905 00:38:21.465394 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhkp8" podUID="3d2da156-f5da-43b6-8661-d14dd051f3ef" Sep 5 00:38:21.961128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1527276943.mount: Deactivated successfully. 
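The recurring "Nameserver limits exceeded" errors reflect kubelet capping a pod's resolv.conf at three nameservers (the classic libc resolver limit): the applied line keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8 and drops anything further. A sketch of the truncation:

    MAX_DNS_NAMESERVERS = 3  # kubelet's cap, matching the libc resolver

    def apply_nameserver_limit(nameservers: list[str]) -> list[str]:
        if len(nameservers) > MAX_DNS_NAMESERVERS:
            print("Nameserver limits exceeded, some nameservers have been omitted")
        return nameservers[:MAX_DNS_NAMESERVERS]

    print(apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]))
    # ['1.1.1.1', '1.0.0.1', '8.8.8.8']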
Sep 5 00:38:22.492019 containerd[1557]: time="2025-09-05T00:38:22.491959406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:22.494461 containerd[1557]: time="2025-09-05T00:38:22.493900384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 5 00:38:22.495741 containerd[1557]: time="2025-09-05T00:38:22.495691149Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:22.527953 containerd[1557]: time="2025-09-05T00:38:22.527798556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:22.529086 containerd[1557]: time="2025-09-05T00:38:22.529014220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 4.386625039s" Sep 5 00:38:22.530353 containerd[1557]: time="2025-09-05T00:38:22.529093429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 5 00:38:22.530986 containerd[1557]: time="2025-09-05T00:38:22.530951631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 00:38:22.548659 containerd[1557]: time="2025-09-05T00:38:22.548607851Z" level=info msg="CreateContainer within sandbox \"573cb47356f9ec8ca79374a794e03391611e9da1a1970d55f5adf51d90665dcc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 00:38:22.559838 containerd[1557]: time="2025-09-05T00:38:22.559755019Z" level=info msg="Container fcc36733cced19bb64a4dbb9fafcf41a95b49567014c195d6c90fbe74a1a2f2d: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:22.573497 containerd[1557]: time="2025-09-05T00:38:22.573407555Z" level=info msg="CreateContainer within sandbox \"573cb47356f9ec8ca79374a794e03391611e9da1a1970d55f5adf51d90665dcc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fcc36733cced19bb64a4dbb9fafcf41a95b49567014c195d6c90fbe74a1a2f2d\"" Sep 5 00:38:22.575178 containerd[1557]: time="2025-09-05T00:38:22.575116197Z" level=info msg="StartContainer for \"fcc36733cced19bb64a4dbb9fafcf41a95b49567014c195d6c90fbe74a1a2f2d\"" Sep 5 00:38:22.576474 containerd[1557]: time="2025-09-05T00:38:22.576440034Z" level=info msg="connecting to shim fcc36733cced19bb64a4dbb9fafcf41a95b49567014c195d6c90fbe74a1a2f2d" address="unix:///run/containerd/s/98734928f3573f09da72ee6776d6ec10881f1f655b71a84a93ef78bcd9fdaa18" protocol=ttrpc version=3 Sep 5 00:38:22.617154 systemd[1]: Started cri-containerd-fcc36733cced19bb64a4dbb9fafcf41a95b49567014c195d6c90fbe74a1a2f2d.scope - libcontainer container fcc36733cced19bb64a4dbb9fafcf41a95b49567014c195d6c90fbe74a1a2f2d. 
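For scale, the two typha records above pin down the pull rate: containerd read 35,237,389 bytes in 4.386625039s, roughly 7.7 MiB/s. A trivial check (back-of-envelope arithmetic on the logged figures, not a containerd metric):

package main

import "fmt"

func main() {
	const bytesRead = 35237389.0 // "bytes read" from the stop-pulling record
	const seconds = 4.386625039  // wall time from the "Pulled image" record
	fmt.Printf("%.2f MiB/s\n", bytesRead/seconds/(1<<20)) // ~7.66 MiB/s
}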
Sep 5 00:38:22.699365 containerd[1557]: time="2025-09-05T00:38:22.699284707Z" level=info msg="StartContainer for \"fcc36733cced19bb64a4dbb9fafcf41a95b49567014c195d6c90fbe74a1a2f2d\" returns successfully" Sep 5 00:38:23.465268 kubelet[2750]: E0905 00:38:23.465213 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhkp8" podUID="3d2da156-f5da-43b6-8661-d14dd051f3ef" Sep 5 00:38:23.552595 kubelet[2750]: E0905 00:38:23.552526 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:23.630783 kubelet[2750]: E0905 00:38:23.630742 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:23.630783 kubelet[2750]: W0905 00:38:23.630765 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:23.630783 kubelet[2750]: E0905 00:38:23.630789 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:23.631133 kubelet[2750]: E0905 00:38:23.631100 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:23.631176 kubelet[2750]: W0905 00:38:23.631129 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:23.631176 kubelet[2750]: E0905 00:38:23.631159 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:23.631485 kubelet[2750]: E0905 00:38:23.631468 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:23.631485 kubelet[2750]: W0905 00:38:23.631479 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:23.631554 kubelet[2750]: E0905 00:38:23.631489 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:23.631736 kubelet[2750]: E0905 00:38:23.631706 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:23.631736 kubelet[2750]: W0905 00:38:23.631728 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:23.631736 kubelet[2750]: E0905 00:38:23.631737 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... the same three-record FlexVolume probe failure (driver-call.go:262, driver-call.go:149, plugins.go:703) repeats with only the timestamps advancing, Sep 5 00:38:23.631 through 00:38:23.672; the final occurrence: ...] Sep 5 00:38:23.672954 kubelet[2750]: E0905 00:38:23.672931 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:23.672954 kubelet[2750]: W0905 00:38:23.672943 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:23.672954 kubelet[2750]: E0905 00:38:23.672952 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 5 00:38:23.743341 kubelet[2750]: I0905 00:38:23.743259 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f4c756d-vxwzz" podStartSLOduration=2.354338276 podStartE2EDuration="6.742820235s" podCreationTimestamp="2025-09-05 00:38:17 +0000 UTC" firstStartedPulling="2025-09-05 00:38:18.141960285 +0000 UTC m=+24.771994533" lastFinishedPulling="2025-09-05 00:38:22.530442253 +0000 UTC m=+29.160476492" observedRunningTime="2025-09-05 00:38:23.741691765 +0000 UTC m=+30.371726003" watchObservedRunningTime="2025-09-05 00:38:23.742820235 +0000 UTC m=+30.372854474" Sep 5 00:38:24.553523 kubelet[2750]: I0905 00:38:24.553468 2750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:38:24.554052 kubelet[2750]: E0905 00:38:24.553908 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:24.639440 kubelet[2750]: E0905 00:38:24.639406 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:24.639440 kubelet[2750]: W0905 00:38:24.639429 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:24.639595 kubelet[2750]: E0905 00:38:24.639451 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:24.639688 kubelet[2750]: E0905 00:38:24.639665 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:24.639688 kubelet[2750]: W0905 00:38:24.639679 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:24.639756 kubelet[2750]: E0905 00:38:24.639691 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:38:24.639934 kubelet[2750]: E0905 00:38:24.639915 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:24.639934 kubelet[2750]: W0905 00:38:24.639927 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:24.640008 kubelet[2750]: E0905 00:38:24.639937 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... the same three-record FlexVolume probe failure repeats with only the timestamps advancing, Sep 5 00:38:24.640 through 00:38:24.680; the final occurrence: ...] Sep 5 00:38:24.680016 kubelet[2750]: E0905 00:38:24.680003 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:38:24.680052 kubelet[2750]: W0905 00:38:24.680014 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:38:24.680052 kubelet[2750]: E0905 00:38:24.680026 2750 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 5 00:38:25.465213 kubelet[2750]: E0905 00:38:25.465137 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhkp8" podUID="3d2da156-f5da-43b6-8661-d14dd051f3ef" Sep 5 00:38:26.940713 containerd[1557]: time="2025-09-05T00:38:26.940625211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:27.003790 containerd[1557]: time="2025-09-05T00:38:27.003679718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 5 00:38:27.034589 containerd[1557]: time="2025-09-05T00:38:27.034511846Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:27.086371 containerd[1557]: time="2025-09-05T00:38:27.086250840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:27.087151 containerd[1557]: time="2025-09-05T00:38:27.087082221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 4.556091466s" Sep 5 00:38:27.087151 containerd[1557]: time="2025-09-05T00:38:27.087146151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 5 00:38:27.213783 containerd[1557]: time="2025-09-05T00:38:27.213663562Z" level=info msg="CreateContainer within sandbox \"0a13d0de9f8b6a2bda471f8e4c6f15df3e6981cc55fd4f0dd612b6dc00b2bd02\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 00:38:27.465629 kubelet[2750]: E0905 00:38:27.465455 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhkp8" podUID="3d2da156-f5da-43b6-8661-d14dd051f3ef" Sep 5 00:38:27.499905 containerd[1557]: time="2025-09-05T00:38:27.497075052Z" level=info msg="Container dc6eaf612a102dd55301db30d289f8d9e613c8a76bb103a444ab194dbc7b61ea: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:27.844238 containerd[1557]: time="2025-09-05T00:38:27.844176134Z" level=info msg="CreateContainer within sandbox \"0a13d0de9f8b6a2bda471f8e4c6f15df3e6981cc55fd4f0dd612b6dc00b2bd02\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dc6eaf612a102dd55301db30d289f8d9e613c8a76bb103a444ab194dbc7b61ea\"" Sep 5 00:38:27.844748 containerd[1557]: time="2025-09-05T00:38:27.844708232Z" level=info msg="StartContainer for \"dc6eaf612a102dd55301db30d289f8d9e613c8a76bb103a444ab194dbc7b61ea\"" Sep 5 00:38:27.846504 
containerd[1557]: time="2025-09-05T00:38:27.846473758Z" level=info msg="connecting to shim dc6eaf612a102dd55301db30d289f8d9e613c8a76bb103a444ab194dbc7b61ea" address="unix:///run/containerd/s/a3164b3e044229cb16d85d5029b01270d4d81f01f13c0736db91b92fa0be101e" protocol=ttrpc version=3 Sep 5 00:38:27.877047 systemd[1]: Started cri-containerd-dc6eaf612a102dd55301db30d289f8d9e613c8a76bb103a444ab194dbc7b61ea.scope - libcontainer container dc6eaf612a102dd55301db30d289f8d9e613c8a76bb103a444ab194dbc7b61ea. Sep 5 00:38:27.952228 systemd[1]: cri-containerd-dc6eaf612a102dd55301db30d289f8d9e613c8a76bb103a444ab194dbc7b61ea.scope: Deactivated successfully. Sep 5 00:38:27.956041 containerd[1557]: time="2025-09-05T00:38:27.955999265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc6eaf612a102dd55301db30d289f8d9e613c8a76bb103a444ab194dbc7b61ea\" id:\"dc6eaf612a102dd55301db30d289f8d9e613c8a76bb103a444ab194dbc7b61ea\" pid:3444 exited_at:{seconds:1757032707 nanos:955494557}" Sep 5 00:38:28.130056 containerd[1557]: time="2025-09-05T00:38:28.129918510Z" level=info msg="received exit event container_id:\"dc6eaf612a102dd55301db30d289f8d9e613c8a76bb103a444ab194dbc7b61ea\" id:\"dc6eaf612a102dd55301db30d289f8d9e613c8a76bb103a444ab194dbc7b61ea\" pid:3444 exited_at:{seconds:1757032707 nanos:955494557}" Sep 5 00:38:28.140382 containerd[1557]: time="2025-09-05T00:38:28.140303033Z" level=info msg="StartContainer for \"dc6eaf612a102dd55301db30d289f8d9e613c8a76bb103a444ab194dbc7b61ea\" returns successfully" Sep 5 00:38:28.160432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc6eaf612a102dd55301db30d289f8d9e613c8a76bb103a444ab194dbc7b61ea-rootfs.mount: Deactivated successfully. Sep 5 00:38:29.466037 kubelet[2750]: E0905 00:38:29.465953 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhkp8" podUID="3d2da156-f5da-43b6-8661-d14dd051f3ef" Sep 5 00:38:30.573367 containerd[1557]: time="2025-09-05T00:38:30.573312435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 00:38:31.465843 kubelet[2750]: E0905 00:38:31.465760 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhkp8" podUID="3d2da156-f5da-43b6-8661-d14dd051f3ef" Sep 5 00:38:33.465894 kubelet[2750]: E0905 00:38:33.465771 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhkp8" podUID="3d2da156-f5da-43b6-8661-d14dd051f3ef" Sep 5 00:38:35.465608 kubelet[2750]: E0905 00:38:35.465540 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vhkp8" podUID="3d2da156-f5da-43b6-8661-d14dd051f3ef" Sep 5 00:38:36.099258 containerd[1557]: time="2025-09-05T00:38:36.099165190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:36.101556 containerd[1557]: time="2025-09-05T00:38:36.101508799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 5 00:38:36.102910 containerd[1557]: time="2025-09-05T00:38:36.102846980Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:36.105668 containerd[1557]: time="2025-09-05T00:38:36.105595098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:36.106401 containerd[1557]: time="2025-09-05T00:38:36.106356998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.532989369s" Sep 5 00:38:36.106476 containerd[1557]: time="2025-09-05T00:38:36.106397684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 5 00:38:36.115306 containerd[1557]: time="2025-09-05T00:38:36.115210756Z" level=info msg="CreateContainer within sandbox \"0a13d0de9f8b6a2bda471f8e4c6f15df3e6981cc55fd4f0dd612b6dc00b2bd02\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 00:38:36.131011 containerd[1557]: time="2025-09-05T00:38:36.130927375Z" level=info msg="Container 320a932e3813fa3d05106cbaf49e8219f2f55f82e29e083a2502e1486e54312f: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:36.143495 containerd[1557]: time="2025-09-05T00:38:36.143426424Z" level=info msg="CreateContainer within sandbox \"0a13d0de9f8b6a2bda471f8e4c6f15df3e6981cc55fd4f0dd612b6dc00b2bd02\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"320a932e3813fa3d05106cbaf49e8219f2f55f82e29e083a2502e1486e54312f\"" Sep 5 00:38:36.144239 containerd[1557]: time="2025-09-05T00:38:36.144184677Z" level=info msg="StartContainer for \"320a932e3813fa3d05106cbaf49e8219f2f55f82e29e083a2502e1486e54312f\"" Sep 5 00:38:36.145950 containerd[1557]: time="2025-09-05T00:38:36.145915656Z" level=info msg="connecting to shim 320a932e3813fa3d05106cbaf49e8219f2f55f82e29e083a2502e1486e54312f" address="unix:///run/containerd/s/a3164b3e044229cb16d85d5029b01270d4d81f01f13c0736db91b92fa0be101e" protocol=ttrpc version=3 Sep 5 00:38:36.175219 systemd[1]: Started cri-containerd-320a932e3813fa3d05106cbaf49e8219f2f55f82e29e083a2502e1486e54312f.scope - libcontainer container 320a932e3813fa3d05106cbaf49e8219f2f55f82e29e083a2502e1486e54312f. Sep 5 00:38:36.230860 containerd[1557]: time="2025-09-05T00:38:36.230809516Z" level=info msg="StartContainer for \"320a932e3813fa3d05106cbaf49e8219f2f55f82e29e083a2502e1486e54312f\" returns successfully" Sep 5 00:38:37.306567 systemd[1]: cri-containerd-320a932e3813fa3d05106cbaf49e8219f2f55f82e29e083a2502e1486e54312f.scope: Deactivated successfully. Sep 5 00:38:37.307005 systemd[1]: cri-containerd-320a932e3813fa3d05106cbaf49e8219f2f55f82e29e083a2502e1486e54312f.scope: Consumed 624ms CPU time, 177.6M memory peak, 3M read from disk, 171.3M written to disk. 
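This install-cni container is what finally clears the recurring "cni plugin not initialized" errors: it copies the CNI plugin binaries into place and writes a network config for the node, after which the kubelet can flip the node to ready (see the next records). A Calico CNI config is typically a conflist of roughly this shape (illustrative values and filename, not read from this node):

/etc/cni/net.d/10-calico.conflist (illustrative):
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "ipam": { "type": "calico-ipam" },
      "policy": { "type": "k8s" },
      "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
    },
    { "type": "portmap", "snat": true, "capabilities": { "portMappings": true } }
  ]
}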
Sep 5 00:38:37.309626 containerd[1557]: time="2025-09-05T00:38:37.309571741Z" level=info msg="received exit event container_id:\"320a932e3813fa3d05106cbaf49e8219f2f55f82e29e083a2502e1486e54312f\" id:\"320a932e3813fa3d05106cbaf49e8219f2f55f82e29e083a2502e1486e54312f\" pid:3506 exited_at:{seconds:1757032717 nanos:309297085}" Sep 5 00:38:37.309975 containerd[1557]: time="2025-09-05T00:38:37.309723385Z" level=info msg="TaskExit event in podsandbox handler container_id:\"320a932e3813fa3d05106cbaf49e8219f2f55f82e29e083a2502e1486e54312f\" id:\"320a932e3813fa3d05106cbaf49e8219f2f55f82e29e083a2502e1486e54312f\" pid:3506 exited_at:{seconds:1757032717 nanos:309297085}" Sep 5 00:38:37.333697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-320a932e3813fa3d05106cbaf49e8219f2f55f82e29e083a2502e1486e54312f-rootfs.mount: Deactivated successfully. Sep 5 00:38:37.394414 kubelet[2750]: I0905 00:38:37.394375 2750 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 00:38:37.471434 systemd[1]: Created slice kubepods-besteffort-pod3d2da156_f5da_43b6_8661_d14dd051f3ef.slice - libcontainer container kubepods-besteffort-pod3d2da156_f5da_43b6_8661_d14dd051f3ef.slice. Sep 5 00:38:37.623363 containerd[1557]: time="2025-09-05T00:38:37.623210425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vhkp8,Uid:3d2da156-f5da-43b6-8661-d14dd051f3ef,Namespace:calico-system,Attempt:0,}" Sep 5 00:38:38.033276 systemd[1]: Created slice kubepods-besteffort-podab9d8dd2_569f_46bb_90a7_db0dbd3069b5.slice - libcontainer container kubepods-besteffort-podab9d8dd2_569f_46bb_90a7_db0dbd3069b5.slice. Sep 5 00:38:38.046375 systemd[1]: Created slice kubepods-besteffort-pod0b3f380c_3b71_4b6b_b407_7dc3faecece7.slice - libcontainer container kubepods-besteffort-pod0b3f380c_3b71_4b6b_b407_7dc3faecece7.slice. Sep 5 00:38:38.061999 systemd[1]: Created slice kubepods-burstable-pod9be9c6bc_d4fa_4355_af97_766dd6a9dd95.slice - libcontainer container kubepods-burstable-pod9be9c6bc_d4fa_4355_af97_766dd6a9dd95.slice. 
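The TaskExit events above carry exited_at as a protobuf timestamp split into seconds and nanos. Decoding the pair from the install-cni exit with Go's time.Unix recovers exactly the instant in the surrounding log timestamps:

```go
// Sketch: turn the exited_at:{seconds:... nanos:...} fields from the
// TaskExit event back into a wall-clock instant.
package main

import (
	"fmt"
	"time"
)

func main() {
	exitedAt := time.Unix(1757032717, 309297085).UTC()
	fmt.Println(exitedAt) // 2025-09-05 00:38:37.309297085 +0000 UTC
}
```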
Sep 5 00:38:38.069794 kubelet[2750]: I0905 00:38:38.068775 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab9d8dd2-569f-46bb-90a7-db0dbd3069b5-tigera-ca-bundle\") pod \"calico-kube-controllers-86d758f56b-zdnzt\" (UID: \"ab9d8dd2-569f-46bb-90a7-db0dbd3069b5\") " pod="calico-system/calico-kube-controllers-86d758f56b-zdnzt" Sep 5 00:38:38.069794 kubelet[2750]: I0905 00:38:38.068818 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfc4w\" (UniqueName: \"kubernetes.io/projected/ab9d8dd2-569f-46bb-90a7-db0dbd3069b5-kube-api-access-mfc4w\") pod \"calico-kube-controllers-86d758f56b-zdnzt\" (UID: \"ab9d8dd2-569f-46bb-90a7-db0dbd3069b5\") " pod="calico-system/calico-kube-controllers-86d758f56b-zdnzt" Sep 5 00:38:38.069794 kubelet[2750]: I0905 00:38:38.068881 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0b3f380c-3b71-4b6b-b407-7dc3faecece7-calico-apiserver-certs\") pod \"calico-apiserver-698d6f7d76-9qbb6\" (UID: \"0b3f380c-3b71-4b6b-b407-7dc3faecece7\") " pod="calico-apiserver/calico-apiserver-698d6f7d76-9qbb6" Sep 5 00:38:38.069794 kubelet[2750]: I0905 00:38:38.068904 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bfp8\" (UniqueName: \"kubernetes.io/projected/0b3f380c-3b71-4b6b-b407-7dc3faecece7-kube-api-access-6bfp8\") pod \"calico-apiserver-698d6f7d76-9qbb6\" (UID: \"0b3f380c-3b71-4b6b-b407-7dc3faecece7\") " pod="calico-apiserver/calico-apiserver-698d6f7d76-9qbb6" Sep 5 00:38:38.078347 systemd[1]: Created slice kubepods-besteffort-podc0127756_c384_4744_b352_fbc5dbe7777e.slice - libcontainer container kubepods-besteffort-podc0127756_c384_4744_b352_fbc5dbe7777e.slice. Sep 5 00:38:38.089143 systemd[1]: Created slice kubepods-besteffort-pod3dd04056_bdde_4727_9056_bfa7c50d6ac8.slice - libcontainer container kubepods-besteffort-pod3dd04056_bdde_4727_9056_bfa7c50d6ac8.slice. Sep 5 00:38:38.104184 systemd[1]: Created slice kubepods-burstable-pod3a81d938_f803_44e7_bad4_ecbbbed1be77.slice - libcontainer container kubepods-burstable-pod3a81d938_f803_44e7_bad4_ecbbbed1be77.slice. Sep 5 00:38:38.113030 systemd[1]: Created slice kubepods-besteffort-podc8330416_e46c_496b_9770_5e235046f8d7.slice - libcontainer container kubepods-besteffort-podc8330416_e46c_496b_9770_5e235046f8d7.slice. 
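The "Created slice" entries expose kubelet's systemd cgroup naming scheme: kubepods, the pod's QoS class, then the pod UID with its dashes rewritten as underscores (systemd reserves "-" as a path separator in unit names). A small sketch reproducing the pattern visible in the log; the helper name is illustrative, not kubelet's actual function.

```go
// Sketch: reproduce the slice names from the "Created slice" entries.
// The observable pattern is kubepods-<qos>-pod<uid>.slice with the
// UID's dashes swapped for underscores.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID taken from the whisker-7ff7c6455b-rdkxh pod above.
	fmt.Println(podSlice("besteffort", "c8330416-e46c-496b-9770-5e235046f8d7"))
	// kubepods-besteffort-podc8330416_e46c_496b_9770_5e235046f8d7.slice
}
```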
Sep 5 00:38:38.162368 containerd[1557]: time="2025-09-05T00:38:38.162284679Z" level=error msg="Failed to destroy network for sandbox \"6ad353134884c585e7a707e36746d5bbeb6861442adec33bb1eac97760ad4016\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.165963 containerd[1557]: time="2025-09-05T00:38:38.163956607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vhkp8,Uid:3d2da156-f5da-43b6-8661-d14dd051f3ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ad353134884c585e7a707e36746d5bbeb6861442adec33bb1eac97760ad4016\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.166480 systemd[1]: run-netns-cni\x2da6efdd59\x2d9f9c\x2d6d7f\x2d0870\x2de890a3f6b43a.mount: Deactivated successfully. Sep 5 00:38:38.169363 kubelet[2750]: I0905 00:38:38.169305 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9be9c6bc-d4fa-4355-af97-766dd6a9dd95-config-volume\") pod \"coredns-674b8bbfcf-b4zsj\" (UID: \"9be9c6bc-d4fa-4355-af97-766dd6a9dd95\") " pod="kube-system/coredns-674b8bbfcf-b4zsj" Sep 5 00:38:38.169483 kubelet[2750]: I0905 00:38:38.169398 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55p9r\" (UniqueName: \"kubernetes.io/projected/3dd04056-bdde-4727-9056-bfa7c50d6ac8-kube-api-access-55p9r\") pod \"calico-apiserver-698d6f7d76-wmjmq\" (UID: \"3dd04056-bdde-4727-9056-bfa7c50d6ac8\") " pod="calico-apiserver/calico-apiserver-698d6f7d76-wmjmq" Sep 5 00:38:38.169483 kubelet[2750]: I0905 00:38:38.169425 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0127756-c384-4744-b352-fbc5dbe7777e-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-hc2pv\" (UID: \"c0127756-c384-4744-b352-fbc5dbe7777e\") " pod="calico-system/goldmane-54d579b49d-hc2pv" Sep 5 00:38:38.169483 kubelet[2750]: I0905 00:38:38.169448 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gr2p\" (UniqueName: \"kubernetes.io/projected/c0127756-c384-4744-b352-fbc5dbe7777e-kube-api-access-4gr2p\") pod \"goldmane-54d579b49d-hc2pv\" (UID: \"c0127756-c384-4744-b352-fbc5dbe7777e\") " pod="calico-system/goldmane-54d579b49d-hc2pv" Sep 5 00:38:38.169675 kubelet[2750]: I0905 00:38:38.169637 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkrsd\" (UniqueName: \"kubernetes.io/projected/9be9c6bc-d4fa-4355-af97-766dd6a9dd95-kube-api-access-hkrsd\") pod \"coredns-674b8bbfcf-b4zsj\" (UID: \"9be9c6bc-d4fa-4355-af97-766dd6a9dd95\") " pod="kube-system/coredns-674b8bbfcf-b4zsj" Sep 5 00:38:38.169717 kubelet[2750]: I0905 00:38:38.169679 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c8330416-e46c-496b-9770-5e235046f8d7-whisker-backend-key-pair\") pod \"whisker-7ff7c6455b-rdkxh\" (UID: \"c8330416-e46c-496b-9770-5e235046f8d7\") " 
pod="calico-system/whisker-7ff7c6455b-rdkxh" Sep 5 00:38:38.169752 kubelet[2750]: I0905 00:38:38.169714 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3dd04056-bdde-4727-9056-bfa7c50d6ac8-calico-apiserver-certs\") pod \"calico-apiserver-698d6f7d76-wmjmq\" (UID: \"3dd04056-bdde-4727-9056-bfa7c50d6ac8\") " pod="calico-apiserver/calico-apiserver-698d6f7d76-wmjmq" Sep 5 00:38:38.169752 kubelet[2750]: I0905 00:38:38.169737 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a81d938-f803-44e7-bad4-ecbbbed1be77-config-volume\") pod \"coredns-674b8bbfcf-b2q5c\" (UID: \"3a81d938-f803-44e7-bad4-ecbbbed1be77\") " pod="kube-system/coredns-674b8bbfcf-b2q5c" Sep 5 00:38:38.169822 kubelet[2750]: I0905 00:38:38.169758 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8330416-e46c-496b-9770-5e235046f8d7-whisker-ca-bundle\") pod \"whisker-7ff7c6455b-rdkxh\" (UID: \"c8330416-e46c-496b-9770-5e235046f8d7\") " pod="calico-system/whisker-7ff7c6455b-rdkxh" Sep 5 00:38:38.169822 kubelet[2750]: I0905 00:38:38.169779 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0127756-c384-4744-b352-fbc5dbe7777e-config\") pod \"goldmane-54d579b49d-hc2pv\" (UID: \"c0127756-c384-4744-b352-fbc5dbe7777e\") " pod="calico-system/goldmane-54d579b49d-hc2pv" Sep 5 00:38:38.169822 kubelet[2750]: I0905 00:38:38.169800 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmbk6\" (UniqueName: \"kubernetes.io/projected/3a81d938-f803-44e7-bad4-ecbbbed1be77-kube-api-access-mmbk6\") pod \"coredns-674b8bbfcf-b2q5c\" (UID: \"3a81d938-f803-44e7-bad4-ecbbbed1be77\") " pod="kube-system/coredns-674b8bbfcf-b2q5c" Sep 5 00:38:38.169927 kubelet[2750]: I0905 00:38:38.169821 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpb9c\" (UniqueName: \"kubernetes.io/projected/c8330416-e46c-496b-9770-5e235046f8d7-kube-api-access-dpb9c\") pod \"whisker-7ff7c6455b-rdkxh\" (UID: \"c8330416-e46c-496b-9770-5e235046f8d7\") " pod="calico-system/whisker-7ff7c6455b-rdkxh" Sep 5 00:38:38.169927 kubelet[2750]: I0905 00:38:38.169855 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c0127756-c384-4744-b352-fbc5dbe7777e-goldmane-key-pair\") pod \"goldmane-54d579b49d-hc2pv\" (UID: \"c0127756-c384-4744-b352-fbc5dbe7777e\") " pod="calico-system/goldmane-54d579b49d-hc2pv" Sep 5 00:38:38.173896 kubelet[2750]: E0905 00:38:38.172216 2750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ad353134884c585e7a707e36746d5bbeb6861442adec33bb1eac97760ad4016\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.173896 kubelet[2750]: E0905 00:38:38.172305 2750 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6ad353134884c585e7a707e36746d5bbeb6861442adec33bb1eac97760ad4016\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vhkp8" Sep 5 00:38:38.173896 kubelet[2750]: E0905 00:38:38.172328 2750 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ad353134884c585e7a707e36746d5bbeb6861442adec33bb1eac97760ad4016\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vhkp8" Sep 5 00:38:38.174120 kubelet[2750]: E0905 00:38:38.172369 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vhkp8_calico-system(3d2da156-f5da-43b6-8661-d14dd051f3ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vhkp8_calico-system(3d2da156-f5da-43b6-8661-d14dd051f3ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ad353134884c585e7a707e36746d5bbeb6861442adec33bb1eac97760ad4016\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vhkp8" podUID="3d2da156-f5da-43b6-8661-d14dd051f3ef" Sep 5 00:38:38.340084 containerd[1557]: time="2025-09-05T00:38:38.339963385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d758f56b-zdnzt,Uid:ab9d8dd2-569f-46bb-90a7-db0dbd3069b5,Namespace:calico-system,Attempt:0,}" Sep 5 00:38:38.361942 containerd[1557]: time="2025-09-05T00:38:38.361880858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698d6f7d76-9qbb6,Uid:0b3f380c-3b71-4b6b-b407-7dc3faecece7,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:38:38.369600 kubelet[2750]: E0905 00:38:38.369551 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:38.372653 containerd[1557]: time="2025-09-05T00:38:38.372603181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b4zsj,Uid:9be9c6bc-d4fa-4355-af97-766dd6a9dd95,Namespace:kube-system,Attempt:0,}" Sep 5 00:38:38.383674 containerd[1557]: time="2025-09-05T00:38:38.383621439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hc2pv,Uid:c0127756-c384-4744-b352-fbc5dbe7777e,Namespace:calico-system,Attempt:0,}" Sep 5 00:38:38.397436 containerd[1557]: time="2025-09-05T00:38:38.397349804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698d6f7d76-wmjmq,Uid:3dd04056-bdde-4727-9056-bfa7c50d6ac8,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:38:38.414181 containerd[1557]: time="2025-09-05T00:38:38.414045457Z" level=error msg="Failed to destroy network for sandbox \"be082ceb625b41c8a59b7bf91c89a29e3e560885693b54a958b8ddf975abed98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.416997 containerd[1557]: time="2025-09-05T00:38:38.416950689Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-86d758f56b-zdnzt,Uid:ab9d8dd2-569f-46bb-90a7-db0dbd3069b5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be082ceb625b41c8a59b7bf91c89a29e3e560885693b54a958b8ddf975abed98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.417374 kubelet[2750]: E0905 00:38:38.417203 2750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be082ceb625b41c8a59b7bf91c89a29e3e560885693b54a958b8ddf975abed98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.418175 kubelet[2750]: E0905 00:38:38.418021 2750 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be082ceb625b41c8a59b7bf91c89a29e3e560885693b54a958b8ddf975abed98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d758f56b-zdnzt" Sep 5 00:38:38.418175 kubelet[2750]: E0905 00:38:38.418059 2750 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be082ceb625b41c8a59b7bf91c89a29e3e560885693b54a958b8ddf975abed98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d758f56b-zdnzt" Sep 5 00:38:38.418175 kubelet[2750]: E0905 00:38:38.418107 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d758f56b-zdnzt_calico-system(ab9d8dd2-569f-46bb-90a7-db0dbd3069b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d758f56b-zdnzt_calico-system(ab9d8dd2-569f-46bb-90a7-db0dbd3069b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be082ceb625b41c8a59b7bf91c89a29e3e560885693b54a958b8ddf975abed98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d758f56b-zdnzt" podUID="ab9d8dd2-569f-46bb-90a7-db0dbd3069b5" Sep 5 00:38:38.419142 kubelet[2750]: E0905 00:38:38.419120 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:38.422527 containerd[1557]: time="2025-09-05T00:38:38.422173411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b2q5c,Uid:3a81d938-f803-44e7-bad4-ecbbbed1be77,Namespace:kube-system,Attempt:0,}" Sep 5 00:38:38.423455 containerd[1557]: time="2025-09-05T00:38:38.422963204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7ff7c6455b-rdkxh,Uid:c8330416-e46c-496b-9770-5e235046f8d7,Namespace:calico-system,Attempt:0,}" Sep 5 00:38:38.466990 containerd[1557]: 
time="2025-09-05T00:38:38.466934456Z" level=error msg="Failed to destroy network for sandbox \"a76aaaeae023591c6b9a61b4eacb60fae197e977258237d7b5aeb5235d07afff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.469573 containerd[1557]: time="2025-09-05T00:38:38.469440008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698d6f7d76-9qbb6,Uid:0b3f380c-3b71-4b6b-b407-7dc3faecece7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76aaaeae023591c6b9a61b4eacb60fae197e977258237d7b5aeb5235d07afff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.469956 kubelet[2750]: E0905 00:38:38.469899 2750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76aaaeae023591c6b9a61b4eacb60fae197e977258237d7b5aeb5235d07afff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.470088 kubelet[2750]: E0905 00:38:38.469980 2750 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76aaaeae023591c6b9a61b4eacb60fae197e977258237d7b5aeb5235d07afff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698d6f7d76-9qbb6" Sep 5 00:38:38.470088 kubelet[2750]: E0905 00:38:38.470005 2750 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76aaaeae023591c6b9a61b4eacb60fae197e977258237d7b5aeb5235d07afff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698d6f7d76-9qbb6" Sep 5 00:38:38.470088 kubelet[2750]: E0905 00:38:38.470061 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-698d6f7d76-9qbb6_calico-apiserver(0b3f380c-3b71-4b6b-b407-7dc3faecece7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-698d6f7d76-9qbb6_calico-apiserver(0b3f380c-3b71-4b6b-b407-7dc3faecece7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a76aaaeae023591c6b9a61b4eacb60fae197e977258237d7b5aeb5235d07afff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-698d6f7d76-9qbb6" podUID="0b3f380c-3b71-4b6b-b407-7dc3faecece7" Sep 5 00:38:38.481461 containerd[1557]: time="2025-09-05T00:38:38.481313532Z" level=error msg="Failed to destroy network for sandbox \"3db6ad7521d0b86395128479958bf33566ac3e5e2ab9eac3a14e3e2ddb5af7c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.484465 containerd[1557]: time="2025-09-05T00:38:38.484367503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b4zsj,Uid:9be9c6bc-d4fa-4355-af97-766dd6a9dd95,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db6ad7521d0b86395128479958bf33566ac3e5e2ab9eac3a14e3e2ddb5af7c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.484776 kubelet[2750]: E0905 00:38:38.484737 2750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db6ad7521d0b86395128479958bf33566ac3e5e2ab9eac3a14e3e2ddb5af7c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.484852 kubelet[2750]: E0905 00:38:38.484813 2750 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db6ad7521d0b86395128479958bf33566ac3e5e2ab9eac3a14e3e2ddb5af7c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b4zsj" Sep 5 00:38:38.484852 kubelet[2750]: E0905 00:38:38.484840 2750 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db6ad7521d0b86395128479958bf33566ac3e5e2ab9eac3a14e3e2ddb5af7c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b4zsj" Sep 5 00:38:38.484988 kubelet[2750]: E0905 00:38:38.484925 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-b4zsj_kube-system(9be9c6bc-d4fa-4355-af97-766dd6a9dd95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-b4zsj_kube-system(9be9c6bc-d4fa-4355-af97-766dd6a9dd95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3db6ad7521d0b86395128479958bf33566ac3e5e2ab9eac3a14e3e2ddb5af7c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-b4zsj" podUID="9be9c6bc-d4fa-4355-af97-766dd6a9dd95" Sep 5 00:38:38.494049 containerd[1557]: time="2025-09-05T00:38:38.493991063Z" level=error msg="Failed to destroy network for sandbox \"d7b35b5ae6dd3e507870f3f6c74471926c41265382b363b154800f69c361e662\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.496014 containerd[1557]: time="2025-09-05T00:38:38.495971360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698d6f7d76-wmjmq,Uid:3dd04056-bdde-4727-9056-bfa7c50d6ac8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"d7b35b5ae6dd3e507870f3f6c74471926c41265382b363b154800f69c361e662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.496648 kubelet[2750]: E0905 00:38:38.496594 2750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b35b5ae6dd3e507870f3f6c74471926c41265382b363b154800f69c361e662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.496702 kubelet[2750]: E0905 00:38:38.496664 2750 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b35b5ae6dd3e507870f3f6c74471926c41265382b363b154800f69c361e662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698d6f7d76-wmjmq" Sep 5 00:38:38.496702 kubelet[2750]: E0905 00:38:38.496691 2750 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b35b5ae6dd3e507870f3f6c74471926c41265382b363b154800f69c361e662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-698d6f7d76-wmjmq" Sep 5 00:38:38.496785 kubelet[2750]: E0905 00:38:38.496753 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-698d6f7d76-wmjmq_calico-apiserver(3dd04056-bdde-4727-9056-bfa7c50d6ac8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-698d6f7d76-wmjmq_calico-apiserver(3dd04056-bdde-4727-9056-bfa7c50d6ac8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7b35b5ae6dd3e507870f3f6c74471926c41265382b363b154800f69c361e662\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-698d6f7d76-wmjmq" podUID="3dd04056-bdde-4727-9056-bfa7c50d6ac8" Sep 5 00:38:38.508718 containerd[1557]: time="2025-09-05T00:38:38.508660695Z" level=error msg="Failed to destroy network for sandbox \"3f481dcc0b7363a7739e63478863678f5e049a0953b57daa50783b5e86a8ebc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.510595 containerd[1557]: time="2025-09-05T00:38:38.510456545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hc2pv,Uid:c0127756-c384-4744-b352-fbc5dbe7777e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f481dcc0b7363a7739e63478863678f5e049a0953b57daa50783b5e86a8ebc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 
00:38:38.510756 kubelet[2750]: E0905 00:38:38.510704 2750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f481dcc0b7363a7739e63478863678f5e049a0953b57daa50783b5e86a8ebc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.510816 kubelet[2750]: E0905 00:38:38.510783 2750 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f481dcc0b7363a7739e63478863678f5e049a0953b57daa50783b5e86a8ebc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hc2pv" Sep 5 00:38:38.510816 kubelet[2750]: E0905 00:38:38.510808 2750 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f481dcc0b7363a7739e63478863678f5e049a0953b57daa50783b5e86a8ebc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hc2pv" Sep 5 00:38:38.510900 kubelet[2750]: E0905 00:38:38.510856 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-hc2pv_calico-system(c0127756-c384-4744-b352-fbc5dbe7777e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-hc2pv_calico-system(c0127756-c384-4744-b352-fbc5dbe7777e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f481dcc0b7363a7739e63478863678f5e049a0953b57daa50783b5e86a8ebc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-hc2pv" podUID="c0127756-c384-4744-b352-fbc5dbe7777e" Sep 5 00:38:38.531650 containerd[1557]: time="2025-09-05T00:38:38.531591720Z" level=error msg="Failed to destroy network for sandbox \"b2926b8dff68af45189abd9e42d41b93c90c29505d4363965a65fca707f9b74f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.533328 containerd[1557]: time="2025-09-05T00:38:38.533286111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b2q5c,Uid:3a81d938-f803-44e7-bad4-ecbbbed1be77,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2926b8dff68af45189abd9e42d41b93c90c29505d4363965a65fca707f9b74f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.533581 kubelet[2750]: E0905 00:38:38.533532 2750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2926b8dff68af45189abd9e42d41b93c90c29505d4363965a65fca707f9b74f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.533651 kubelet[2750]: E0905 00:38:38.533607 2750 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2926b8dff68af45189abd9e42d41b93c90c29505d4363965a65fca707f9b74f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b2q5c" Sep 5 00:38:38.533651 kubelet[2750]: E0905 00:38:38.533630 2750 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2926b8dff68af45189abd9e42d41b93c90c29505d4363965a65fca707f9b74f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b2q5c" Sep 5 00:38:38.533977 kubelet[2750]: E0905 00:38:38.533928 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-b2q5c_kube-system(3a81d938-f803-44e7-bad4-ecbbbed1be77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-b2q5c_kube-system(3a81d938-f803-44e7-bad4-ecbbbed1be77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2926b8dff68af45189abd9e42d41b93c90c29505d4363965a65fca707f9b74f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-b2q5c" podUID="3a81d938-f803-44e7-bad4-ecbbbed1be77" Sep 5 00:38:38.543479 containerd[1557]: time="2025-09-05T00:38:38.543425059Z" level=error msg="Failed to destroy network for sandbox \"dea1201ebc6cf995836c6a8f33d167c4e6bd7d7594d3b8db6f537227d1191ba2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.544758 containerd[1557]: time="2025-09-05T00:38:38.544725640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7ff7c6455b-rdkxh,Uid:c8330416-e46c-496b-9770-5e235046f8d7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dea1201ebc6cf995836c6a8f33d167c4e6bd7d7594d3b8db6f537227d1191ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.545047 kubelet[2750]: E0905 00:38:38.544973 2750 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dea1201ebc6cf995836c6a8f33d167c4e6bd7d7594d3b8db6f537227d1191ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:38:38.545047 kubelet[2750]: E0905 00:38:38.545057 2750 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dea1201ebc6cf995836c6a8f33d167c4e6bd7d7594d3b8db6f537227d1191ba2\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7ff7c6455b-rdkxh" Sep 5 00:38:38.545217 kubelet[2750]: E0905 00:38:38.545088 2750 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dea1201ebc6cf995836c6a8f33d167c4e6bd7d7594d3b8db6f537227d1191ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7ff7c6455b-rdkxh" Sep 5 00:38:38.545217 kubelet[2750]: E0905 00:38:38.545145 2750 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7ff7c6455b-rdkxh_calico-system(c8330416-e46c-496b-9770-5e235046f8d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7ff7c6455b-rdkxh_calico-system(c8330416-e46c-496b-9770-5e235046f8d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dea1201ebc6cf995836c6a8f33d167c4e6bd7d7594d3b8db6f537227d1191ba2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7ff7c6455b-rdkxh" podUID="c8330416-e46c-496b-9770-5e235046f8d7" Sep 5 00:38:38.596000 containerd[1557]: time="2025-09-05T00:38:38.595833886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 00:38:39.334270 systemd[1]: run-netns-cni\x2d46da8fd2\x2d7f77\x2d04c0\x2d2140\x2d32ab6fe7c620.mount: Deactivated successfully. Sep 5 00:38:39.334392 systemd[1]: run-netns-cni\x2d6a719aaa\x2d6b77\x2d1595\x2d545a\x2daae1cd18a9b1.mount: Deactivated successfully. Sep 5 00:38:39.334489 systemd[1]: run-netns-cni\x2d723e8af3\x2deb56\x2d2e94\x2d452f\x2d3801f657b230.mount: Deactivated successfully. Sep 5 00:38:39.334566 systemd[1]: run-netns-cni\x2dc1cd11c4\x2d67d6\x2d1f63\x2d5a8f\x2d8bfeeb5c8c35.mount: Deactivated successfully. Sep 5 00:38:44.785407 kubelet[2750]: I0905 00:38:44.785251 2750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:38:44.786573 kubelet[2750]: E0905 00:38:44.786549 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:45.346694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount241515026.mount: Deactivated successfully. 
Sep 5 00:38:45.610029 kubelet[2750]: E0905 00:38:45.609835 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:46.696386 kubelet[2750]: E0905 00:38:46.696304 2750 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.231s" Sep 5 00:38:47.102338 containerd[1557]: time="2025-09-05T00:38:47.102230166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:47.188011 containerd[1557]: time="2025-09-05T00:38:47.187927764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 5 00:38:47.258038 containerd[1557]: time="2025-09-05T00:38:47.257958180Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:47.328320 containerd[1557]: time="2025-09-05T00:38:47.328225806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:47.328945 containerd[1557]: time="2025-09-05T00:38:47.328904994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.732999212s" Sep 5 00:38:47.329036 containerd[1557]: time="2025-09-05T00:38:47.328952676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 5 00:38:47.474737 containerd[1557]: time="2025-09-05T00:38:47.474545041Z" level=info msg="CreateContainer within sandbox \"0a13d0de9f8b6a2bda471f8e4c6f15df3e6981cc55fd4f0dd612b6dc00b2bd02\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 00:38:47.738990 containerd[1557]: time="2025-09-05T00:38:47.738172816Z" level=info msg="Container ff167fcb2fbd9de81949c5f9a9391a46aa0d6d78f5123c385b283292f0f379a2: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:47.764642 containerd[1557]: time="2025-09-05T00:38:47.764573447Z" level=info msg="CreateContainer within sandbox \"0a13d0de9f8b6a2bda471f8e4c6f15df3e6981cc55fd4f0dd612b6dc00b2bd02\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ff167fcb2fbd9de81949c5f9a9391a46aa0d6d78f5123c385b283292f0f379a2\"" Sep 5 00:38:47.765521 containerd[1557]: time="2025-09-05T00:38:47.765462913Z" level=info msg="StartContainer for \"ff167fcb2fbd9de81949c5f9a9391a46aa0d6d78f5123c385b283292f0f379a2\"" Sep 5 00:38:47.767648 containerd[1557]: time="2025-09-05T00:38:47.767576936Z" level=info msg="connecting to shim ff167fcb2fbd9de81949c5f9a9391a46aa0d6d78f5123c385b283292f0f379a2" address="unix:///run/containerd/s/a3164b3e044229cb16d85d5029b01270d4d81f01f13c0736db91b92fa0be101e" protocol=ttrpc version=3 Sep 5 00:38:47.793544 systemd[1]: Started cri-containerd-ff167fcb2fbd9de81949c5f9a9391a46aa0d6d78f5123c385b283292f0f379a2.scope - libcontainer container 
ff167fcb2fbd9de81949c5f9a9391a46aa0d6d78f5123c385b283292f0f379a2. Sep 5 00:38:48.210972 containerd[1557]: time="2025-09-05T00:38:48.210911106Z" level=info msg="StartContainer for \"ff167fcb2fbd9de81949c5f9a9391a46aa0d6d78f5123c385b283292f0f379a2\" returns successfully" Sep 5 00:38:48.222504 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 00:38:48.224118 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 5 00:38:48.825961 containerd[1557]: time="2025-09-05T00:38:48.825894104Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff167fcb2fbd9de81949c5f9a9391a46aa0d6d78f5123c385b283292f0f379a2\" id:\"a2c8457d33d533aa25c009bf764523d843a3a0f658d94fe3a786a7f2530c0c00\" pid:3873 exit_status:1 exited_at:{seconds:1757032728 nanos:825387071}" Sep 5 00:38:49.007456 kubelet[2750]: I0905 00:38:49.007360 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tdz7w" podStartSLOduration=2.823031394 podStartE2EDuration="32.007338984s" podCreationTimestamp="2025-09-05 00:38:17 +0000 UTC" firstStartedPulling="2025-09-05 00:38:18.145586493 +0000 UTC m=+24.775620732" lastFinishedPulling="2025-09-05 00:38:47.329894084 +0000 UTC m=+53.959928322" observedRunningTime="2025-09-05 00:38:49.006393672 +0000 UTC m=+55.636427910" watchObservedRunningTime="2025-09-05 00:38:49.007338984 +0000 UTC m=+55.637373222" Sep 5 00:38:49.149641 kubelet[2750]: I0905 00:38:49.149436 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c8330416-e46c-496b-9770-5e235046f8d7-whisker-backend-key-pair\") pod \"c8330416-e46c-496b-9770-5e235046f8d7\" (UID: \"c8330416-e46c-496b-9770-5e235046f8d7\") " Sep 5 00:38:49.149641 kubelet[2750]: I0905 00:38:49.149523 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpb9c\" (UniqueName: \"kubernetes.io/projected/c8330416-e46c-496b-9770-5e235046f8d7-kube-api-access-dpb9c\") pod \"c8330416-e46c-496b-9770-5e235046f8d7\" (UID: \"c8330416-e46c-496b-9770-5e235046f8d7\") " Sep 5 00:38:49.149641 kubelet[2750]: I0905 00:38:49.149559 2750 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8330416-e46c-496b-9770-5e235046f8d7-whisker-ca-bundle\") pod \"c8330416-e46c-496b-9770-5e235046f8d7\" (UID: \"c8330416-e46c-496b-9770-5e235046f8d7\") " Sep 5 00:38:49.150310 kubelet[2750]: I0905 00:38:49.150268 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8330416-e46c-496b-9770-5e235046f8d7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c8330416-e46c-496b-9770-5e235046f8d7" (UID: "c8330416-e46c-496b-9770-5e235046f8d7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 00:38:49.155377 kubelet[2750]: I0905 00:38:49.155276 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8330416-e46c-496b-9770-5e235046f8d7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c8330416-e46c-496b-9770-5e235046f8d7" (UID: "c8330416-e46c-496b-9770-5e235046f8d7"). InnerVolumeSpecName "whisker-backend-key-pair".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 00:38:49.155377 kubelet[2750]: I0905 00:38:49.155321 2750 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8330416-e46c-496b-9770-5e235046f8d7-kube-api-access-dpb9c" (OuterVolumeSpecName: "kube-api-access-dpb9c") pod "c8330416-e46c-496b-9770-5e235046f8d7" (UID: "c8330416-e46c-496b-9770-5e235046f8d7"). InnerVolumeSpecName "kube-api-access-dpb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:38:49.156238 systemd[1]: var-lib-kubelet-pods-c8330416\x2de46c\x2d496b\x2d9770\x2d5e235046f8d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddpb9c.mount: Deactivated successfully. Sep 5 00:38:49.156402 systemd[1]: var-lib-kubelet-pods-c8330416\x2de46c\x2d496b\x2d9770\x2d5e235046f8d7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 00:38:49.249937 kubelet[2750]: I0905 00:38:49.249882 2750 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dpb9c\" (UniqueName: \"kubernetes.io/projected/c8330416-e46c-496b-9770-5e235046f8d7-kube-api-access-dpb9c\") on node \"localhost\" DevicePath \"\"" Sep 5 00:38:49.249937 kubelet[2750]: I0905 00:38:49.249921 2750 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8330416-e46c-496b-9770-5e235046f8d7-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 00:38:49.249937 kubelet[2750]: I0905 00:38:49.249930 2750 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c8330416-e46c-496b-9770-5e235046f8d7-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 00:38:49.466347 containerd[1557]: time="2025-09-05T00:38:49.466196537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hc2pv,Uid:c0127756-c384-4744-b352-fbc5dbe7777e,Namespace:calico-system,Attempt:0,}" Sep 5 00:38:49.476616 systemd[1]: Removed slice kubepods-besteffort-podc8330416_e46c_496b_9770_5e235046f8d7.slice - libcontainer container kubepods-besteffort-podc8330416_e46c_496b_9770_5e235046f8d7.slice. Sep 5 00:38:49.710406 containerd[1557]: time="2025-09-05T00:38:49.710335487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff167fcb2fbd9de81949c5f9a9391a46aa0d6d78f5123c385b283292f0f379a2\" id:\"d0eb840b6f395be5b683dc9da9c3603dca0afc36dde00c776fb4afcb005c4a51\" pid:3924 exit_status:1 exited_at:{seconds:1757032729 nanos:709890014}" Sep 5 00:38:49.976061 systemd[1]: Created slice kubepods-besteffort-podab218a9c_7c61_4c5d_8c43_1a0f2f1edb7d.slice - libcontainer container kubepods-besteffort-podab218a9c_7c61_4c5d_8c43_1a0f2f1edb7d.slice. 
Sep 5 00:38:50.057988 kubelet[2750]: I0905 00:38:50.057926 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz9nb\" (UniqueName: \"kubernetes.io/projected/ab218a9c-7c61-4c5d-8c43-1a0f2f1edb7d-kube-api-access-lz9nb\") pod \"whisker-568cf579f5-58wll\" (UID: \"ab218a9c-7c61-4c5d-8c43-1a0f2f1edb7d\") " pod="calico-system/whisker-568cf579f5-58wll" Sep 5 00:38:50.057988 kubelet[2750]: I0905 00:38:50.057986 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab218a9c-7c61-4c5d-8c43-1a0f2f1edb7d-whisker-ca-bundle\") pod \"whisker-568cf579f5-58wll\" (UID: \"ab218a9c-7c61-4c5d-8c43-1a0f2f1edb7d\") " pod="calico-system/whisker-568cf579f5-58wll" Sep 5 00:38:50.058522 kubelet[2750]: I0905 00:38:50.058013 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ab218a9c-7c61-4c5d-8c43-1a0f2f1edb7d-whisker-backend-key-pair\") pod \"whisker-568cf579f5-58wll\" (UID: \"ab218a9c-7c61-4c5d-8c43-1a0f2f1edb7d\") " pod="calico-system/whisker-568cf579f5-58wll" Sep 5 00:38:50.163770 systemd-networkd[1486]: cali22b1de8b300: Link UP Sep 5 00:38:50.165185 systemd-networkd[1486]: cali22b1de8b300: Gained carrier Sep 5 00:38:50.397425 containerd[1557]: 2025-09-05 00:38:49.577 [INFO][3899] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:38:50.397425 containerd[1557]: 2025-09-05 00:38:49.891 [INFO][3899] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--hc2pv-eth0 goldmane-54d579b49d- calico-system c0127756-c384-4744-b352-fbc5dbe7777e 864 0 2025-09-05 00:38:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-hc2pv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali22b1de8b300 [] [] }} ContainerID="2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" Namespace="calico-system" Pod="goldmane-54d579b49d-hc2pv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hc2pv-" Sep 5 00:38:50.397425 containerd[1557]: 2025-09-05 00:38:49.891 [INFO][3899] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" Namespace="calico-system" Pod="goldmane-54d579b49d-hc2pv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hc2pv-eth0" Sep 5 00:38:50.397425 containerd[1557]: 2025-09-05 00:38:50.065 [INFO][3938] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" HandleID="k8s-pod-network.2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" Workload="localhost-k8s-goldmane--54d579b49d--hc2pv-eth0" Sep 5 00:38:50.397762 containerd[1557]: 2025-09-05 00:38:50.066 [INFO][3938] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" HandleID="k8s-pod-network.2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" Workload="localhost-k8s-goldmane--54d579b49d--hc2pv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00033b9e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-hc2pv", "timestamp":"2025-09-05 00:38:50.065821332 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:38:50.397762 containerd[1557]: 2025-09-05 00:38:50.066 [INFO][3938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:38:50.397762 containerd[1557]: 2025-09-05 00:38:50.067 [INFO][3938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:38:50.397762 containerd[1557]: 2025-09-05 00:38:50.067 [INFO][3938] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:38:50.397762 containerd[1557]: 2025-09-05 00:38:50.079 [INFO][3938] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" host="localhost" Sep 5 00:38:50.397762 containerd[1557]: 2025-09-05 00:38:50.086 [INFO][3938] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:38:50.397762 containerd[1557]: 2025-09-05 00:38:50.094 [INFO][3938] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:38:50.397762 containerd[1557]: 2025-09-05 00:38:50.096 [INFO][3938] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:50.397762 containerd[1557]: 2025-09-05 00:38:50.098 [INFO][3938] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:50.397762 containerd[1557]: 2025-09-05 00:38:50.098 [INFO][3938] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" host="localhost" Sep 5 00:38:50.398079 containerd[1557]: 2025-09-05 00:38:50.100 [INFO][3938] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e Sep 5 00:38:50.398079 containerd[1557]: 2025-09-05 00:38:50.104 [INFO][3938] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" host="localhost" Sep 5 00:38:50.398079 containerd[1557]: 2025-09-05 00:38:50.123 [INFO][3938] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" host="localhost" Sep 5 00:38:50.398079 containerd[1557]: 2025-09-05 00:38:50.123 [INFO][3938] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" host="localhost" Sep 5 00:38:50.398079 containerd[1557]: 2025-09-05 00:38:50.123 [INFO][3938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
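The IPAM trace above assigns the first address out of the host's affinity block 192.168.88.128/26. The arithmetic behind "first candidate is .129" is plain prefix math: a /26 spans 64 addresses starting at .128, and .128 is the block address itself. A sketch with net/netip reproducing just that arithmetic; Calico's real allocator additionally consults handles and reservations before claiming an IP.

```go
// Sketch: block math for the /26 affinity block seen in the IPAM trace.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, size) // 64

	first := block.Addr().Next() // skip the block address itself
	fmt.Println("first candidate:", first)          // 192.168.88.129
	fmt.Println("in block:", block.Contains(first)) // true
}
```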
Sep 5 00:38:50.398079 containerd[1557]: 2025-09-05 00:38:50.123 [INFO][3938] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" HandleID="k8s-pod-network.2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" Workload="localhost-k8s-goldmane--54d579b49d--hc2pv-eth0" Sep 5 00:38:50.398237 containerd[1557]: 2025-09-05 00:38:50.127 [INFO][3899] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" Namespace="calico-system" Pod="goldmane-54d579b49d-hc2pv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hc2pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--hc2pv-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c0127756-c384-4744-b352-fbc5dbe7777e", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-hc2pv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali22b1de8b300", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:50.398237 containerd[1557]: 2025-09-05 00:38:50.127 [INFO][3899] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" Namespace="calico-system" Pod="goldmane-54d579b49d-hc2pv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hc2pv-eth0" Sep 5 00:38:50.398332 containerd[1557]: 2025-09-05 00:38:50.127 [INFO][3899] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22b1de8b300 ContainerID="2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" Namespace="calico-system" Pod="goldmane-54d579b49d-hc2pv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hc2pv-eth0" Sep 5 00:38:50.398332 containerd[1557]: 2025-09-05 00:38:50.173 [INFO][3899] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" Namespace="calico-system" Pod="goldmane-54d579b49d-hc2pv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hc2pv-eth0" Sep 5 00:38:50.398393 containerd[1557]: 2025-09-05 00:38:50.174 [INFO][3899] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" Namespace="calico-system" Pod="goldmane-54d579b49d-hc2pv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hc2pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--hc2pv-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c0127756-c384-4744-b352-fbc5dbe7777e", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e", Pod:"goldmane-54d579b49d-hc2pv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali22b1de8b300", MAC:"e2:cd:ff:62:93:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:50.398450 containerd[1557]: 2025-09-05 00:38:50.393 [INFO][3899] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" Namespace="calico-system" Pod="goldmane-54d579b49d-hc2pv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hc2pv-eth0" Sep 5 00:38:50.583059 containerd[1557]: time="2025-09-05T00:38:50.582991961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-568cf579f5-58wll,Uid:ab218a9c-7c61-4c5d-8c43-1a0f2f1edb7d,Namespace:calico-system,Attempt:0,}" Sep 5 00:38:50.850154 containerd[1557]: time="2025-09-05T00:38:50.850075702Z" level=info msg="connecting to shim 2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e" address="unix:///run/containerd/s/8a596c97a2243c28feef9271f39db37848f909d75efa9da0efa4770b927742ae" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:50.893133 systemd[1]: Started cri-containerd-2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e.scope - libcontainer container 2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e.
Sep 5 00:38:50.918428 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:38:50.936036 systemd-networkd[1486]: cali104d149a6b0: Link UP Sep 5 00:38:50.939444 systemd-networkd[1486]: cali104d149a6b0: Gained carrier Sep 5 00:38:50.970154 containerd[1557]: 2025-09-05 00:38:50.812 [INFO][4061] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--568cf579f5--58wll-eth0 whisker-568cf579f5- calico-system ab218a9c-7c61-4c5d-8c43-1a0f2f1edb7d 948 0 2025-09-05 00:38:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:568cf579f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-568cf579f5-58wll eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali104d149a6b0 [] [] }} ContainerID="98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" Namespace="calico-system" Pod="whisker-568cf579f5-58wll" WorkloadEndpoint="localhost-k8s-whisker--568cf579f5--58wll-" Sep 5 00:38:50.970154 containerd[1557]: 2025-09-05 00:38:50.812 [INFO][4061] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" Namespace="calico-system" Pod="whisker-568cf579f5-58wll" WorkloadEndpoint="localhost-k8s-whisker--568cf579f5--58wll-eth0" Sep 5 00:38:50.970154 containerd[1557]: 2025-09-05 00:38:50.875 [INFO][4108] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" HandleID="k8s-pod-network.98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" Workload="localhost-k8s-whisker--568cf579f5--58wll-eth0" Sep 5 00:38:50.970443 containerd[1557]: 2025-09-05 00:38:50.876 [INFO][4108] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" HandleID="k8s-pod-network.98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" Workload="localhost-k8s-whisker--568cf579f5--58wll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f660), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-568cf579f5-58wll", "timestamp":"2025-09-05 00:38:50.875963091 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:38:50.970443 containerd[1557]: 2025-09-05 00:38:50.876 [INFO][4108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:38:50.970443 containerd[1557]: 2025-09-05 00:38:50.876 [INFO][4108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:38:50.970443 containerd[1557]: 2025-09-05 00:38:50.876 [INFO][4108] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:38:50.970443 containerd[1557]: 2025-09-05 00:38:50.885 [INFO][4108] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" host="localhost" Sep 5 00:38:50.970443 containerd[1557]: 2025-09-05 00:38:50.896 [INFO][4108] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:38:50.970443 containerd[1557]: 2025-09-05 00:38:50.903 [INFO][4108] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:38:50.970443 containerd[1557]: 2025-09-05 00:38:50.905 [INFO][4108] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:50.970443 containerd[1557]: 2025-09-05 00:38:50.909 [INFO][4108] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:50.970443 containerd[1557]: 2025-09-05 00:38:50.909 [INFO][4108] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" host="localhost" Sep 5 00:38:50.970824 containerd[1557]: 2025-09-05 00:38:50.911 [INFO][4108] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a Sep 5 00:38:50.970824 containerd[1557]: 2025-09-05 00:38:50.915 [INFO][4108] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" host="localhost" Sep 5 00:38:50.970824 containerd[1557]: 2025-09-05 00:38:50.925 [INFO][4108] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" host="localhost" Sep 5 00:38:50.970824 containerd[1557]: 2025-09-05 00:38:50.925 [INFO][4108] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" host="localhost" Sep 5 00:38:50.970824 containerd[1557]: 2025-09-05 00:38:50.925 [INFO][4108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:38:50.970824 containerd[1557]: 2025-09-05 00:38:50.925 [INFO][4108] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" HandleID="k8s-pod-network.98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" Workload="localhost-k8s-whisker--568cf579f5--58wll-eth0" Sep 5 00:38:50.971049 containerd[1557]: 2025-09-05 00:38:50.932 [INFO][4061] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" Namespace="calico-system" Pod="whisker-568cf579f5-58wll" WorkloadEndpoint="localhost-k8s-whisker--568cf579f5--58wll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--568cf579f5--58wll-eth0", GenerateName:"whisker-568cf579f5-", Namespace:"calico-system", SelfLink:"", UID:"ab218a9c-7c61-4c5d-8c43-1a0f2f1edb7d", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"568cf579f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-568cf579f5-58wll", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali104d149a6b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:50.971049 containerd[1557]: 2025-09-05 00:38:50.932 [INFO][4061] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" Namespace="calico-system" Pod="whisker-568cf579f5-58wll" WorkloadEndpoint="localhost-k8s-whisker--568cf579f5--58wll-eth0" Sep 5 00:38:50.971154 containerd[1557]: 2025-09-05 00:38:50.933 [INFO][4061] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali104d149a6b0 ContainerID="98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" Namespace="calico-system" Pod="whisker-568cf579f5-58wll" WorkloadEndpoint="localhost-k8s-whisker--568cf579f5--58wll-eth0" Sep 5 00:38:50.971154 containerd[1557]: 2025-09-05 00:38:50.943 [INFO][4061] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" Namespace="calico-system" Pod="whisker-568cf579f5-58wll" WorkloadEndpoint="localhost-k8s-whisker--568cf579f5--58wll-eth0" Sep 5 00:38:50.971213 containerd[1557]: 2025-09-05 00:38:50.945 [INFO][4061] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" Namespace="calico-system" Pod="whisker-568cf579f5-58wll" WorkloadEndpoint="localhost-k8s-whisker--568cf579f5--58wll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--568cf579f5--58wll-eth0", GenerateName:"whisker-568cf579f5-", Namespace:"calico-system", SelfLink:"", UID:"ab218a9c-7c61-4c5d-8c43-1a0f2f1edb7d", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"568cf579f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a", Pod:"whisker-568cf579f5-58wll", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali104d149a6b0", MAC:"9e:d4:46:01:c1:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:50.971277 containerd[1557]: 2025-09-05 00:38:50.959 [INFO][4061] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" Namespace="calico-system" Pod="whisker-568cf579f5-58wll" WorkloadEndpoint="localhost-k8s-whisker--568cf579f5--58wll-eth0" Sep 5 00:38:51.030161 containerd[1557]: time="2025-09-05T00:38:51.030112746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hc2pv,Uid:c0127756-c384-4744-b352-fbc5dbe7777e,Namespace:calico-system,Attempt:0,} returns sandbox id \"2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e\"" Sep 5 00:38:51.032919 containerd[1557]: time="2025-09-05T00:38:51.032527044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 00:38:51.059155 containerd[1557]: time="2025-09-05T00:38:51.058502111Z" level=info msg="connecting to shim 98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a" address="unix:///run/containerd/s/b77eb6571934b00bc81d0a99dde771ccdbe6ba57f43eb93178c595e1813c3e2d" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:51.103159 systemd[1]: Started cri-containerd-98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a.scope - libcontainer container 98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a. Sep 5 00:38:51.135478 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:38:51.156268 systemd[1]: Started sshd@9-10.0.0.120:22-10.0.0.1:43642.service - OpenSSH per-connection server daemon (10.0.0.1:43642).
Sep 5 00:38:51.213933 systemd-networkd[1486]: vxlan.calico: Link UP Sep 5 00:38:51.213945 systemd-networkd[1486]: vxlan.calico: Gained carrier Sep 5 00:38:51.225047 containerd[1557]: time="2025-09-05T00:38:51.225001731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-568cf579f5-58wll,Uid:ab218a9c-7c61-4c5d-8c43-1a0f2f1edb7d,Namespace:calico-system,Attempt:0,} returns sandbox id \"98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a\"" Sep 5 00:38:51.274388 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 43642 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:38:51.276324 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:51.283211 systemd-logind[1536]: New session 10 of user core. Sep 5 00:38:51.291093 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 00:38:51.463909 sshd[4251]: Connection closed by 10.0.0.1 port 43642 Sep 5 00:38:51.465124 sshd-session[4216]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:51.465998 kubelet[2750]: E0905 00:38:51.465303 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:51.466360 containerd[1557]: time="2025-09-05T00:38:51.466209599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b2q5c,Uid:3a81d938-f803-44e7-bad4-ecbbbed1be77,Namespace:kube-system,Attempt:0,}" Sep 5 00:38:51.476404 kubelet[2750]: I0905 00:38:51.476153 2750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8330416-e46c-496b-9770-5e235046f8d7" path="/var/lib/kubelet/pods/c8330416-e46c-496b-9770-5e235046f8d7/volumes" Sep 5 00:38:51.478471 systemd[1]: sshd@9-10.0.0.120:22-10.0.0.1:43642.service: Deactivated successfully. Sep 5 00:38:51.482610 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:38:51.486972 systemd-logind[1536]: Session 10 logged out. Waiting for processes to exit. Sep 5 00:38:51.489094 systemd-logind[1536]: Removed session 10. 
Sep 5 00:38:51.559159 systemd-networkd[1486]: cali22b1de8b300: Gained IPv6LL Sep 5 00:38:51.636705 systemd-networkd[1486]: caliea686619323: Link UP Sep 5 00:38:51.638349 systemd-networkd[1486]: caliea686619323: Gained carrier Sep 5 00:38:51.771635 containerd[1557]: 2025-09-05 00:38:51.524 [INFO][4265] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--b2q5c-eth0 coredns-674b8bbfcf- kube-system 3a81d938-f803-44e7-bad4-ecbbbed1be77 867 0 2025-09-05 00:38:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-b2q5c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliea686619323 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2q5c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2q5c-" Sep 5 00:38:51.771635 containerd[1557]: 2025-09-05 00:38:51.524 [INFO][4265] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2q5c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2q5c-eth0" Sep 5 00:38:51.771635 containerd[1557]: 2025-09-05 00:38:51.572 [INFO][4290] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" HandleID="k8s-pod-network.dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" Workload="localhost-k8s-coredns--674b8bbfcf--b2q5c-eth0" Sep 5 00:38:51.772308 containerd[1557]: 2025-09-05 00:38:51.573 [INFO][4290] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" HandleID="k8s-pod-network.dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" Workload="localhost-k8s-coredns--674b8bbfcf--b2q5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139670), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-b2q5c", "timestamp":"2025-09-05 00:38:51.572962089 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:38:51.772308 containerd[1557]: 2025-09-05 00:38:51.573 [INFO][4290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:38:51.772308 containerd[1557]: 2025-09-05 00:38:51.573 [INFO][4290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:38:51.772308 containerd[1557]: 2025-09-05 00:38:51.573 [INFO][4290] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:38:51.772308 containerd[1557]: 2025-09-05 00:38:51.583 [INFO][4290] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" host="localhost" Sep 5 00:38:51.772308 containerd[1557]: 2025-09-05 00:38:51.596 [INFO][4290] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:38:51.772308 containerd[1557]: 2025-09-05 00:38:51.600 [INFO][4290] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:38:51.772308 containerd[1557]: 2025-09-05 00:38:51.602 [INFO][4290] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:51.772308 containerd[1557]: 2025-09-05 00:38:51.605 [INFO][4290] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:51.772308 containerd[1557]: 2025-09-05 00:38:51.605 [INFO][4290] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" host="localhost" Sep 5 00:38:51.772605 containerd[1557]: 2025-09-05 00:38:51.607 [INFO][4290] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a Sep 5 00:38:51.772605 containerd[1557]: 2025-09-05 00:38:51.613 [INFO][4290] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" host="localhost" Sep 5 00:38:51.772605 containerd[1557]: 2025-09-05 00:38:51.620 [INFO][4290] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" host="localhost" Sep 5 00:38:51.772605 containerd[1557]: 2025-09-05 00:38:51.620 [INFO][4290] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" host="localhost" Sep 5 00:38:51.772605 containerd[1557]: 2025-09-05 00:38:51.620 [INFO][4290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:38:51.772605 containerd[1557]: 2025-09-05 00:38:51.620 [INFO][4290] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" HandleID="k8s-pod-network.dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" Workload="localhost-k8s-coredns--674b8bbfcf--b2q5c-eth0" Sep 5 00:38:51.772811 containerd[1557]: 2025-09-05 00:38:51.632 [INFO][4265] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2q5c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2q5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b2q5c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3a81d938-f803-44e7-bad4-ecbbbed1be77", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-b2q5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliea686619323", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:51.772941 containerd[1557]: 2025-09-05 00:38:51.632 [INFO][4265] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2q5c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2q5c-eth0" Sep 5 00:38:51.772941 containerd[1557]: 2025-09-05 00:38:51.632 [INFO][4265] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea686619323 ContainerID="dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2q5c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2q5c-eth0" Sep 5 00:38:51.772941 containerd[1557]: 2025-09-05 00:38:51.637 [INFO][4265] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2q5c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2q5c-eth0" Sep 5 00:38:51.773036 containerd[1557]: 2025-09-05 00:38:51.637 [INFO][4265] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2q5c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2q5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b2q5c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3a81d938-f803-44e7-bad4-ecbbbed1be77", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a", Pod:"coredns-674b8bbfcf-b2q5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliea686619323", MAC:"76:9a:53:89:f4:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:51.773036 containerd[1557]: 2025-09-05 00:38:51.765 [INFO][4265] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" Namespace="kube-system" Pod="coredns-674b8bbfcf-b2q5c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b2q5c-eth0" Sep 5 00:38:51.836926 containerd[1557]: time="2025-09-05T00:38:51.836030719Z" level=info msg="connecting to shim dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a" address="unix:///run/containerd/s/34d8535c7ce1badc07a4a28fd7773549613ee35b58efa4b97982b8d0119a38e0" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:51.868107 systemd[1]: Started cri-containerd-dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a.scope - libcontainer container dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a.
Sep 5 00:38:51.883934 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:38:52.019422 containerd[1557]: time="2025-09-05T00:38:52.019368940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b2q5c,Uid:3a81d938-f803-44e7-bad4-ecbbbed1be77,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a\"" Sep 5 00:38:52.020557 kubelet[2750]: E0905 00:38:52.020522 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:52.105071 containerd[1557]: time="2025-09-05T00:38:52.104849983Z" level=info msg="CreateContainer within sandbox \"dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:38:52.164011 containerd[1557]: time="2025-09-05T00:38:52.163274552Z" level=info msg="Container a74cf787c90918bbf9fb45607ac8e3d034310a91f0b10e892790dee1c6212326: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:52.165040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1068852920.mount: Deactivated successfully. Sep 5 00:38:52.186358 containerd[1557]: time="2025-09-05T00:38:52.186278166Z" level=info msg="CreateContainer within sandbox \"dcc4f9bfaca6a410272f0db7c71cdd7f473cb929812e2837200c423698cf414a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a74cf787c90918bbf9fb45607ac8e3d034310a91f0b10e892790dee1c6212326\"" Sep 5 00:38:52.187595 containerd[1557]: time="2025-09-05T00:38:52.187320510Z" level=info msg="StartContainer for \"a74cf787c90918bbf9fb45607ac8e3d034310a91f0b10e892790dee1c6212326\"" Sep 5 00:38:52.188491 containerd[1557]: time="2025-09-05T00:38:52.188443410Z" level=info msg="connecting to shim a74cf787c90918bbf9fb45607ac8e3d034310a91f0b10e892790dee1c6212326" address="unix:///run/containerd/s/34d8535c7ce1badc07a4a28fd7773549613ee35b58efa4b97982b8d0119a38e0" protocol=ttrpc version=3 Sep 5 00:38:52.214213 systemd[1]: Started cri-containerd-a74cf787c90918bbf9fb45607ac8e3d034310a91f0b10e892790dee1c6212326.scope - libcontainer container a74cf787c90918bbf9fb45607ac8e3d034310a91f0b10e892790dee1c6212326. 
Sep 5 00:38:52.261509 containerd[1557]: time="2025-09-05T00:38:52.261454551Z" level=info msg="StartContainer for \"a74cf787c90918bbf9fb45607ac8e3d034310a91f0b10e892790dee1c6212326\" returns successfully" Sep 5 00:38:52.391428 systemd-networkd[1486]: cali104d149a6b0: Gained IPv6LL Sep 5 00:38:52.467171 containerd[1557]: time="2025-09-05T00:38:52.467103395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d758f56b-zdnzt,Uid:ab9d8dd2-569f-46bb-90a7-db0dbd3069b5,Namespace:calico-system,Attempt:0,}" Sep 5 00:38:52.467171 containerd[1557]: time="2025-09-05T00:38:52.467159223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698d6f7d76-wmjmq,Uid:3dd04056-bdde-4727-9056-bfa7c50d6ac8,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:38:52.467446 containerd[1557]: time="2025-09-05T00:38:52.467405909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698d6f7d76-9qbb6,Uid:0b3f380c-3b71-4b6b-b407-7dc3faecece7,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:38:52.647753 kubelet[2750]: E0905 00:38:52.647620 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:52.775086 systemd-networkd[1486]: vxlan.calico: Gained IPv6LL Sep 5 00:38:52.940466 kubelet[2750]: I0905 00:38:52.940249 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-b2q5c" podStartSLOduration=52.940232203 podStartE2EDuration="52.940232203s" podCreationTimestamp="2025-09-05 00:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:38:52.93962462 +0000 UTC m=+59.569658868" watchObservedRunningTime="2025-09-05 00:38:52.940232203 +0000 UTC m=+59.570266441" Sep 5 00:38:53.288196 systemd-networkd[1486]: caliea686619323: Gained IPv6LL Sep 5 00:38:53.467300 kubelet[2750]: E0905 00:38:53.467254 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:53.467457 containerd[1557]: time="2025-09-05T00:38:53.467318292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vhkp8,Uid:3d2da156-f5da-43b6-8661-d14dd051f3ef,Namespace:calico-system,Attempt:0,}" Sep 5 00:38:53.468549 containerd[1557]: time="2025-09-05T00:38:53.468510924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b4zsj,Uid:9be9c6bc-d4fa-4355-af97-766dd6a9dd95,Namespace:kube-system,Attempt:0,}" Sep 5 00:38:53.534158 systemd-networkd[1486]: cali715fa734f81: Link UP Sep 5 00:38:53.535350 systemd-networkd[1486]: cali715fa734f81: Gained carrier Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:52.654 [INFO][4432] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--698d6f7d76--wmjmq-eth0 calico-apiserver-698d6f7d76- calico-apiserver 3dd04056-bdde-4727-9056-bfa7c50d6ac8 866 0 2025-09-05 00:38:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:698d6f7d76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-698d6f7d76-wmjmq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali715fa734f81 [] [] }} ContainerID="1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-wmjmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--wmjmq-" Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:52.654 [INFO][4432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-wmjmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--wmjmq-eth0" Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:52.963 [INFO][4460] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" HandleID="k8s-pod-network.1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" Workload="localhost-k8s-calico--apiserver--698d6f7d76--wmjmq-eth0" Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:52.963 [INFO][4460] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" HandleID="k8s-pod-network.1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" Workload="localhost-k8s-calico--apiserver--698d6f7d76--wmjmq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042e730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-698d6f7d76-wmjmq", "timestamp":"2025-09-05 00:38:52.963554002 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:38:52.963 [INFO][4460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:52.963 [INFO][4460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:52.963 [INFO][4460] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:53.003 [INFO][4460] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" host="localhost" Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:53.291 [INFO][4460] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:53.297 [INFO][4460] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:53.299 [INFO][4460] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:53.302 [INFO][4460] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:53.302 [INFO][4460] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" host="localhost" Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:53.304 [INFO][4460] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:53.450 [INFO][4460] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" host="localhost" Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:53.527 [INFO][4460] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" host="localhost" Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:53.527 [INFO][4460] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" host="localhost" Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:53.527 [INFO][4460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:38:53.564036 containerd[1557]: 2025-09-05 00:38:53.527 [INFO][4460] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" HandleID="k8s-pod-network.1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" Workload="localhost-k8s-calico--apiserver--698d6f7d76--wmjmq-eth0" Sep 5 00:38:53.564829 containerd[1557]: 2025-09-05 00:38:53.530 [INFO][4432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-wmjmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--wmjmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698d6f7d76--wmjmq-eth0", GenerateName:"calico-apiserver-698d6f7d76-", Namespace:"calico-apiserver", SelfLink:"", UID:"3dd04056-bdde-4727-9056-bfa7c50d6ac8", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698d6f7d76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-698d6f7d76-wmjmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali715fa734f81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:53.564829 containerd[1557]: 2025-09-05 00:38:53.530 [INFO][4432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-wmjmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--wmjmq-eth0" Sep 5 00:38:53.564829 containerd[1557]: 2025-09-05 00:38:53.530 [INFO][4432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali715fa734f81 ContainerID="1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-wmjmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--wmjmq-eth0" Sep 5 00:38:53.564829 containerd[1557]: 2025-09-05 00:38:53.534 [INFO][4432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-wmjmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--wmjmq-eth0" Sep 5 00:38:53.564829 containerd[1557]: 2025-09-05 00:38:53.535 [INFO][4432] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-wmjmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--wmjmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698d6f7d76--wmjmq-eth0", GenerateName:"calico-apiserver-698d6f7d76-", Namespace:"calico-apiserver", SelfLink:"", UID:"3dd04056-bdde-4727-9056-bfa7c50d6ac8", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698d6f7d76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d", Pod:"calico-apiserver-698d6f7d76-wmjmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali715fa734f81", MAC:"f6:a9:00:ee:cd:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:53.564829 containerd[1557]: 2025-09-05 00:38:53.556 [INFO][4432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-wmjmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--wmjmq-eth0" Sep 5 00:38:53.650908 kubelet[2750]: E0905 00:38:53.649301 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:53.724902 systemd-networkd[1486]: calibe8d19bd681: Link UP Sep 5 00:38:53.727133 systemd-networkd[1486]: calibe8d19bd681: Gained carrier Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.028 [INFO][4417] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--86d758f56b--zdnzt-eth0 calico-kube-controllers-86d758f56b- calico-system ab9d8dd2-569f-46bb-90a7-db0dbd3069b5 859 0 2025-09-05 00:38:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86d758f56b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-86d758f56b-zdnzt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibe8d19bd681 [] [] }} ContainerID="33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" Namespace="calico-system" Pod="calico-kube-controllers-86d758f56b-zdnzt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d758f56b--zdnzt-" Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.029 [INFO][4417] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" Namespace="calico-system" Pod="calico-kube-controllers-86d758f56b-zdnzt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d758f56b--zdnzt-eth0" Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.358 [INFO][4476] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" HandleID="k8s-pod-network.33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" Workload="localhost-k8s-calico--kube--controllers--86d758f56b--zdnzt-eth0" Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.358 [INFO][4476] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" HandleID="k8s-pod-network.33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" Workload="localhost-k8s-calico--kube--controllers--86d758f56b--zdnzt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019f940), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-86d758f56b-zdnzt", "timestamp":"2025-09-05 00:38:53.358454749 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.358 [INFO][4476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.528 [INFO][4476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.528 [INFO][4476] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.538 [INFO][4476] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" host="localhost" Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.560 [INFO][4476] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.569 [INFO][4476] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.572 [INFO][4476] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.575 [INFO][4476] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.575 [INFO][4476] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" host="localhost" Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.578 [INFO][4476] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6 Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.638 [INFO][4476] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" host="localhost" Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.707 [INFO][4476] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" host="localhost" Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.707 [INFO][4476] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" host="localhost" Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.707 [INFO][4476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:38:53.795431 containerd[1557]: 2025-09-05 00:38:53.707 [INFO][4476] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" HandleID="k8s-pod-network.33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" Workload="localhost-k8s-calico--kube--controllers--86d758f56b--zdnzt-eth0" Sep 5 00:38:53.797655 containerd[1557]: 2025-09-05 00:38:53.714 [INFO][4417] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" Namespace="calico-system" Pod="calico-kube-controllers-86d758f56b-zdnzt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d758f56b--zdnzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86d758f56b--zdnzt-eth0", GenerateName:"calico-kube-controllers-86d758f56b-", Namespace:"calico-system", SelfLink:"", UID:"ab9d8dd2-569f-46bb-90a7-db0dbd3069b5", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d758f56b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-86d758f56b-zdnzt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibe8d19bd681", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:53.797655 containerd[1557]: 2025-09-05 00:38:53.714 [INFO][4417] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" Namespace="calico-system" Pod="calico-kube-controllers-86d758f56b-zdnzt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d758f56b--zdnzt-eth0" Sep 5 00:38:53.797655 containerd[1557]: 2025-09-05 00:38:53.714 [INFO][4417] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe8d19bd681 ContainerID="33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" Namespace="calico-system" Pod="calico-kube-controllers-86d758f56b-zdnzt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d758f56b--zdnzt-eth0" Sep 5 00:38:53.797655 containerd[1557]: 2025-09-05 00:38:53.725 [INFO][4417] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" Namespace="calico-system" Pod="calico-kube-controllers-86d758f56b-zdnzt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d758f56b--zdnzt-eth0" Sep 5 00:38:53.797655 containerd[1557]: 2025-09-05 00:38:53.728 [INFO][4417] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" Namespace="calico-system" Pod="calico-kube-controllers-86d758f56b-zdnzt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d758f56b--zdnzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86d758f56b--zdnzt-eth0", GenerateName:"calico-kube-controllers-86d758f56b-", Namespace:"calico-system", SelfLink:"", UID:"ab9d8dd2-569f-46bb-90a7-db0dbd3069b5", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d758f56b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6", Pod:"calico-kube-controllers-86d758f56b-zdnzt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibe8d19bd681", MAC:"0e:78:48:09:61:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:53.797655 containerd[1557]: 2025-09-05 00:38:53.770 [INFO][4417] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" Namespace="calico-system" Pod="calico-kube-controllers-86d758f56b-zdnzt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d758f56b--zdnzt-eth0" Sep 5 00:38:53.807245 containerd[1557]: time="2025-09-05T00:38:53.807090195Z" level=info msg="connecting to shim 1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d" address="unix:///run/containerd/s/4671665751661c0090058945d395671efc90b4364373fb6a7f53ccf432ad91da" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:53.876127 systemd[1]: Started cri-containerd-1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d.scope - libcontainer container 1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d.
Sep 5 00:38:53.893693 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:38:53.985844 containerd[1557]: time="2025-09-05T00:38:53.985792743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698d6f7d76-wmjmq,Uid:3dd04056-bdde-4727-9056-bfa7c50d6ac8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d\"" Sep 5 00:38:54.008972 systemd-networkd[1486]: cali1af5a32cb74: Link UP Sep 5 00:38:54.009646 systemd-networkd[1486]: cali1af5a32cb74: Gained carrier Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.028 [INFO][4428] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--698d6f7d76--9qbb6-eth0 calico-apiserver-698d6f7d76- calico-apiserver 0b3f380c-3b71-4b6b-b407-7dc3faecece7 863 0 2025-09-05 00:38:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:698d6f7d76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-698d6f7d76-9qbb6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1af5a32cb74 [] [] }} ContainerID="13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-9qbb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--9qbb6-" Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.029 [INFO][4428] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-9qbb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--9qbb6-eth0" Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.361 [INFO][4475] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" HandleID="k8s-pod-network.13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" Workload="localhost-k8s-calico--apiserver--698d6f7d76--9qbb6-eth0" Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.361 [INFO][4475] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" HandleID="k8s-pod-network.13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" Workload="localhost-k8s-calico--apiserver--698d6f7d76--9qbb6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000124e30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-698d6f7d76-9qbb6", "timestamp":"2025-09-05 00:38:53.361005213 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.361 [INFO][4475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.707 [INFO][4475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.708 [INFO][4475] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.781 [INFO][4475] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" host="localhost" Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.792 [INFO][4475] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.823 [INFO][4475] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.830 [INFO][4475] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.845 [INFO][4475] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.845 [INFO][4475] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" host="localhost" Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.851 [INFO][4475] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60 Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.927 [INFO][4475] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" host="localhost" Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.998 [INFO][4475] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" host="localhost" Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.998 [INFO][4475] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" host="localhost" Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.998 [INFO][4475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:38:54.154457 containerd[1557]: 2025-09-05 00:38:53.998 [INFO][4475] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" HandleID="k8s-pod-network.13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" Workload="localhost-k8s-calico--apiserver--698d6f7d76--9qbb6-eth0" Sep 5 00:38:54.155038 containerd[1557]: 2025-09-05 00:38:54.003 [INFO][4428] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-9qbb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--9qbb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698d6f7d76--9qbb6-eth0", GenerateName:"calico-apiserver-698d6f7d76-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b3f380c-3b71-4b6b-b407-7dc3faecece7", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698d6f7d76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-698d6f7d76-9qbb6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1af5a32cb74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:54.155038 containerd[1557]: 2025-09-05 00:38:54.003 [INFO][4428] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-9qbb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--9qbb6-eth0" Sep 5 00:38:54.155038 containerd[1557]: 2025-09-05 00:38:54.003 [INFO][4428] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1af5a32cb74 ContainerID="13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-9qbb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--9qbb6-eth0" Sep 5 00:38:54.155038 containerd[1557]: 2025-09-05 00:38:54.009 [INFO][4428] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-9qbb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--9qbb6-eth0" Sep 5 00:38:54.155038 containerd[1557]: 2025-09-05 00:38:54.010 [INFO][4428] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-9qbb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--9qbb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--698d6f7d76--9qbb6-eth0", GenerateName:"calico-apiserver-698d6f7d76-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b3f380c-3b71-4b6b-b407-7dc3faecece7", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"698d6f7d76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60", Pod:"calico-apiserver-698d6f7d76-9qbb6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1af5a32cb74", MAC:"56:ec:c3:fb:7b:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:54.155038 containerd[1557]: 2025-09-05 00:38:54.147 [INFO][4428] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" Namespace="calico-apiserver" Pod="calico-apiserver-698d6f7d76-9qbb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--698d6f7d76--9qbb6-eth0" Sep 5 00:38:54.269080 systemd-networkd[1486]: cali79d8e0a27a7: Link UP Sep 5 00:38:54.270150 systemd-networkd[1486]: cali79d8e0a27a7: Gained carrier Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:53.721 [INFO][4510] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vhkp8-eth0 csi-node-driver- calico-system 3d2da156-f5da-43b6-8661-d14dd051f3ef 731 0 2025-09-05 00:38:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vhkp8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali79d8e0a27a7 [] [] }} ContainerID="8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" Namespace="calico-system" Pod="csi-node-driver-vhkp8" WorkloadEndpoint="localhost-k8s-csi--node--driver--vhkp8-" Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:53.722 [INFO][4510] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" Namespace="calico-system" Pod="csi-node-driver-vhkp8" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--vhkp8-eth0" Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:53.943 [INFO][4550] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" HandleID="k8s-pod-network.8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" Workload="localhost-k8s-csi--node--driver--vhkp8-eth0" Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:53.943 [INFO][4550] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" HandleID="k8s-pod-network.8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" Workload="localhost-k8s-csi--node--driver--vhkp8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000211840), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vhkp8", "timestamp":"2025-09-05 00:38:53.943213463 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:53.943 [INFO][4550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:53.998 [INFO][4550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:53.999 [INFO][4550] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:54.147 [INFO][4550] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" host="localhost" Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:54.156 [INFO][4550] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:54.162 [INFO][4550] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:54.166 [INFO][4550] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:54.170 [INFO][4550] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:54.170 [INFO][4550] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" host="localhost" Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:54.173 [INFO][4550] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4 Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:54.189 [INFO][4550] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" host="localhost" Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:54.259 [INFO][4550] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" 
host="localhost" Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:54.259 [INFO][4550] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" host="localhost" Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:54.259 [INFO][4550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:38:54.320548 containerd[1557]: 2025-09-05 00:38:54.259 [INFO][4550] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" HandleID="k8s-pod-network.8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" Workload="localhost-k8s-csi--node--driver--vhkp8-eth0" Sep 5 00:38:54.321407 containerd[1557]: 2025-09-05 00:38:54.262 [INFO][4510] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" Namespace="calico-system" Pod="csi-node-driver-vhkp8" WorkloadEndpoint="localhost-k8s-csi--node--driver--vhkp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vhkp8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3d2da156-f5da-43b6-8661-d14dd051f3ef", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vhkp8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali79d8e0a27a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:54.321407 containerd[1557]: 2025-09-05 00:38:54.262 [INFO][4510] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" Namespace="calico-system" Pod="csi-node-driver-vhkp8" WorkloadEndpoint="localhost-k8s-csi--node--driver--vhkp8-eth0" Sep 5 00:38:54.321407 containerd[1557]: 2025-09-05 00:38:54.262 [INFO][4510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79d8e0a27a7 ContainerID="8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" Namespace="calico-system" Pod="csi-node-driver-vhkp8" WorkloadEndpoint="localhost-k8s-csi--node--driver--vhkp8-eth0" Sep 5 00:38:54.321407 containerd[1557]: 2025-09-05 00:38:54.270 [INFO][4510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" Namespace="calico-system" 
Pod="csi-node-driver-vhkp8" WorkloadEndpoint="localhost-k8s-csi--node--driver--vhkp8-eth0" Sep 5 00:38:54.321407 containerd[1557]: 2025-09-05 00:38:54.271 [INFO][4510] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" Namespace="calico-system" Pod="csi-node-driver-vhkp8" WorkloadEndpoint="localhost-k8s-csi--node--driver--vhkp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vhkp8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3d2da156-f5da-43b6-8661-d14dd051f3ef", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4", Pod:"csi-node-driver-vhkp8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali79d8e0a27a7", MAC:"4e:d9:34:b5:bb:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:54.321407 containerd[1557]: 2025-09-05 00:38:54.317 [INFO][4510] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" Namespace="calico-system" Pod="csi-node-driver-vhkp8" WorkloadEndpoint="localhost-k8s-csi--node--driver--vhkp8-eth0" Sep 5 00:38:54.397828 systemd-networkd[1486]: calic15bbae1e18: Link UP Sep 5 00:38:54.399642 systemd-networkd[1486]: calic15bbae1e18: Gained carrier Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:53.930 [INFO][4524] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--b4zsj-eth0 coredns-674b8bbfcf- kube-system 9be9c6bc-d4fa-4355-af97-766dd6a9dd95 862 0 2025-09-05 00:38:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-b4zsj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic15bbae1e18 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4zsj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b4zsj-" Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:53.930 [INFO][4524] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4zsj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b4zsj-eth0" Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.182 [INFO][4610] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" HandleID="k8s-pod-network.be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" Workload="localhost-k8s-coredns--674b8bbfcf--b4zsj-eth0" Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.182 [INFO][4610] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" HandleID="k8s-pod-network.be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" Workload="localhost-k8s-coredns--674b8bbfcf--b4zsj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005002b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-b4zsj", "timestamp":"2025-09-05 00:38:54.182729101 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.183 [INFO][4610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.259 [INFO][4610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.260 [INFO][4610] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.268 [INFO][4610] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" host="localhost" Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.276 [INFO][4610] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.320 [INFO][4610] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.324 [INFO][4610] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.327 [INFO][4610] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.327 [INFO][4610] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" host="localhost" Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.329 [INFO][4610] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7 Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.356 [INFO][4610] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" host="localhost" Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.391 [INFO][4610] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" host="localhost" Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.391 [INFO][4610] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" host="localhost" Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.391 [INFO][4610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:38:54.422861 containerd[1557]: 2025-09-05 00:38:54.391 [INFO][4610] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" HandleID="k8s-pod-network.be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" Workload="localhost-k8s-coredns--674b8bbfcf--b4zsj-eth0" Sep 5 00:38:54.423981 containerd[1557]: 2025-09-05 00:38:54.394 [INFO][4524] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4zsj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b4zsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b4zsj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9be9c6bc-d4fa-4355-af97-766dd6a9dd95", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-b4zsj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic15bbae1e18", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:54.423981 containerd[1557]: 2025-09-05 00:38:54.395 [INFO][4524] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4zsj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b4zsj-eth0" Sep 5 00:38:54.423981 containerd[1557]: 2025-09-05 00:38:54.395 [INFO][4524] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic15bbae1e18 
ContainerID="be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4zsj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b4zsj-eth0" Sep 5 00:38:54.423981 containerd[1557]: 2025-09-05 00:38:54.400 [INFO][4524] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4zsj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b4zsj-eth0" Sep 5 00:38:54.423981 containerd[1557]: 2025-09-05 00:38:54.400 [INFO][4524] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4zsj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b4zsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b4zsj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9be9c6bc-d4fa-4355-af97-766dd6a9dd95", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 38, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7", Pod:"coredns-674b8bbfcf-b4zsj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic15bbae1e18", MAC:"2a:54:aa:1e:c9:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:38:54.423981 containerd[1557]: 2025-09-05 00:38:54.418 [INFO][4524] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-b4zsj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b4zsj-eth0" Sep 5 00:38:54.488602 containerd[1557]: time="2025-09-05T00:38:54.488512182Z" level=info msg="connecting to shim 33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6" address="unix:///run/containerd/s/92613fac68a8dcc7a399b8baf843a7b938f0d077aca45bd016c452fe3d2e10c2" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:54.518044 systemd[1]: Started cri-containerd-33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6.scope - libcontainer container 
33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6. Sep 5 00:38:54.534289 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:38:54.541499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2731684674.mount: Deactivated successfully. Sep 5 00:38:54.651136 kubelet[2750]: E0905 00:38:54.651083 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:54.885277 containerd[1557]: time="2025-09-05T00:38:54.885221894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d758f56b-zdnzt,Uid:ab9d8dd2-569f-46bb-90a7-db0dbd3069b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6\"" Sep 5 00:38:54.951203 systemd-networkd[1486]: cali715fa734f81: Gained IPv6LL Sep 5 00:38:55.144484 systemd-networkd[1486]: cali1af5a32cb74: Gained IPv6LL Sep 5 00:38:55.335183 systemd-networkd[1486]: cali79d8e0a27a7: Gained IPv6LL Sep 5 00:38:55.463065 systemd-networkd[1486]: calibe8d19bd681: Gained IPv6LL Sep 5 00:38:55.653949 kubelet[2750]: E0905 00:38:55.653904 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:56.231247 systemd-networkd[1486]: calic15bbae1e18: Gained IPv6LL Sep 5 00:38:56.484908 systemd[1]: Started sshd@10-10.0.0.120:22-10.0.0.1:43658.service - OpenSSH per-connection server daemon (10.0.0.1:43658). Sep 5 00:38:56.603697 sshd[4695]: Accepted publickey for core from 10.0.0.1 port 43658 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:38:56.605939 sshd-session[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:38:56.613512 systemd-logind[1536]: New session 11 of user core. Sep 5 00:38:56.626220 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 00:38:57.076332 sshd[4704]: Connection closed by 10.0.0.1 port 43658 Sep 5 00:38:57.076714 sshd-session[4695]: pam_unix(sshd:session): session closed for user core Sep 5 00:38:57.082263 systemd[1]: sshd@10-10.0.0.120:22-10.0.0.1:43658.service: Deactivated successfully. Sep 5 00:38:57.085456 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 00:38:57.086537 systemd-logind[1536]: Session 11 logged out. Waiting for processes to exit. Sep 5 00:38:57.088639 systemd-logind[1536]: Removed session 11. Sep 5 00:38:57.127255 containerd[1557]: time="2025-09-05T00:38:57.127193262Z" level=info msg="connecting to shim 13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60" address="unix:///run/containerd/s/eba92f8b479182444dbcf71d09dba4e8915a8157e45add3127282e3015422891" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:57.156212 systemd[1]: Started cri-containerd-13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60.scope - libcontainer container 13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60. 
Sep 5 00:38:57.173398 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:38:57.451822 containerd[1557]: time="2025-09-05T00:38:57.451696484Z" level=info msg="connecting to shim 8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4" address="unix:///run/containerd/s/7c5cb853f4982ab82eba29738f03eb52054554de7699ba02bec54d1d98a1f0da" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:57.488263 systemd[1]: Started cri-containerd-8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4.scope - libcontainer container 8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4. Sep 5 00:38:57.503685 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:38:57.672723 containerd[1557]: time="2025-09-05T00:38:57.672674737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-698d6f7d76-9qbb6,Uid:0b3f380c-3b71-4b6b-b407-7dc3faecece7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60\"" Sep 5 00:38:57.724975 containerd[1557]: time="2025-09-05T00:38:57.724703823Z" level=info msg="connecting to shim be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7" address="unix:///run/containerd/s/4d6a79a75a2de52edd77eea6c007989df10ad4b0e3123905f700c06ef07d0320" namespace=k8s.io protocol=ttrpc version=3 Sep 5 00:38:57.725341 containerd[1557]: time="2025-09-05T00:38:57.725001907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vhkp8,Uid:3d2da156-f5da-43b6-8661-d14dd051f3ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4\"" Sep 5 00:38:57.756132 systemd[1]: Started cri-containerd-be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7.scope - libcontainer container be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7. 
Sep 5 00:38:57.774227 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:38:57.926222 containerd[1557]: time="2025-09-05T00:38:57.926146625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b4zsj,Uid:9be9c6bc-d4fa-4355-af97-766dd6a9dd95,Namespace:kube-system,Attempt:0,} returns sandbox id \"be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7\"" Sep 5 00:38:57.927233 kubelet[2750]: E0905 00:38:57.927199 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:57.935896 containerd[1557]: time="2025-09-05T00:38:57.935827349Z" level=info msg="CreateContainer within sandbox \"be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:38:57.954910 containerd[1557]: time="2025-09-05T00:38:57.954804536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:57.959107 containerd[1557]: time="2025-09-05T00:38:57.959029329Z" level=info msg="Container a967591d361963229c16126a4dcca20ea4e94bb3fc02fc829bf73a67596a708d: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:57.959643 containerd[1557]: time="2025-09-05T00:38:57.959581391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 5 00:38:57.974954 containerd[1557]: time="2025-09-05T00:38:57.974806797Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:57.977253 containerd[1557]: time="2025-09-05T00:38:57.976998455Z" level=info msg="CreateContainer within sandbox \"be79e546bb5268b2468f0793ed49456f9022a6b4badfec1eb21aac9bc04cc8f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a967591d361963229c16126a4dcca20ea4e94bb3fc02fc829bf73a67596a708d\"" Sep 5 00:38:57.977970 containerd[1557]: time="2025-09-05T00:38:57.977904229Z" level=info msg="StartContainer for \"a967591d361963229c16126a4dcca20ea4e94bb3fc02fc829bf73a67596a708d\"" Sep 5 00:38:57.978912 containerd[1557]: time="2025-09-05T00:38:57.978832476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:38:57.979257 containerd[1557]: time="2025-09-05T00:38:57.979208510Z" level=info msg="connecting to shim a967591d361963229c16126a4dcca20ea4e94bb3fc02fc829bf73a67596a708d" address="unix:///run/containerd/s/4d6a79a75a2de52edd77eea6c007989df10ad4b0e3123905f700c06ef07d0320" protocol=ttrpc version=3 Sep 5 00:38:57.979606 containerd[1557]: time="2025-09-05T00:38:57.979540168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 6.946924774s" Sep 5 00:38:57.979606 containerd[1557]: time="2025-09-05T00:38:57.979603190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns 
image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 5 00:38:57.980972 containerd[1557]: time="2025-09-05T00:38:57.980934593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 00:38:57.986547 containerd[1557]: time="2025-09-05T00:38:57.986479495Z" level=info msg="CreateContainer within sandbox \"2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 00:38:58.001515 containerd[1557]: time="2025-09-05T00:38:58.001456192Z" level=info msg="Container 9703a559ba7d1dfe046f1046f04d26e6fdee48f0725d9f8cc5c9741295209ae7: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:38:58.005216 systemd[1]: Started cri-containerd-a967591d361963229c16126a4dcca20ea4e94bb3fc02fc829bf73a67596a708d.scope - libcontainer container a967591d361963229c16126a4dcca20ea4e94bb3fc02fc829bf73a67596a708d. Sep 5 00:38:58.027098 containerd[1557]: time="2025-09-05T00:38:58.026995758Z" level=info msg="CreateContainer within sandbox \"2302b625794fc689a1121bd75cc715abb77ee3df91647520e80c36801acd6a1e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9703a559ba7d1dfe046f1046f04d26e6fdee48f0725d9f8cc5c9741295209ae7\"" Sep 5 00:38:58.028307 containerd[1557]: time="2025-09-05T00:38:58.028252574Z" level=info msg="StartContainer for \"9703a559ba7d1dfe046f1046f04d26e6fdee48f0725d9f8cc5c9741295209ae7\"" Sep 5 00:38:58.031134 containerd[1557]: time="2025-09-05T00:38:58.030936938Z" level=info msg="connecting to shim 9703a559ba7d1dfe046f1046f04d26e6fdee48f0725d9f8cc5c9741295209ae7" address="unix:///run/containerd/s/8a596c97a2243c28feef9271f39db37848f909d75efa9da0efa4770b927742ae" protocol=ttrpc version=3 Sep 5 00:38:58.061551 systemd[1]: Started cri-containerd-9703a559ba7d1dfe046f1046f04d26e6fdee48f0725d9f8cc5c9741295209ae7.scope - libcontainer container 9703a559ba7d1dfe046f1046f04d26e6fdee48f0725d9f8cc5c9741295209ae7. 
Sep 5 00:38:58.072100 containerd[1557]: time="2025-09-05T00:38:58.072011837Z" level=info msg="StartContainer for \"a967591d361963229c16126a4dcca20ea4e94bb3fc02fc829bf73a67596a708d\" returns successfully" Sep 5 00:38:58.139413 containerd[1557]: time="2025-09-05T00:38:58.139344051Z" level=info msg="StartContainer for \"9703a559ba7d1dfe046f1046f04d26e6fdee48f0725d9f8cc5c9741295209ae7\" returns successfully" Sep 5 00:38:58.664920 kubelet[2750]: E0905 00:38:58.664821 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:58.751264 kubelet[2750]: I0905 00:38:58.751177 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-hc2pv" podStartSLOduration=34.802493161 podStartE2EDuration="41.751155514s" podCreationTimestamp="2025-09-05 00:38:17 +0000 UTC" firstStartedPulling="2025-09-05 00:38:51.031979165 +0000 UTC m=+57.662013403" lastFinishedPulling="2025-09-05 00:38:57.980641518 +0000 UTC m=+64.610675756" observedRunningTime="2025-09-05 00:38:58.741298725 +0000 UTC m=+65.371332963" watchObservedRunningTime="2025-09-05 00:38:58.751155514 +0000 UTC m=+65.381189752" Sep 5 00:38:58.759448 containerd[1557]: time="2025-09-05T00:38:58.759378872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9703a559ba7d1dfe046f1046f04d26e6fdee48f0725d9f8cc5c9741295209ae7\" id:\"c6d4fb47cf0e69cc92e744e8d0885ea27bb23ad7c2aba6311cea830b1e9cd668\" pid:4937 exit_status:1 exited_at:{seconds:1757032738 nanos:758859724}" Sep 5 00:38:58.823687 kubelet[2750]: I0905 00:38:58.823595 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-b4zsj" podStartSLOduration=58.82357302 podStartE2EDuration="58.82357302s" podCreationTimestamp="2025-09-05 00:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:38:58.823408774 +0000 UTC m=+65.453443032" watchObservedRunningTime="2025-09-05 00:38:58.82357302 +0000 UTC m=+65.453607248" Sep 5 00:38:59.672668 kubelet[2750]: E0905 00:38:59.672628 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:38:59.750323 containerd[1557]: time="2025-09-05T00:38:59.750273574Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9703a559ba7d1dfe046f1046f04d26e6fdee48f0725d9f8cc5c9741295209ae7\" id:\"2cf78e8c7dd8e0ef50f047eb4c3f851bb801da082c49d8f03aa727428ad4ca67\" pid:4969 exit_status:1 exited_at:{seconds:1757032739 nanos:749928840}" Sep 5 00:39:00.675062 kubelet[2750]: E0905 00:39:00.675026 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:39:01.299472 containerd[1557]: time="2025-09-05T00:39:01.299412597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:39:01.371262 containerd[1557]: time="2025-09-05T00:39:01.371165520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 5 00:39:01.375183 containerd[1557]: time="2025-09-05T00:39:01.375090143Z" level=info msg="ImageCreate event 
name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:39:01.377910 containerd[1557]: time="2025-09-05T00:39:01.377838681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:39:01.378624 containerd[1557]: time="2025-09-05T00:39:01.378564865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 3.397591628s" Sep 5 00:39:01.378624 containerd[1557]: time="2025-09-05T00:39:01.378604471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 5 00:39:01.380085 containerd[1557]: time="2025-09-05T00:39:01.379695755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:39:01.388764 containerd[1557]: time="2025-09-05T00:39:01.388712697Z" level=info msg="CreateContainer within sandbox \"98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 00:39:01.403422 containerd[1557]: time="2025-09-05T00:39:01.403342363Z" level=info msg="Container 460e5396fb19216402ae32863d5d245721e93a90762fedecb7cd5d35323ebd95: CDI devices from CRI Config.CDIDevices: []" Sep 5 00:39:01.413244 containerd[1557]: time="2025-09-05T00:39:01.413191561Z" level=info msg="CreateContainer within sandbox \"98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"460e5396fb19216402ae32863d5d245721e93a90762fedecb7cd5d35323ebd95\"" Sep 5 00:39:01.413901 containerd[1557]: time="2025-09-05T00:39:01.413852340Z" level=info msg="StartContainer for \"460e5396fb19216402ae32863d5d245721e93a90762fedecb7cd5d35323ebd95\"" Sep 5 00:39:01.415225 containerd[1557]: time="2025-09-05T00:39:01.415155131Z" level=info msg="connecting to shim 460e5396fb19216402ae32863d5d245721e93a90762fedecb7cd5d35323ebd95" address="unix:///run/containerd/s/b77eb6571934b00bc81d0a99dde771ccdbe6ba57f43eb93178c595e1813c3e2d" protocol=ttrpc version=3 Sep 5 00:39:01.443980 systemd[1]: Started cri-containerd-460e5396fb19216402ae32863d5d245721e93a90762fedecb7cd5d35323ebd95.scope - libcontainer container 460e5396fb19216402ae32863d5d245721e93a90762fedecb7cd5d35323ebd95. Sep 5 00:39:01.500149 containerd[1557]: time="2025-09-05T00:39:01.500075292Z" level=info msg="StartContainer for \"460e5396fb19216402ae32863d5d245721e93a90762fedecb7cd5d35323ebd95\" returns successfully" Sep 5 00:39:02.091183 systemd[1]: Started sshd@11-10.0.0.120:22-10.0.0.1:38810.service - OpenSSH per-connection server daemon (10.0.0.1:38810). Sep 5 00:39:02.160938 sshd[5027]: Accepted publickey for core from 10.0.0.1 port 38810 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8 Sep 5 00:39:02.163162 sshd-session[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:39:02.168595 systemd-logind[1536]: New session 12 of user core. Sep 5 00:39:02.177184 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 5 00:39:02.326957 sshd[5030]: Connection closed by 10.0.0.1 port 38810
Sep 5 00:39:02.327305 sshd-session[5027]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:02.332610 systemd[1]: sshd@11-10.0.0.120:22-10.0.0.1:38810.service: Deactivated successfully.
Sep 5 00:39:02.335076 systemd[1]: session-12.scope: Deactivated successfully.
Sep 5 00:39:02.336128 systemd-logind[1536]: Session 12 logged out. Waiting for processes to exit.
Sep 5 00:39:02.337905 systemd-logind[1536]: Removed session 12.
Sep 5 00:39:03.617132 containerd[1557]: time="2025-09-05T00:39:03.617070211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:03.617833 containerd[1557]: time="2025-09-05T00:39:03.617803858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 5 00:39:03.618978 containerd[1557]: time="2025-09-05T00:39:03.618950185Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:03.621251 containerd[1557]: time="2025-09-05T00:39:03.621220329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:03.621939 containerd[1557]: time="2025-09-05T00:39:03.621906074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.242176713s"
Sep 5 00:39:03.622004 containerd[1557]: time="2025-09-05T00:39:03.621943786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 5 00:39:03.622777 containerd[1557]: time="2025-09-05T00:39:03.622691310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 5 00:39:03.630699 containerd[1557]: time="2025-09-05T00:39:03.630661372Z" level=info msg="CreateContainer within sandbox \"1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 5 00:39:03.639965 containerd[1557]: time="2025-09-05T00:39:03.639933692Z" level=info msg="Container 146662ba116caca2910d7f5e135f9368b22b14f6495bddad20e45e3f2a99603a: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:39:03.650603 containerd[1557]: time="2025-09-05T00:39:03.650556522Z" level=info msg="CreateContainer within sandbox \"1a255d7ade5dbf9f359539af23583617b9e083303e0fbf51528a4de23daf693d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"146662ba116caca2910d7f5e135f9368b22b14f6495bddad20e45e3f2a99603a\""
Sep 5 00:39:03.651898 containerd[1557]: time="2025-09-05T00:39:03.651062292Z" level=info msg="StartContainer for \"146662ba116caca2910d7f5e135f9368b22b14f6495bddad20e45e3f2a99603a\""
Sep 5 00:39:03.652381 containerd[1557]: time="2025-09-05T00:39:03.652351634Z" level=info msg="connecting to shim 146662ba116caca2910d7f5e135f9368b22b14f6495bddad20e45e3f2a99603a" address="unix:///run/containerd/s/4671665751661c0090058945d395671efc90b4364373fb6a7f53ccf432ad91da" protocol=ttrpc version=3
Sep 5 00:39:03.695027 systemd[1]: Started cri-containerd-146662ba116caca2910d7f5e135f9368b22b14f6495bddad20e45e3f2a99603a.scope - libcontainer container 146662ba116caca2910d7f5e135f9368b22b14f6495bddad20e45e3f2a99603a.
Sep 5 00:39:03.934858 containerd[1557]: time="2025-09-05T00:39:03.934747969Z" level=info msg="StartContainer for \"146662ba116caca2910d7f5e135f9368b22b14f6495bddad20e45e3f2a99603a\" returns successfully"
Sep 5 00:39:05.714265 kubelet[2750]: I0905 00:39:05.714207 2750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:39:06.465756 kubelet[2750]: E0905 00:39:06.465717 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:39:06.776882 containerd[1557]: time="2025-09-05T00:39:06.776813364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:06.777785 containerd[1557]: time="2025-09-05T00:39:06.777761459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 5 00:39:06.779112 containerd[1557]: time="2025-09-05T00:39:06.779052111Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:06.781299 containerd[1557]: time="2025-09-05T00:39:06.781267352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:06.782164 containerd[1557]: time="2025-09-05T00:39:06.782117840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.159393475s"
Sep 5 00:39:06.782164 containerd[1557]: time="2025-09-05T00:39:06.782156493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 5 00:39:06.784741 containerd[1557]: time="2025-09-05T00:39:06.784719680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 5 00:39:06.798721 containerd[1557]: time="2025-09-05T00:39:06.798676279Z" level=info msg="CreateContainer within sandbox \"33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 5 00:39:06.807553 containerd[1557]: time="2025-09-05T00:39:06.807494290Z" level=info msg="Container b440e2136d194bc381100e4c930dbc97edbb111dfaed1905580a43285be63fdb: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:39:06.817608 containerd[1557]: time="2025-09-05T00:39:06.817525203Z" level=info msg="CreateContainer within sandbox \"33127d02b1a4c071c77c2d6a24b98ac58c570c19085ce8c67d0d8da91090c9b6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b440e2136d194bc381100e4c930dbc97edbb111dfaed1905580a43285be63fdb\""
Sep 5 00:39:06.818581 containerd[1557]: time="2025-09-05T00:39:06.818538353Z" level=info msg="StartContainer for \"b440e2136d194bc381100e4c930dbc97edbb111dfaed1905580a43285be63fdb\""
Sep 5 00:39:06.819933 containerd[1557]: time="2025-09-05T00:39:06.819897705Z" level=info msg="connecting to shim b440e2136d194bc381100e4c930dbc97edbb111dfaed1905580a43285be63fdb" address="unix:///run/containerd/s/92613fac68a8dcc7a399b8baf843a7b938f0d077aca45bd016c452fe3d2e10c2" protocol=ttrpc version=3
Sep 5 00:39:06.862034 systemd[1]: Started cri-containerd-b440e2136d194bc381100e4c930dbc97edbb111dfaed1905580a43285be63fdb.scope - libcontainer container b440e2136d194bc381100e4c930dbc97edbb111dfaed1905580a43285be63fdb.
Sep 5 00:39:06.925534 containerd[1557]: time="2025-09-05T00:39:06.925472845Z" level=info msg="StartContainer for \"b440e2136d194bc381100e4c930dbc97edbb111dfaed1905580a43285be63fdb\" returns successfully"
Sep 5 00:39:07.185367 kubelet[2750]: I0905 00:39:07.185213 2750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:39:07.350994 systemd[1]: Started sshd@12-10.0.0.120:22-10.0.0.1:38822.service - OpenSSH per-connection server daemon (10.0.0.1:38822).
Sep 5 00:39:07.398660 kubelet[2750]: I0905 00:39:07.398460 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-698d6f7d76-wmjmq" podStartSLOduration=46.763244501 podStartE2EDuration="56.398439782s" podCreationTimestamp="2025-09-05 00:38:11 +0000 UTC" firstStartedPulling="2025-09-05 00:38:53.98735573 +0000 UTC m=+60.617389968" lastFinishedPulling="2025-09-05 00:39:03.622551011 +0000 UTC m=+70.252585249" observedRunningTime="2025-09-05 00:39:04.847578674 +0000 UTC m=+71.477612912" watchObservedRunningTime="2025-09-05 00:39:07.398439782 +0000 UTC m=+74.028474020"
Sep 5 00:39:07.450246 sshd[5144]: Accepted publickey for core from 10.0.0.1 port 38822 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:07.450680 containerd[1557]: time="2025-09-05T00:39:07.449483929Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:07.452282 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:07.452611 containerd[1557]: time="2025-09-05T00:39:07.452354251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 5 00:39:07.456723 containerd[1557]: time="2025-09-05T00:39:07.456666733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 671.844586ms"
Sep 5 00:39:07.456832 containerd[1557]: time="2025-09-05T00:39:07.456729212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 5 00:39:07.458379 containerd[1557]: time="2025-09-05T00:39:07.458338291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 5 00:39:07.458934 systemd-logind[1536]: New session 13 of user core.
Sep 5 00:39:07.466124 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 5 00:39:07.469215 containerd[1557]: time="2025-09-05T00:39:07.469152733Z" level=info msg="CreateContainer within sandbox \"13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 5 00:39:07.501120 containerd[1557]: time="2025-09-05T00:39:07.501070625Z" level=info msg="Container d3d409cb417268cb8a2e10947c94497f036ebba2a65ab3ebba66863eabd46eea: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:39:07.519534 containerd[1557]: time="2025-09-05T00:39:07.519481121Z" level=info msg="CreateContainer within sandbox \"13ad0c2afafd0eebf2bcede3b9b4458387caa43ad9b14abe858eef93b6667a60\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d3d409cb417268cb8a2e10947c94497f036ebba2a65ab3ebba66863eabd46eea\""
Sep 5 00:39:07.520881 containerd[1557]: time="2025-09-05T00:39:07.520832898Z" level=info msg="StartContainer for \"d3d409cb417268cb8a2e10947c94497f036ebba2a65ab3ebba66863eabd46eea\""
Sep 5 00:39:07.525478 containerd[1557]: time="2025-09-05T00:39:07.525060848Z" level=info msg="connecting to shim d3d409cb417268cb8a2e10947c94497f036ebba2a65ab3ebba66863eabd46eea" address="unix:///run/containerd/s/eba92f8b479182444dbcf71d09dba4e8915a8157e45add3127282e3015422891" protocol=ttrpc version=3
Sep 5 00:39:07.553328 systemd[1]: Started cri-containerd-d3d409cb417268cb8a2e10947c94497f036ebba2a65ab3ebba66863eabd46eea.scope - libcontainer container d3d409cb417268cb8a2e10947c94497f036ebba2a65ab3ebba66863eabd46eea.
Sep 5 00:39:07.612943 containerd[1557]: time="2025-09-05T00:39:07.612850163Z" level=info msg="StartContainer for \"d3d409cb417268cb8a2e10947c94497f036ebba2a65ab3ebba66863eabd46eea\" returns successfully"
Sep 5 00:39:07.637916 sshd[5151]: Connection closed by 10.0.0.1 port 38822
Sep 5 00:39:07.637665 sshd-session[5144]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:07.643055 systemd[1]: sshd@12-10.0.0.120:22-10.0.0.1:38822.service: Deactivated successfully.
Sep 5 00:39:07.645263 systemd-logind[1536]: Session 13 logged out. Waiting for processes to exit.
Sep 5 00:39:07.646695 systemd[1]: session-13.scope: Deactivated successfully.
Sep 5 00:39:07.649935 systemd-logind[1536]: Removed session 13.
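The kubelet pod_startup_latency_tracker entries (the calico-apiserver-698d6f7d76-wmjmq record above and those that follow) encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. A minimal Go sketch reproducing the wmjmq numbers from the reported fields; this mirrors the arithmetic implied by the log, not kubelet internals:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matches Go's time.Time default string form, as printed by kubelet.
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        // Timestamps copied from the wmjmq pod_startup_latency_tracker entry.
        created := parse("2025-09-05 00:38:11 +0000 UTC")
        firstPull := parse("2025-09-05 00:38:53.98735573 +0000 UTC")
        lastPull := parse("2025-09-05 00:39:03.622551011 +0000 UTC")
        observed := parse("2025-09-05 00:39:07.398439782 +0000 UTC")

        e2e := observed.Sub(created)         // 56.398439782s = podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // 46.763244501s = podStartSLOduration
        fmt.Println(e2e, slo)
    }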
Sep 5 00:39:07.744915 kubelet[2750]: I0905 00:39:07.744492 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-698d6f7d76-9qbb6" podStartSLOduration=46.960987319 podStartE2EDuration="56.744473099s" podCreationTimestamp="2025-09-05 00:38:11 +0000 UTC" firstStartedPulling="2025-09-05 00:38:57.674131852 +0000 UTC m=+64.304166090" lastFinishedPulling="2025-09-05 00:39:07.457617632 +0000 UTC m=+74.087651870" observedRunningTime="2025-09-05 00:39:07.744025703 +0000 UTC m=+74.374059951" watchObservedRunningTime="2025-09-05 00:39:07.744473099 +0000 UTC m=+74.374507337"
Sep 5 00:39:07.799645 containerd[1557]: time="2025-09-05T00:39:07.799558008Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b440e2136d194bc381100e4c930dbc97edbb111dfaed1905580a43285be63fdb\" id:\"713ba9029726ce8a3b0812082fe905ee23c5407855dc4623de5577e64ad1974a\" pid:5218 exited_at:{seconds:1757032747 nanos:799271450}"
Sep 5 00:39:07.970027 kubelet[2750]: I0905 00:39:07.968521 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-86d758f56b-zdnzt" podStartSLOduration=39.070747077 podStartE2EDuration="50.968498618s" podCreationTimestamp="2025-09-05 00:38:17 +0000 UTC" firstStartedPulling="2025-09-05 00:38:54.886846337 +0000 UTC m=+61.516880575" lastFinishedPulling="2025-09-05 00:39:06.784597838 +0000 UTC m=+73.414632116" observedRunningTime="2025-09-05 00:39:07.968426841 +0000 UTC m=+74.598461079" watchObservedRunningTime="2025-09-05 00:39:07.968498618 +0000 UTC m=+74.598532856"
Sep 5 00:39:08.727302 kubelet[2750]: I0905 00:39:08.727255 2750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:39:10.934665 containerd[1557]: time="2025-09-05T00:39:10.934566508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:10.936091 containerd[1557]: time="2025-09-05T00:39:10.936054041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 5 00:39:10.937537 containerd[1557]: time="2025-09-05T00:39:10.937496026Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:10.939586 containerd[1557]: time="2025-09-05T00:39:10.939543798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:10.940067 containerd[1557]: time="2025-09-05T00:39:10.939983628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 3.481608265s"
Sep 5 00:39:10.940067 containerd[1557]: time="2025-09-05T00:39:10.940030227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 5 00:39:10.941571 containerd[1557]: time="2025-09-05T00:39:10.941520465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 5 00:39:10.945616 containerd[1557]: time="2025-09-05T00:39:10.945578658Z" level=info msg="CreateContainer within sandbox \"8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 5 00:39:10.979668 containerd[1557]: time="2025-09-05T00:39:10.979466595Z" level=info msg="Container 1940af00e233c072f0ff6bf54a59410e1366d52802c7adc4ba9bb1f6e8c5a3ab: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:39:10.992490 containerd[1557]: time="2025-09-05T00:39:10.992422189Z" level=info msg="CreateContainer within sandbox \"8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1940af00e233c072f0ff6bf54a59410e1366d52802c7adc4ba9bb1f6e8c5a3ab\""
Sep 5 00:39:10.994082 containerd[1557]: time="2025-09-05T00:39:10.994024069Z" level=info msg="StartContainer for \"1940af00e233c072f0ff6bf54a59410e1366d52802c7adc4ba9bb1f6e8c5a3ab\""
Sep 5 00:39:10.995812 containerd[1557]: time="2025-09-05T00:39:10.995784021Z" level=info msg="connecting to shim 1940af00e233c072f0ff6bf54a59410e1366d52802c7adc4ba9bb1f6e8c5a3ab" address="unix:///run/containerd/s/7c5cb853f4982ab82eba29738f03eb52054554de7699ba02bec54d1d98a1f0da" protocol=ttrpc version=3
Sep 5 00:39:11.026069 systemd[1]: Started cri-containerd-1940af00e233c072f0ff6bf54a59410e1366d52802c7adc4ba9bb1f6e8c5a3ab.scope - libcontainer container 1940af00e233c072f0ff6bf54a59410e1366d52802c7adc4ba9bb1f6e8c5a3ab.
Sep 5 00:39:11.077354 containerd[1557]: time="2025-09-05T00:39:11.077295111Z" level=info msg="StartContainer for \"1940af00e233c072f0ff6bf54a59410e1366d52802c7adc4ba9bb1f6e8c5a3ab\" returns successfully"
Sep 5 00:39:12.465574 kubelet[2750]: E0905 00:39:12.465532 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:39:12.656469 systemd[1]: Started sshd@13-10.0.0.120:22-10.0.0.1:50812.service - OpenSSH per-connection server daemon (10.0.0.1:50812).
Sep 5 00:39:12.742187 sshd[5271]: Accepted publickey for core from 10.0.0.1 port 50812 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:12.744497 sshd-session[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:12.752303 systemd-logind[1536]: New session 14 of user core.
Sep 5 00:39:12.760144 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 5 00:39:12.937020 sshd[5274]: Connection closed by 10.0.0.1 port 50812
Sep 5 00:39:12.937579 sshd-session[5271]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:12.949474 systemd[1]: sshd@13-10.0.0.120:22-10.0.0.1:50812.service: Deactivated successfully.
Sep 5 00:39:12.952353 systemd[1]: session-14.scope: Deactivated successfully.
Sep 5 00:39:12.953679 systemd-logind[1536]: Session 14 logged out. Waiting for processes to exit.
Sep 5 00:39:12.958579 systemd[1]: Started sshd@14-10.0.0.120:22-10.0.0.1:50826.service - OpenSSH per-connection server daemon (10.0.0.1:50826).
Sep 5 00:39:12.959520 systemd-logind[1536]: Removed session 14.
Sep 5 00:39:13.014621 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 50826 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:13.016688 sshd-session[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:13.022530 systemd-logind[1536]: New session 15 of user core.
Sep 5 00:39:13.036111 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 5 00:39:13.375191 sshd[5293]: Connection closed by 10.0.0.1 port 50826
Sep 5 00:39:13.376108 sshd-session[5290]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:13.387731 systemd[1]: sshd@14-10.0.0.120:22-10.0.0.1:50826.service: Deactivated successfully.
Sep 5 00:39:13.390037 systemd[1]: session-15.scope: Deactivated successfully.
Sep 5 00:39:13.390936 systemd-logind[1536]: Session 15 logged out. Waiting for processes to exit.
Sep 5 00:39:13.394382 systemd[1]: Started sshd@15-10.0.0.120:22-10.0.0.1:50832.service - OpenSSH per-connection server daemon (10.0.0.1:50832).
Sep 5 00:39:13.396626 systemd-logind[1536]: Removed session 15.
Sep 5 00:39:13.457271 sshd[5304]: Accepted publickey for core from 10.0.0.1 port 50832 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:13.460495 sshd-session[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:13.466255 systemd-logind[1536]: New session 16 of user core.
Sep 5 00:39:13.484071 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 5 00:39:13.631767 sshd[5307]: Connection closed by 10.0.0.1 port 50832
Sep 5 00:39:13.632107 sshd-session[5304]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:13.637812 systemd[1]: sshd@15-10.0.0.120:22-10.0.0.1:50832.service: Deactivated successfully.
Sep 5 00:39:13.640138 systemd[1]: session-16.scope: Deactivated successfully.
Sep 5 00:39:13.641027 systemd-logind[1536]: Session 16 logged out. Waiting for processes to exit.
Sep 5 00:39:13.642649 systemd-logind[1536]: Removed session 16.
Sep 5 00:39:14.519372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2318321561.mount: Deactivated successfully.
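Units like sshd@15-10.0.0.120:22-10.0.0.1:50832.service in the session churn above are per-connection instances of a socket-activated sshd@.service (Accept=yes): systemd names each instance from a connection counter plus the local and remote address:port pair, which is why every SSH login here appears and disappears as its own service. A hypothetical Go helper (sshdUnitName is not a real systemd API) that rebuilds the observed naming pattern:

    package main

    import "fmt"

    // sshdUnitName reconstructs the per-connection unit names seen in this
    // journal: "sshd@<connection-nr>-<local>:<port>-<remote>:<port>.service".
    // Hypothetical helper for illustration only.
    func sshdUnitName(connNr int, localAddr string, localPort int, remoteAddr string, remotePort int) string {
        return fmt.Sprintf("sshd@%d-%s:%d-%s:%d.service",
            connNr, localAddr, localPort, remoteAddr, remotePort)
    }

    func main() {
        // Matches "sshd@15-10.0.0.120:22-10.0.0.1:50832.service" above.
        fmt.Println(sshdUnitName(15, "10.0.0.120", 22, "10.0.0.1", 50832))
    }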
Sep 5 00:39:15.420909 containerd[1557]: time="2025-09-05T00:39:15.420796173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:15.425095 containerd[1557]: time="2025-09-05T00:39:15.425005979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 5 00:39:15.430490 containerd[1557]: time="2025-09-05T00:39:15.430406526Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:15.438159 containerd[1557]: time="2025-09-05T00:39:15.437932935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:15.439032 containerd[1557]: time="2025-09-05T00:39:15.438982797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.497408869s"
Sep 5 00:39:15.439032 containerd[1557]: time="2025-09-05T00:39:15.439027011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 5 00:39:15.440443 containerd[1557]: time="2025-09-05T00:39:15.440402242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 5 00:39:15.459693 containerd[1557]: time="2025-09-05T00:39:15.459619941Z" level=info msg="CreateContainer within sandbox \"98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 5 00:39:15.521104 containerd[1557]: time="2025-09-05T00:39:15.521030933Z" level=info msg="Container 3f118aad9ebb34d2d586116344780c4ed5071d2ff152a45a885d34e6be8d3013: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:39:15.534514 containerd[1557]: time="2025-09-05T00:39:15.534450900Z" level=info msg="CreateContainer within sandbox \"98204b523ba0c0f06d8f3be14cdfbf425423e3eb32afdc01c74535d509a0278a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3f118aad9ebb34d2d586116344780c4ed5071d2ff152a45a885d34e6be8d3013\""
Sep 5 00:39:15.535096 containerd[1557]: time="2025-09-05T00:39:15.535064700Z" level=info msg="StartContainer for \"3f118aad9ebb34d2d586116344780c4ed5071d2ff152a45a885d34e6be8d3013\""
Sep 5 00:39:15.536215 containerd[1557]: time="2025-09-05T00:39:15.536188412Z" level=info msg="connecting to shim 3f118aad9ebb34d2d586116344780c4ed5071d2ff152a45a885d34e6be8d3013" address="unix:///run/containerd/s/b77eb6571934b00bc81d0a99dde771ccdbe6ba57f43eb93178c595e1813c3e2d" protocol=ttrpc version=3
Sep 5 00:39:15.565035 systemd[1]: Started cri-containerd-3f118aad9ebb34d2d586116344780c4ed5071d2ff152a45a885d34e6be8d3013.scope - libcontainer container 3f118aad9ebb34d2d586116344780c4ed5071d2ff152a45a885d34e6be8d3013.
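Each "connecting to shim ... protocol=ttrpc version=3" entry records containerd dialing a per-container shim over a unix socket under /run/containerd/s/ and speaking ttrpc, containerd's lightweight gRPC variant. A minimal connection sketch, assuming the github.com/containerd/ttrpc client package; the socket path is copied from the entry above, and the generated TaskService bindings a real caller would wrap around the client are omitted:

    package main

    import (
        "log"
        "net"

        "github.com/containerd/ttrpc"
    )

    func main() {
        // Shim socket path from the "connecting to shim" entry above.
        const addr = "/run/containerd/s/b77eb6571934b00bc81d0a99dde771ccdbe6ba57f43eb93178c595e1813c3e2d"

        conn, err := net.Dial("unix", addr)
        if err != nil {
            log.Fatal(err)
        }

        // ttrpc multiplexes request/response frames over this one connection;
        // containerd's CRI plugin issues task RPCs (Create, Start, Wait) on it.
        client := ttrpc.NewClient(conn)
        defer client.Close()

        log.Println("connected to shim over ttrpc")
    }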
Sep 5 00:39:15.617641 containerd[1557]: time="2025-09-05T00:39:15.617592501Z" level=info msg="StartContainer for \"3f118aad9ebb34d2d586116344780c4ed5071d2ff152a45a885d34e6be8d3013\" returns successfully"
Sep 5 00:39:15.760813 kubelet[2750]: I0905 00:39:15.760726 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-568cf579f5-58wll" podStartSLOduration=2.548449813 podStartE2EDuration="26.760707945s" podCreationTimestamp="2025-09-05 00:38:49 +0000 UTC" firstStartedPulling="2025-09-05 00:38:51.227916426 +0000 UTC m=+57.857950665" lastFinishedPulling="2025-09-05 00:39:15.440174559 +0000 UTC m=+82.070208797" observedRunningTime="2025-09-05 00:39:15.759783653 +0000 UTC m=+82.389817891" watchObservedRunningTime="2025-09-05 00:39:15.760707945 +0000 UTC m=+82.390742183"
Sep 5 00:39:17.241274 containerd[1557]: time="2025-09-05T00:39:17.241201873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:17.257906 containerd[1557]: time="2025-09-05T00:39:17.257833317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 5 00:39:17.264043 containerd[1557]: time="2025-09-05T00:39:17.264014573Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:17.266533 containerd[1557]: time="2025-09-05T00:39:17.266504957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:39:17.267109 containerd[1557]: time="2025-09-05T00:39:17.267067739Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.826625842s"
Sep 5 00:39:17.267172 containerd[1557]: time="2025-09-05T00:39:17.267111733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 5 00:39:17.272170 containerd[1557]: time="2025-09-05T00:39:17.272125896Z" level=info msg="CreateContainer within sandbox \"8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 5 00:39:17.281970 containerd[1557]: time="2025-09-05T00:39:17.281906819Z" level=info msg="Container c6bebe20cdebab87b849360cd51ecf87d457b174b732caa97b7597ab85c80803: CDI devices from CRI Config.CDIDevices: []"
Sep 5 00:39:17.294450 containerd[1557]: time="2025-09-05T00:39:17.294390652Z" level=info msg="CreateContainer within sandbox \"8dc57bd4c5ece4526cb94e165942247fa4b37c73158886841a5344fd6e0172a4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c6bebe20cdebab87b849360cd51ecf87d457b174b732caa97b7597ab85c80803\""
Sep 5 00:39:17.295277 containerd[1557]: time="2025-09-05T00:39:17.295210203Z" level=info msg="StartContainer for \"c6bebe20cdebab87b849360cd51ecf87d457b174b732caa97b7597ab85c80803\""
Sep 5 00:39:17.299116 containerd[1557]: time="2025-09-05T00:39:17.298362619Z" level=info msg="connecting to shim c6bebe20cdebab87b849360cd51ecf87d457b174b732caa97b7597ab85c80803" address="unix:///run/containerd/s/7c5cb853f4982ab82eba29738f03eb52054554de7699ba02bec54d1d98a1f0da" protocol=ttrpc version=3
Sep 5 00:39:17.331102 systemd[1]: Started cri-containerd-c6bebe20cdebab87b849360cd51ecf87d457b174b732caa97b7597ab85c80803.scope - libcontainer container c6bebe20cdebab87b849360cd51ecf87d457b174b732caa97b7597ab85c80803.
Sep 5 00:39:17.387468 containerd[1557]: time="2025-09-05T00:39:17.387417845Z" level=info msg="StartContainer for \"c6bebe20cdebab87b849360cd51ecf87d457b174b732caa97b7597ab85c80803\" returns successfully"
Sep 5 00:39:17.568196 kubelet[2750]: I0905 00:39:17.567685 2750 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 5 00:39:17.574277 kubelet[2750]: I0905 00:39:17.574240 2750 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 5 00:39:17.774566 kubelet[2750]: I0905 00:39:17.774471 2750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vhkp8" podStartSLOduration=41.233285569 podStartE2EDuration="1m0.774455199s" podCreationTimestamp="2025-09-05 00:38:17 +0000 UTC" firstStartedPulling="2025-09-05 00:38:57.726820848 +0000 UTC m=+64.356855076" lastFinishedPulling="2025-09-05 00:39:17.267990448 +0000 UTC m=+83.898024706" observedRunningTime="2025-09-05 00:39:17.774121654 +0000 UTC m=+84.404155892" watchObservedRunningTime="2025-09-05 00:39:17.774455199 +0000 UTC m=+84.404489437"
Sep 5 00:39:18.649824 systemd[1]: Started sshd@16-10.0.0.120:22-10.0.0.1:50840.service - OpenSSH per-connection server daemon (10.0.0.1:50840).
Sep 5 00:39:18.736637 sshd[5409]: Accepted publickey for core from 10.0.0.1 port 50840 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:18.739133 sshd-session[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:18.745457 systemd-logind[1536]: New session 17 of user core.
Sep 5 00:39:18.752089 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 5 00:39:18.971544 sshd[5412]: Connection closed by 10.0.0.1 port 50840
Sep 5 00:39:18.971889 sshd-session[5409]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:18.978022 systemd[1]: sshd@16-10.0.0.120:22-10.0.0.1:50840.service: Deactivated successfully.
Sep 5 00:39:18.980686 systemd[1]: session-17.scope: Deactivated successfully.
Sep 5 00:39:18.984120 systemd-logind[1536]: Session 17 logged out. Waiting for processes to exit.
Sep 5 00:39:18.986274 systemd-logind[1536]: Removed session 17.
Sep 5 00:39:19.766946 containerd[1557]: time="2025-09-05T00:39:19.766838927Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff167fcb2fbd9de81949c5f9a9391a46aa0d6d78f5123c385b283292f0f379a2\" id:\"e5e0196247ab5815d9c2b45040023562bac4250ddf4ed3ab96a8789e37a0ad5e\" pid:5438 exit_status:1 exited_at:{seconds:1757032759 nanos:766358894}"
Sep 5 00:39:23.986515 systemd[1]: Started sshd@17-10.0.0.120:22-10.0.0.1:59806.service - OpenSSH per-connection server daemon (10.0.0.1:59806).
Sep 5 00:39:24.054120 sshd[5454]: Accepted publickey for core from 10.0.0.1 port 59806 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:24.056109 sshd-session[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:24.063093 systemd-logind[1536]: New session 18 of user core.
Sep 5 00:39:24.080183 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 5 00:39:24.208560 sshd[5457]: Connection closed by 10.0.0.1 port 59806
Sep 5 00:39:24.208953 sshd-session[5454]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:24.214264 systemd[1]: sshd@17-10.0.0.120:22-10.0.0.1:59806.service: Deactivated successfully.
Sep 5 00:39:24.216529 systemd[1]: session-18.scope: Deactivated successfully.
Sep 5 00:39:24.217381 systemd-logind[1536]: Session 18 logged out. Waiting for processes to exit.
Sep 5 00:39:24.219081 systemd-logind[1536]: Removed session 18.
Sep 5 00:39:26.465450 kubelet[2750]: E0905 00:39:26.465394 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:39:29.227555 systemd[1]: Started sshd@18-10.0.0.120:22-10.0.0.1:59816.service - OpenSSH per-connection server daemon (10.0.0.1:59816).
Sep 5 00:39:29.279780 sshd[5471]: Accepted publickey for core from 10.0.0.1 port 59816 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:29.281559 sshd-session[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:29.286359 systemd-logind[1536]: New session 19 of user core.
Sep 5 00:39:29.298244 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 5 00:39:29.419386 sshd[5474]: Connection closed by 10.0.0.1 port 59816
Sep 5 00:39:29.419988 sshd-session[5471]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:29.426932 systemd[1]: sshd@18-10.0.0.120:22-10.0.0.1:59816.service: Deactivated successfully.
Sep 5 00:39:29.429405 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 00:39:29.430518 systemd-logind[1536]: Session 19 logged out. Waiting for processes to exit.
Sep 5 00:39:29.432607 systemd-logind[1536]: Removed session 19.
Sep 5 00:39:29.936620 containerd[1557]: time="2025-09-05T00:39:29.936527321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9703a559ba7d1dfe046f1046f04d26e6fdee48f0725d9f8cc5c9741295209ae7\" id:\"1a70ce48b7c68188fcc8c20c552bc4173e0c2c7fc54753c62b2dd83387c5cdc3\" pid:5498 exited_at:{seconds:1757032769 nanos:936193758}"
Sep 5 00:39:30.465838 kubelet[2750]: E0905 00:39:30.465790 2750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:39:30.776421 kubelet[2750]: I0905 00:39:30.776367 2750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:39:34.436894 systemd[1]: Started sshd@19-10.0.0.120:22-10.0.0.1:54244.service - OpenSSH per-connection server daemon (10.0.0.1:54244).
Sep 5 00:39:34.517564 sshd[5523]: Accepted publickey for core from 10.0.0.1 port 54244 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:34.519501 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:34.525285 systemd-logind[1536]: New session 20 of user core.
Sep 5 00:39:34.535144 systemd[1]: Started session-20.scope - Session 20 of User core.
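The recurring dns.go:153 errors above come from the glibc resolver's three-nameserver ceiling (MAXNS = 3): when the kubelet builds a pod's resolv.conf from the node's, any nameservers past the first three are dropped and the warning lists the line that was actually applied (here 1.1.1.1, 1.0.0.1, 8.8.8.8). A sketch of that truncation in Go, under the stated MAXNS assumption and not kubelet's actual implementation:

    package main

    import (
        "fmt"
        "strings"
    )

    // maxNameservers mirrors the glibc resolver limit the kubelet enforces.
    const maxNameservers = 3

    // applyNameserverLimit keeps only the first three nameserver entries,
    // as described by the "Nameserver limits exceeded" warnings above.
    func applyNameserverLimit(resolvConf string) []string {
        var servers []string
        for _, line := range strings.Split(resolvConf, "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            servers = servers[:maxNameservers]
        }
        return servers
    }

    func main() {
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
        fmt.Println(applyNameserverLimit(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }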
Sep 5 00:39:34.681301 sshd[5528]: Connection closed by 10.0.0.1 port 54244
Sep 5 00:39:34.680235 sshd-session[5523]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:34.691370 systemd[1]: sshd@19-10.0.0.120:22-10.0.0.1:54244.service: Deactivated successfully.
Sep 5 00:39:34.694501 systemd[1]: session-20.scope: Deactivated successfully.
Sep 5 00:39:34.696086 systemd-logind[1536]: Session 20 logged out. Waiting for processes to exit.
Sep 5 00:39:34.697576 systemd-logind[1536]: Removed session 20.
Sep 5 00:39:37.777369 containerd[1557]: time="2025-09-05T00:39:37.777281610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b440e2136d194bc381100e4c930dbc97edbb111dfaed1905580a43285be63fdb\" id:\"7e2c874bba5b38609d60a6d2536ba7ca5df6be969879aff51fb3c1537f208675\" pid:5553 exited_at:{seconds:1757032777 nanos:776759892}"
Sep 5 00:39:39.698583 systemd[1]: Started sshd@20-10.0.0.120:22-10.0.0.1:54256.service - OpenSSH per-connection server daemon (10.0.0.1:54256).
Sep 5 00:39:39.914622 sshd[5564]: Accepted publickey for core from 10.0.0.1 port 54256 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:39.916467 sshd-session[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:39.921257 systemd-logind[1536]: New session 21 of user core.
Sep 5 00:39:39.931072 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 5 00:39:40.071115 sshd[5567]: Connection closed by 10.0.0.1 port 54256
Sep 5 00:39:40.071623 sshd-session[5564]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:40.083534 systemd[1]: sshd@20-10.0.0.120:22-10.0.0.1:54256.service: Deactivated successfully.
Sep 5 00:39:40.086544 systemd[1]: session-21.scope: Deactivated successfully.
Sep 5 00:39:40.087501 systemd-logind[1536]: Session 21 logged out. Waiting for processes to exit.
Sep 5 00:39:40.092410 systemd[1]: Started sshd@21-10.0.0.120:22-10.0.0.1:39454.service - OpenSSH per-connection server daemon (10.0.0.1:39454).
Sep 5 00:39:40.093362 systemd-logind[1536]: Removed session 21.
Sep 5 00:39:40.160894 sshd[5580]: Accepted publickey for core from 10.0.0.1 port 39454 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:40.163007 sshd-session[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:40.168619 systemd-logind[1536]: New session 22 of user core.
Sep 5 00:39:40.179209 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 5 00:39:40.615245 sshd[5583]: Connection closed by 10.0.0.1 port 39454
Sep 5 00:39:40.616024 sshd-session[5580]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:40.629903 systemd[1]: sshd@21-10.0.0.120:22-10.0.0.1:39454.service: Deactivated successfully.
Sep 5 00:39:40.632177 systemd[1]: session-22.scope: Deactivated successfully.
Sep 5 00:39:40.633276 systemd-logind[1536]: Session 22 logged out. Waiting for processes to exit.
Sep 5 00:39:40.637325 systemd[1]: Started sshd@22-10.0.0.120:22-10.0.0.1:39466.service - OpenSSH per-connection server daemon (10.0.0.1:39466).
Sep 5 00:39:40.638216 systemd-logind[1536]: Removed session 22.
Sep 5 00:39:40.707642 sshd[5595]: Accepted publickey for core from 10.0.0.1 port 39466 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:40.709545 sshd-session[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:40.715396 systemd-logind[1536]: New session 23 of user core.
Sep 5 00:39:40.727019 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 5 00:39:41.537787 sshd[5598]: Connection closed by 10.0.0.1 port 39466
Sep 5 00:39:41.539294 sshd-session[5595]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:41.554020 systemd[1]: sshd@22-10.0.0.120:22-10.0.0.1:39466.service: Deactivated successfully.
Sep 5 00:39:41.557159 systemd[1]: session-23.scope: Deactivated successfully.
Sep 5 00:39:41.559644 systemd-logind[1536]: Session 23 logged out. Waiting for processes to exit.
Sep 5 00:39:41.564184 systemd[1]: Started sshd@23-10.0.0.120:22-10.0.0.1:39474.service - OpenSSH per-connection server daemon (10.0.0.1:39474).
Sep 5 00:39:41.568103 systemd-logind[1536]: Removed session 23.
Sep 5 00:39:41.626291 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 39474 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:41.628304 sshd-session[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:41.633591 systemd-logind[1536]: New session 24 of user core.
Sep 5 00:39:41.642118 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 5 00:39:42.114242 sshd[5632]: Connection closed by 10.0.0.1 port 39474
Sep 5 00:39:42.115097 sshd-session[5629]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:42.127131 systemd[1]: sshd@23-10.0.0.120:22-10.0.0.1:39474.service: Deactivated successfully.
Sep 5 00:39:42.131229 systemd[1]: session-24.scope: Deactivated successfully.
Sep 5 00:39:42.134138 systemd-logind[1536]: Session 24 logged out. Waiting for processes to exit.
Sep 5 00:39:42.136823 systemd[1]: Started sshd@24-10.0.0.120:22-10.0.0.1:39490.service - OpenSSH per-connection server daemon (10.0.0.1:39490).
Sep 5 00:39:42.138630 systemd-logind[1536]: Removed session 24.
Sep 5 00:39:42.205760 sshd[5643]: Accepted publickey for core from 10.0.0.1 port 39490 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:42.207647 sshd-session[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:42.212665 systemd-logind[1536]: New session 25 of user core.
Sep 5 00:39:42.234120 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 5 00:39:42.355771 sshd[5646]: Connection closed by 10.0.0.1 port 39490
Sep 5 00:39:42.356229 sshd-session[5643]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:42.361999 systemd[1]: sshd@24-10.0.0.120:22-10.0.0.1:39490.service: Deactivated successfully.
Sep 5 00:39:42.364186 systemd[1]: session-25.scope: Deactivated successfully.
Sep 5 00:39:42.364982 systemd-logind[1536]: Session 25 logged out. Waiting for processes to exit.
Sep 5 00:39:42.367116 systemd-logind[1536]: Removed session 25.
Sep 5 00:39:47.375250 systemd[1]: Started sshd@25-10.0.0.120:22-10.0.0.1:39496.service - OpenSSH per-connection server daemon (10.0.0.1:39496).
Sep 5 00:39:47.438399 sshd[5659]: Accepted publickey for core from 10.0.0.1 port 39496 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:47.440386 sshd-session[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:47.445924 systemd-logind[1536]: New session 26 of user core.
Sep 5 00:39:47.454067 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 5 00:39:47.583438 sshd[5662]: Connection closed by 10.0.0.1 port 39496
Sep 5 00:39:47.584103 sshd-session[5659]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:47.590797 systemd[1]: sshd@25-10.0.0.120:22-10.0.0.1:39496.service: Deactivated successfully.
Sep 5 00:39:47.593510 systemd[1]: session-26.scope: Deactivated successfully.
Sep 5 00:39:47.596064 systemd-logind[1536]: Session 26 logged out. Waiting for processes to exit.
Sep 5 00:39:47.597857 systemd-logind[1536]: Removed session 26.
Sep 5 00:39:49.713313 containerd[1557]: time="2025-09-05T00:39:49.713260303Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff167fcb2fbd9de81949c5f9a9391a46aa0d6d78f5123c385b283292f0f379a2\" id:\"022d795fe57605ec72e199bccaddfe48663aeedcc6280371d4c3ff1cc334110d\" pid:5689 exited_at:{seconds:1757032789 nanos:711649698}"
Sep 5 00:39:50.317285 containerd[1557]: time="2025-09-05T00:39:50.317222125Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9703a559ba7d1dfe046f1046f04d26e6fdee48f0725d9f8cc5c9741295209ae7\" id:\"b07ed675f08e0544fadbce9d407537edfed8c5515be96806d8d86d864f73eb91\" pid:5713 exited_at:{seconds:1757032790 nanos:316892772}"
Sep 5 00:39:52.598480 systemd[1]: Started sshd@26-10.0.0.120:22-10.0.0.1:46568.service - OpenSSH per-connection server daemon (10.0.0.1:46568).
Sep 5 00:39:52.671441 sshd[5726]: Accepted publickey for core from 10.0.0.1 port 46568 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:52.673791 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:52.679435 systemd-logind[1536]: New session 27 of user core.
Sep 5 00:39:52.690171 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 5 00:39:52.818603 sshd[5729]: Connection closed by 10.0.0.1 port 46568
Sep 5 00:39:52.819004 sshd-session[5726]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:52.824549 systemd[1]: sshd@26-10.0.0.120:22-10.0.0.1:46568.service: Deactivated successfully.
Sep 5 00:39:52.827240 systemd[1]: session-27.scope: Deactivated successfully.
Sep 5 00:39:52.828164 systemd-logind[1536]: Session 27 logged out. Waiting for processes to exit.
Sep 5 00:39:52.830378 systemd-logind[1536]: Removed session 27.
Sep 5 00:39:57.833759 systemd[1]: Started sshd@27-10.0.0.120:22-10.0.0.1:46570.service - OpenSSH per-connection server daemon (10.0.0.1:46570).
Sep 5 00:39:57.911911 sshd[5746]: Accepted publickey for core from 10.0.0.1 port 46570 ssh2: RSA SHA256:KywQL09xehbue1E4emvbEQFRUA5soTXlPLenbFqvKX8
Sep 5 00:39:57.915223 sshd-session[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:39:57.920183 systemd-logind[1536]: New session 28 of user core.
Sep 5 00:39:57.928091 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 5 00:39:58.099060 sshd[5750]: Connection closed by 10.0.0.1 port 46570
Sep 5 00:39:58.101315 sshd-session[5746]: pam_unix(sshd:session): session closed for user core
Sep 5 00:39:58.106009 systemd[1]: sshd@27-10.0.0.120:22-10.0.0.1:46570.service: Deactivated successfully.
Sep 5 00:39:58.108583 systemd[1]: session-28.scope: Deactivated successfully.
Sep 5 00:39:58.110147 systemd-logind[1536]: Session 28 logged out. Waiting for processes to exit.
Sep 5 00:39:58.112092 systemd-logind[1536]: Removed session 28.
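The TaskExit events above carry exited_at as a protobuf timestamp: seconds since the Unix epoch plus a nanosecond remainder. Converting one back to UTC should land on the same instant as the journal timestamp of the entry that reported it, which is a handy sanity check when correlating shim events with the journal. A short Go check using the figures from the 00:39:49.713 entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // exited_at from the TaskExit entry logged at 00:39:49.713 above.
        t := time.Unix(1757032789, 711649698).UTC()
        fmt.Println(t.Format(time.RFC3339Nano)) // 2025-09-05T00:39:49.711649698Z
    }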
Sep 5 00:39:59.840613 containerd[1557]: time="2025-09-05T00:39:59.840559835Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9703a559ba7d1dfe046f1046f04d26e6fdee48f0725d9f8cc5c9741295209ae7\" id:\"69efa26202d13addbe41db78329d7d5f4a806db184feeb8e404ff22dbc7b96e0\" pid:5775 exited_at:{seconds:1757032799 nanos:840139060}"