Sep 11 00:32:29.816961 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 22:25:29 -00 2025 Sep 11 00:32:29.816980 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a Sep 11 00:32:29.816992 kernel: BIOS-provided physical RAM map: Sep 11 00:32:29.816998 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 11 00:32:29.817005 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 11 00:32:29.817011 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 11 00:32:29.817019 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 11 00:32:29.817025 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 11 00:32:29.817034 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 11 00:32:29.817040 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 11 00:32:29.817047 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Sep 11 00:32:29.817054 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 11 00:32:29.817060 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 11 00:32:29.817148 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 11 00:32:29.817160 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 11 00:32:29.817167 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 11 00:32:29.817174 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 11 
00:32:29.817181 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 11 00:32:29.817188 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 11 00:32:29.817195 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 11 00:32:29.817202 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 11 00:32:29.817209 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 11 00:32:29.817216 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 11 00:32:29.817223 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 11 00:32:29.817230 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 11 00:32:29.817239 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 11 00:32:29.817246 kernel: NX (Execute Disable) protection: active Sep 11 00:32:29.817253 kernel: APIC: Static calls initialized Sep 11 00:32:29.817260 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Sep 11 00:32:29.817267 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Sep 11 00:32:29.817274 kernel: extended physical RAM map: Sep 11 00:32:29.817281 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 11 00:32:29.817288 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 11 00:32:29.817296 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 11 00:32:29.817303 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 11 00:32:29.817310 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 11 00:32:29.817319 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 11 00:32:29.817326 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 11 00:32:29.817333 kernel: reserve setup_data: [mem 
0x0000000000900000-0x000000009b2e3017] usable Sep 11 00:32:29.817340 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Sep 11 00:32:29.817350 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Sep 11 00:32:29.817358 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Sep 11 00:32:29.817367 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Sep 11 00:32:29.817374 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 11 00:32:29.817382 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 11 00:32:29.817389 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 11 00:32:29.817396 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 11 00:32:29.817404 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 11 00:32:29.817411 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 11 00:32:29.817418 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 11 00:32:29.817426 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 11 00:32:29.817435 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 11 00:32:29.817442 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 11 00:32:29.817449 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 11 00:32:29.817457 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 11 00:32:29.817464 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 11 00:32:29.817471 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 11 00:32:29.817478 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] 
reserved Sep 11 00:32:29.817486 kernel: efi: EFI v2.7 by EDK II Sep 11 00:32:29.817493 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Sep 11 00:32:29.817500 kernel: random: crng init done Sep 11 00:32:29.817508 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Sep 11 00:32:29.817515 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Sep 11 00:32:29.817524 kernel: secureboot: Secure boot disabled Sep 11 00:32:29.817532 kernel: SMBIOS 2.8 present. Sep 11 00:32:29.817539 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 11 00:32:29.817546 kernel: DMI: Memory slots populated: 1/1 Sep 11 00:32:29.817553 kernel: Hypervisor detected: KVM Sep 11 00:32:29.817561 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 11 00:32:29.817568 kernel: kvm-clock: using sched offset of 5430391191 cycles Sep 11 00:32:29.817576 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 11 00:32:29.817584 kernel: tsc: Detected 2794.748 MHz processor Sep 11 00:32:29.817591 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 11 00:32:29.817601 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 11 00:32:29.817608 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Sep 11 00:32:29.817616 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 11 00:32:29.817631 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 11 00:32:29.817639 kernel: Using GB pages for direct mapping Sep 11 00:32:29.817646 kernel: ACPI: Early table checksum verification disabled Sep 11 00:32:29.817654 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 11 00:32:29.817662 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 11 00:32:29.817669 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 
00000001) Sep 11 00:32:29.817679 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:32:29.817687 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 11 00:32:29.817694 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:32:29.817702 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:32:29.817709 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:32:29.817717 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 11 00:32:29.817724 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 11 00:32:29.817732 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 11 00:32:29.817739 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 11 00:32:29.817748 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 11 00:32:29.817756 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 11 00:32:29.817763 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 11 00:32:29.817771 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 11 00:32:29.817778 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 11 00:32:29.817785 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 11 00:32:29.817793 kernel: No NUMA configuration found Sep 11 00:32:29.817800 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Sep 11 00:32:29.817808 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Sep 11 00:32:29.817817 kernel: Zone ranges: Sep 11 00:32:29.817824 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 11 00:32:29.817832 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Sep 11 00:32:29.817839 kernel: Normal empty Sep 11 00:32:29.817847 
kernel: Device empty Sep 11 00:32:29.817854 kernel: Movable zone start for each node Sep 11 00:32:29.817861 kernel: Early memory node ranges Sep 11 00:32:29.817869 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 11 00:32:29.817876 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 11 00:32:29.817883 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 11 00:32:29.817893 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Sep 11 00:32:29.817900 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Sep 11 00:32:29.817907 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Sep 11 00:32:29.817915 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Sep 11 00:32:29.817922 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Sep 11 00:32:29.817930 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Sep 11 00:32:29.817937 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 11 00:32:29.817945 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 11 00:32:29.817961 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 11 00:32:29.817969 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 11 00:32:29.817976 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Sep 11 00:32:29.817984 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Sep 11 00:32:29.817994 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 11 00:32:29.818001 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 11 00:32:29.818009 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Sep 11 00:32:29.818017 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 11 00:32:29.818025 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 11 00:32:29.818034 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 11 00:32:29.818042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 11 
00:32:29.818050 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 11 00:32:29.818058 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 11 00:32:29.818066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 11 00:32:29.818087 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 11 00:32:29.818094 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 11 00:32:29.818102 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 11 00:32:29.818112 kernel: TSC deadline timer available Sep 11 00:32:29.818120 kernel: CPU topo: Max. logical packages: 1 Sep 11 00:32:29.818128 kernel: CPU topo: Max. logical dies: 1 Sep 11 00:32:29.818135 kernel: CPU topo: Max. dies per package: 1 Sep 11 00:32:29.818143 kernel: CPU topo: Max. threads per core: 1 Sep 11 00:32:29.818153 kernel: CPU topo: Num. cores per package: 4 Sep 11 00:32:29.818164 kernel: CPU topo: Num. threads per package: 4 Sep 11 00:32:29.818174 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Sep 11 00:32:29.818185 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 11 00:32:29.818195 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 11 00:32:29.818205 kernel: kvm-guest: setup PV sched yield Sep 11 00:32:29.818213 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 11 00:32:29.818220 kernel: Booting paravirtualized kernel on KVM Sep 11 00:32:29.818228 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 11 00:32:29.818236 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 11 00:32:29.818244 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Sep 11 00:32:29.818252 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Sep 11 00:32:29.818260 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 11 00:32:29.818267 kernel: kvm-guest: PV spinlocks enabled Sep 11 
00:32:29.818277 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 11 00:32:29.818286 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a Sep 11 00:32:29.818294 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 11 00:32:29.818302 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 11 00:32:29.818310 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 11 00:32:29.818318 kernel: Fallback order for Node 0: 0 Sep 11 00:32:29.818325 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Sep 11 00:32:29.818333 kernel: Policy zone: DMA32 Sep 11 00:32:29.818343 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 11 00:32:29.818351 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 11 00:32:29.818358 kernel: ftrace: allocating 40103 entries in 157 pages Sep 11 00:32:29.818366 kernel: ftrace: allocated 157 pages with 5 groups Sep 11 00:32:29.818374 kernel: Dynamic Preempt: voluntary Sep 11 00:32:29.818381 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 11 00:32:29.818390 kernel: rcu: RCU event tracing is enabled. Sep 11 00:32:29.818398 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 11 00:32:29.818406 kernel: Trampoline variant of Tasks RCU enabled. Sep 11 00:32:29.818415 kernel: Rude variant of Tasks RCU enabled. Sep 11 00:32:29.818423 kernel: Tracing variant of Tasks RCU enabled. Sep 11 00:32:29.818431 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 11 00:32:29.818439 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 11 00:32:29.818447 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 11 00:32:29.818455 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 11 00:32:29.818463 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 11 00:32:29.818470 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 11 00:32:29.818478 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 11 00:32:29.818488 kernel: Console: colour dummy device 80x25 Sep 11 00:32:29.818495 kernel: printk: legacy console [ttyS0] enabled Sep 11 00:32:29.818503 kernel: ACPI: Core revision 20240827 Sep 11 00:32:29.818511 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 11 00:32:29.818519 kernel: APIC: Switch to symmetric I/O mode setup Sep 11 00:32:29.818527 kernel: x2apic enabled Sep 11 00:32:29.818534 kernel: APIC: Switched APIC routing to: physical x2apic Sep 11 00:32:29.818542 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 11 00:32:29.818550 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 11 00:32:29.818560 kernel: kvm-guest: setup PV IPIs Sep 11 00:32:29.818567 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 11 00:32:29.818575 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 11 00:32:29.818583 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 11 00:32:29.818591 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 11 00:32:29.818599 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 11 00:32:29.818607 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 11 00:32:29.818614 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 11 00:32:29.818630 kernel: Spectre V2 : Mitigation: Retpolines Sep 11 00:32:29.818640 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 11 00:32:29.818648 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 11 00:32:29.818656 kernel: active return thunk: retbleed_return_thunk Sep 11 00:32:29.818664 kernel: RETBleed: Mitigation: untrained return thunk Sep 11 00:32:29.818672 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 11 00:32:29.818680 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 11 00:32:29.818688 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 11 00:32:29.818696 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 11 00:32:29.818706 kernel: active return thunk: srso_return_thunk Sep 11 00:32:29.818714 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 11 00:32:29.818722 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 11 00:32:29.818730 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 11 00:32:29.818738 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 11 00:32:29.818745 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 11 00:32:29.818753 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Sep 11 00:32:29.818761 kernel: Freeing SMP alternatives memory: 32K Sep 11 00:32:29.818769 kernel: pid_max: default: 32768 minimum: 301 Sep 11 00:32:29.818778 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 11 00:32:29.818786 kernel: landlock: Up and running. Sep 11 00:32:29.818794 kernel: SELinux: Initializing. Sep 11 00:32:29.818802 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 11 00:32:29.818810 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 11 00:32:29.818817 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 11 00:32:29.818825 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 11 00:32:29.818833 kernel: ... version: 0 Sep 11 00:32:29.818841 kernel: ... bit width: 48 Sep 11 00:32:29.818850 kernel: ... generic registers: 6 Sep 11 00:32:29.818858 kernel: ... value mask: 0000ffffffffffff Sep 11 00:32:29.818866 kernel: ... max period: 00007fffffffffff Sep 11 00:32:29.818873 kernel: ... fixed-purpose events: 0 Sep 11 00:32:29.818881 kernel: ... event mask: 000000000000003f Sep 11 00:32:29.818889 kernel: signal: max sigframe size: 1776 Sep 11 00:32:29.818897 kernel: rcu: Hierarchical SRCU implementation. Sep 11 00:32:29.818904 kernel: rcu: Max phase no-delay instances is 400. Sep 11 00:32:29.818912 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 11 00:32:29.818922 kernel: smp: Bringing up secondary CPUs ... Sep 11 00:32:29.818930 kernel: smpboot: x86: Booting SMP configuration: Sep 11 00:32:29.818938 kernel: .... 
node #0, CPUs: #1 #2 #3 Sep 11 00:32:29.818946 kernel: smp: Brought up 1 node, 4 CPUs Sep 11 00:32:29.818953 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 11 00:32:29.818962 kernel: Memory: 2424720K/2565800K available (14336K kernel code, 2429K rwdata, 9960K rodata, 53832K init, 1088K bss, 135148K reserved, 0K cma-reserved) Sep 11 00:32:29.818969 kernel: devtmpfs: initialized Sep 11 00:32:29.818977 kernel: x86/mm: Memory block size: 128MB Sep 11 00:32:29.818985 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 11 00:32:29.818995 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 11 00:32:29.819003 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Sep 11 00:32:29.819010 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 11 00:32:29.819018 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Sep 11 00:32:29.819026 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 11 00:32:29.819034 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 11 00:32:29.819042 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 11 00:32:29.819050 kernel: pinctrl core: initialized pinctrl subsystem Sep 11 00:32:29.819057 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 11 00:32:29.819111 kernel: audit: initializing netlink subsys (disabled) Sep 11 00:32:29.819120 kernel: audit: type=2000 audit(1757550747.569:1): state=initialized audit_enabled=0 res=1 Sep 11 00:32:29.819127 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 11 00:32:29.819135 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 11 00:32:29.819143 kernel: cpuidle: using governor menu Sep 11 00:32:29.819151 kernel: acpiphp: ACPI Hot Plug PCI 
Controller Driver version: 0.5 Sep 11 00:32:29.819158 kernel: dca service started, version 1.12.1 Sep 11 00:32:29.819166 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Sep 11 00:32:29.819174 kernel: PCI: Using configuration type 1 for base access Sep 11 00:32:29.819185 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 11 00:32:29.819193 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 11 00:32:29.819201 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 11 00:32:29.819209 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 11 00:32:29.819217 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 11 00:32:29.819224 kernel: ACPI: Added _OSI(Module Device) Sep 11 00:32:29.819232 kernel: ACPI: Added _OSI(Processor Device) Sep 11 00:32:29.819240 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 11 00:32:29.819248 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 11 00:32:29.819257 kernel: ACPI: Interpreter enabled Sep 11 00:32:29.819265 kernel: ACPI: PM: (supports S0 S3 S5) Sep 11 00:32:29.819273 kernel: ACPI: Using IOAPIC for interrupt routing Sep 11 00:32:29.819281 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 11 00:32:29.819289 kernel: PCI: Using E820 reservations for host bridge windows Sep 11 00:32:29.819296 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 11 00:32:29.819304 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 11 00:32:29.819487 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 11 00:32:29.819612 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 11 00:32:29.819739 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 11 00:32:29.819750 kernel: PCI host bridge to bus 0000:00 Sep 11 
00:32:29.819875 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 11 00:32:29.819982 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 11 00:32:29.820109 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 11 00:32:29.820217 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 11 00:32:29.820326 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 11 00:32:29.820430 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 11 00:32:29.820535 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 11 00:32:29.820676 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 11 00:32:29.820804 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Sep 11 00:32:29.820921 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Sep 11 00:32:29.821042 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Sep 11 00:32:29.821206 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 11 00:32:29.821325 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 11 00:32:29.821452 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 11 00:32:29.821569 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Sep 11 00:32:29.821694 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Sep 11 00:32:29.821819 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Sep 11 00:32:29.821977 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 11 00:32:29.822121 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Sep 11 00:32:29.822289 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Sep 11 00:32:29.822414 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Sep 11 
00:32:29.822541 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 11 00:32:29.822671 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Sep 11 00:32:29.822788 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Sep 11 00:32:29.822909 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 11 00:32:29.823033 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Sep 11 00:32:29.823181 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 11 00:32:29.823299 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 11 00:32:29.823425 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 11 00:32:29.823544 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Sep 11 00:32:29.823672 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Sep 11 00:32:29.823812 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 11 00:32:29.823933 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Sep 11 00:32:29.823944 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 11 00:32:29.823952 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 11 00:32:29.823960 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 11 00:32:29.823968 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 11 00:32:29.823975 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 11 00:32:29.823987 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 11 00:32:29.823994 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 11 00:32:29.824002 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 11 00:32:29.824010 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 11 00:32:29.824018 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 11 00:32:29.824025 kernel: ACPI: PCI: 
Interrupt link GSIC configured for IRQ 18 Sep 11 00:32:29.824033 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 11 00:32:29.824041 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 11 00:32:29.824048 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 11 00:32:29.824058 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 11 00:32:29.824081 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 11 00:32:29.824090 kernel: iommu: Default domain type: Translated Sep 11 00:32:29.824097 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 11 00:32:29.824105 kernel: efivars: Registered efivars operations Sep 11 00:32:29.824113 kernel: PCI: Using ACPI for IRQ routing Sep 11 00:32:29.824121 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 11 00:32:29.824128 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 11 00:32:29.824136 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Sep 11 00:32:29.824145 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Sep 11 00:32:29.824158 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Sep 11 00:32:29.824168 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Sep 11 00:32:29.824178 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Sep 11 00:32:29.824186 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Sep 11 00:32:29.824194 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Sep 11 00:32:29.824315 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 11 00:32:29.824432 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 11 00:32:29.824552 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 11 00:32:29.824562 kernel: vgaarb: loaded Sep 11 00:32:29.824570 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 11 00:32:29.824578 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 11 00:32:29.824587 
kernel: clocksource: Switched to clocksource kvm-clock Sep 11 00:32:29.824594 kernel: VFS: Disk quotas dquot_6.6.0 Sep 11 00:32:29.824602 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 11 00:32:29.824610 kernel: pnp: PnP ACPI init Sep 11 00:32:29.824760 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 11 00:32:29.824777 kernel: pnp: PnP ACPI: found 6 devices Sep 11 00:32:29.824785 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 11 00:32:29.824794 kernel: NET: Registered PF_INET protocol family Sep 11 00:32:29.824802 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 11 00:32:29.824810 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 11 00:32:29.824818 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 11 00:32:29.824827 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 11 00:32:29.824835 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 11 00:32:29.824845 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 11 00:32:29.824853 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 11 00:32:29.824861 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 11 00:32:29.824869 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 11 00:32:29.824877 kernel: NET: Registered PF_XDP protocol family Sep 11 00:32:29.824996 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Sep 11 00:32:29.825151 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Sep 11 00:32:29.825263 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 11 00:32:29.825376 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 11 00:32:29.825482 kernel: pci_bus 
0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 11 00:32:29.825587 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Sep 11 00:32:29.825704 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 11 00:32:29.825812 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 11 00:32:29.825822 kernel: PCI: CLS 0 bytes, default 64 Sep 11 00:32:29.825831 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 11 00:32:29.825839 kernel: Initialise system trusted keyrings Sep 11 00:32:29.825851 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 11 00:32:29.825859 kernel: Key type asymmetric registered Sep 11 00:32:29.825867 kernel: Asymmetric key parser 'x509' registered Sep 11 00:32:29.825875 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 11 00:32:29.825883 kernel: io scheduler mq-deadline registered Sep 11 00:32:29.825891 kernel: io scheduler kyber registered Sep 11 00:32:29.825901 kernel: io scheduler bfq registered Sep 11 00:32:29.825910 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 11 00:32:29.825918 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 11 00:32:29.825927 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 11 00:32:29.825935 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 11 00:32:29.825943 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 11 00:32:29.825951 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 11 00:32:29.825960 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 11 00:32:29.825968 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 11 00:32:29.825976 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 11 00:32:29.826137 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 11 00:32:29.826269 kernel: rtc_cmos 00:04: registered as rtc0 Sep 11 00:32:29.826381 kernel: 
rtc_cmos 00:04: setting system clock to 2025-09-11T00:32:29 UTC (1757550749) Sep 11 00:32:29.826490 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 11 00:32:29.826501 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 11 00:32:29.826509 kernel: efifb: probing for efifb Sep 11 00:32:29.826517 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 11 00:32:29.826529 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 11 00:32:29.826537 kernel: efifb: scrolling: redraw Sep 11 00:32:29.826545 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 11 00:32:29.826553 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 11 00:32:29.826561 kernel: Console: switching to colour frame buffer device 160x50 Sep 11 00:32:29.826570 kernel: fb0: EFI VGA frame buffer device Sep 11 00:32:29.826578 kernel: pstore: Using crash dump compression: deflate Sep 11 00:32:29.826586 kernel: pstore: Registered efi_pstore as persistent store backend Sep 11 00:32:29.826594 kernel: NET: Registered PF_INET6 protocol family Sep 11 00:32:29.826604 kernel: Segment Routing with IPv6 Sep 11 00:32:29.826612 kernel: In-situ OAM (IOAM) with IPv6 Sep 11 00:32:29.826629 kernel: NET: Registered PF_PACKET protocol family Sep 11 00:32:29.826638 kernel: Key type dns_resolver registered Sep 11 00:32:29.826646 kernel: IPI shorthand broadcast: enabled Sep 11 00:32:29.826654 kernel: sched_clock: Marking stable (2841002239, 192963127)->(3063695774, -29730408) Sep 11 00:32:29.826663 kernel: registered taskstats version 1 Sep 11 00:32:29.826671 kernel: Loading compiled-in X.509 certificates Sep 11 00:32:29.826679 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 8138ce5002a1b572fd22b23ac238f29bab3f249f' Sep 11 00:32:29.826687 kernel: Demotion targets for Node 0: null Sep 11 00:32:29.826697 kernel: Key type .fscrypt registered Sep 11 00:32:29.826705 kernel: Key 
type fscrypt-provisioning registered Sep 11 00:32:29.826713 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 11 00:32:29.826721 kernel: ima: Allocated hash algorithm: sha1 Sep 11 00:32:29.826729 kernel: ima: No architecture policies found Sep 11 00:32:29.826738 kernel: clk: Disabling unused clocks Sep 11 00:32:29.826745 kernel: Warning: unable to open an initial console. Sep 11 00:32:29.826754 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 11 00:32:29.826765 kernel: Write protecting the kernel read-only data: 24576k Sep 11 00:32:29.826773 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 11 00:32:29.826781 kernel: Run /init as init process Sep 11 00:32:29.826789 kernel: with arguments: Sep 11 00:32:29.826797 kernel: /init Sep 11 00:32:29.826805 kernel: with environment: Sep 11 00:32:29.826812 kernel: HOME=/ Sep 11 00:32:29.826820 kernel: TERM=linux Sep 11 00:32:29.826828 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 11 00:32:29.826837 systemd[1]: Successfully made /usr/ read-only. Sep 11 00:32:29.826869 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 00:32:29.826879 systemd[1]: Detected virtualization kvm. Sep 11 00:32:29.826887 systemd[1]: Detected architecture x86-64. Sep 11 00:32:29.826896 systemd[1]: Running in initrd. Sep 11 00:32:29.826904 systemd[1]: No hostname configured, using default hostname. Sep 11 00:32:29.826913 systemd[1]: Hostname set to . Sep 11 00:32:29.826924 systemd[1]: Initializing machine ID from VM UUID. Sep 11 00:32:29.826933 systemd[1]: Queued start job for default target initrd.target. 
Sep 11 00:32:29.826941 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:32:29.826952 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:32:29.826961 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 11 00:32:29.826970 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 11 00:32:29.826978 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 11 00:32:29.826988 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 11 00:32:29.827000 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 11 00:32:29.827009 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 11 00:32:29.827019 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:32:29.827028 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:32:29.827038 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:32:29.827047 systemd[1]: Reached target slices.target - Slice Units. Sep 11 00:32:29.827055 systemd[1]: Reached target swap.target - Swaps. Sep 11 00:32:29.827064 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:32:29.827089 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 00:32:29.827097 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 11 00:32:29.827106 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 11 00:32:29.827114 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Sep 11 00:32:29.827123 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:32:29.827131 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 11 00:32:29.827140 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:32:29.827149 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:32:29.827160 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 11 00:32:29.827169 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 11 00:32:29.827177 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 11 00:32:29.827187 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 11 00:32:29.827195 systemd[1]: Starting systemd-fsck-usr.service... Sep 11 00:32:29.827204 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 11 00:32:29.827212 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 11 00:32:29.827221 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:32:29.827231 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:32:29.827240 systemd[1]: Finished systemd-fsck-usr.service. Sep 11 00:32:29.827249 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 11 00:32:29.827258 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 11 00:32:29.827287 systemd-journald[220]: Collecting audit messages is disabled. Sep 11 00:32:29.827309 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 11 00:32:29.827317 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 11 00:32:29.827326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:32:29.827335 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 11 00:32:29.827346 systemd-journald[220]: Journal started Sep 11 00:32:29.827365 systemd-journald[220]: Runtime Journal (/run/log/journal/326e01970db549d5bb7ec15ffe5ebe68) is 6M, max 48.5M, 42.4M free. Sep 11 00:32:29.812529 systemd-modules-load[221]: Inserted module 'overlay' Sep 11 00:32:29.831113 systemd[1]: Started systemd-journald.service - Journal Service. Sep 11 00:32:29.836172 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:32:29.842924 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:32:29.848179 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 11 00:32:29.849733 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 11 00:32:29.850651 kernel: Bridge firewalling registered Sep 11 00:32:29.850018 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 11 00:32:29.850918 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 00:32:29.853667 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 11 00:32:29.855878 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:32:29.859824 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 11 00:32:29.861818 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:32:29.881337 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:32:29.882994 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 11 00:32:29.891642 dracut-cmdline[257]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a Sep 11 00:32:29.931459 systemd-resolved[269]: Positive Trust Anchors: Sep 11 00:32:29.931474 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:32:29.931505 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:32:29.933981 systemd-resolved[269]: Defaulting to hostname 'linux'. Sep 11 00:32:29.934989 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:32:29.940839 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:32:30.009106 kernel: SCSI subsystem initialized Sep 11 00:32:30.020095 kernel: Loading iSCSI transport class v2.0-870. Sep 11 00:32:30.031099 kernel: iscsi: registered transport (tcp) Sep 11 00:32:30.052346 kernel: iscsi: registered transport (qla4xxx) Sep 11 00:32:30.052381 kernel: QLogic iSCSI HBA Driver Sep 11 00:32:30.073253 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 11 00:32:30.090472 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:32:30.094106 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 00:32:30.148802 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 11 00:32:30.152330 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 11 00:32:30.214101 kernel: raid6: avx2x4 gen() 30298 MB/s Sep 11 00:32:30.231100 kernel: raid6: avx2x2 gen() 30722 MB/s Sep 11 00:32:30.248137 kernel: raid6: avx2x1 gen() 25727 MB/s Sep 11 00:32:30.248156 kernel: raid6: using algorithm avx2x2 gen() 30722 MB/s Sep 11 00:32:30.266154 kernel: raid6: .... xor() 19765 MB/s, rmw enabled Sep 11 00:32:30.266176 kernel: raid6: using avx2x2 recovery algorithm Sep 11 00:32:30.286106 kernel: xor: automatically using best checksumming function avx Sep 11 00:32:30.452102 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 11 00:32:30.460255 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 11 00:32:30.463882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:32:30.495507 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 11 00:32:30.500748 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:32:30.504716 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 11 00:32:30.536617 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation Sep 11 00:32:30.566024 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 11 00:32:30.568770 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 11 00:32:30.638670 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:32:30.641364 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 11 00:32:30.676102 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 11 00:32:30.677230 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 11 00:32:30.682561 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 11 00:32:30.682591 kernel: GPT:9289727 != 19775487 Sep 11 00:32:30.682616 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 11 00:32:30.682628 kernel: GPT:9289727 != 19775487 Sep 11 00:32:30.683511 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 11 00:32:30.683535 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:32:30.694100 kernel: cryptd: max_cpu_qlen set to 1000 Sep 11 00:32:30.706116 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 11 00:32:30.710112 kernel: libata version 3.00 loaded. Sep 11 00:32:30.715099 kernel: ahci 0000:00:1f.2: version 3.0 Sep 11 00:32:30.716891 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:32:30.721941 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 11 00:32:30.721972 kernel: AES CTR mode by8 optimization enabled Sep 11 00:32:30.721994 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 11 00:32:30.722351 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 11 00:32:30.723753 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 11 00:32:30.717193 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:32:30.721640 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:32:30.728206 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 11 00:32:30.749262 kernel: scsi host0: ahci Sep 11 00:32:30.760116 kernel: scsi host1: ahci Sep 11 00:32:30.762096 kernel: scsi host2: ahci Sep 11 00:32:30.762261 kernel: scsi host3: ahci Sep 11 00:32:30.763117 kernel: scsi host4: ahci Sep 11 00:32:30.764158 kernel: scsi host5: ahci Sep 11 00:32:30.766499 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 lpm-pol 1 Sep 11 00:32:30.766521 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 lpm-pol 1 Sep 11 00:32:30.766537 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 lpm-pol 1 Sep 11 00:32:30.768669 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 lpm-pol 1 Sep 11 00:32:30.768686 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 lpm-pol 1 Sep 11 00:32:30.770777 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 lpm-pol 1 Sep 11 00:32:30.782312 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 11 00:32:30.792284 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 11 00:32:30.795395 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 11 00:32:30.806612 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 11 00:32:30.817140 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 11 00:32:30.819756 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 11 00:32:30.819828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:32:30.819880 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 11 00:32:30.825167 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:32:30.827885 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:32:30.828159 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 11 00:32:30.846481 disk-uuid[636]: Primary Header is updated. Sep 11 00:32:30.846481 disk-uuid[636]: Secondary Entries is updated. Sep 11 00:32:30.846481 disk-uuid[636]: Secondary Header is updated. Sep 11 00:32:30.852109 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:32:30.852989 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:32:30.857100 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:32:31.081975 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 11 00:32:31.082030 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 11 00:32:31.082042 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 11 00:32:31.082052 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 11 00:32:31.083103 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 11 00:32:31.084091 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 11 00:32:31.085323 kernel: ata3.00: LPM support broken, forcing max_power Sep 11 00:32:31.085335 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 11 00:32:31.085346 kernel: ata3.00: applying bridge limits Sep 11 00:32:31.086421 kernel: ata3.00: LPM support broken, forcing max_power Sep 11 00:32:31.086433 kernel: ata3.00: configured for UDMA/100 Sep 11 00:32:31.088095 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 11 00:32:31.136570 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 11 00:32:31.136783 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 11 00:32:31.151099 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 11 00:32:31.476369 systemd[1]: Finished dracut-initqueue.service - 
dracut initqueue hook. Sep 11 00:32:31.478005 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 00:32:31.479644 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:32:31.480810 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 00:32:31.483735 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 11 00:32:31.523112 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 11 00:32:31.858110 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:32:31.858177 disk-uuid[639]: The operation has completed successfully. Sep 11 00:32:31.884366 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 11 00:32:31.884486 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 11 00:32:31.924906 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 11 00:32:31.953121 sh[670]: Success Sep 11 00:32:31.973835 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 11 00:32:31.973880 kernel: device-mapper: uevent: version 1.0.3 Sep 11 00:32:31.973895 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 11 00:32:31.983090 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 11 00:32:32.014383 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 11 00:32:32.017326 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 11 00:32:32.040227 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 11 00:32:32.049098 kernel: BTRFS: device fsid f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (683) Sep 11 00:32:32.049127 kernel: BTRFS info (device dm-0): first mount of filesystem f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 Sep 11 00:32:32.051093 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:32:32.055683 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 11 00:32:32.055700 kernel: BTRFS info (device dm-0): enabling free space tree Sep 11 00:32:32.057039 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 11 00:32:32.059331 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 11 00:32:32.059506 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 11 00:32:32.060335 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 11 00:32:32.061120 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 11 00:32:32.086664 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (715) Sep 11 00:32:32.086701 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:32:32.086719 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:32:32.090097 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:32:32.090124 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:32:32.095135 kernel: BTRFS info (device vda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:32:32.095360 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 11 00:32:32.097708 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 11 00:32:32.178968 ignition[759]: Ignition 2.21.0 Sep 11 00:32:32.178980 ignition[759]: Stage: fetch-offline Sep 11 00:32:32.179013 ignition[759]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:32:32.179022 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:32:32.179119 ignition[759]: parsed url from cmdline: "" Sep 11 00:32:32.179122 ignition[759]: no config URL provided Sep 11 00:32:32.179128 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Sep 11 00:32:32.179136 ignition[759]: no config at "/usr/lib/ignition/user.ign" Sep 11 00:32:32.179159 ignition[759]: op(1): [started] loading QEMU firmware config module Sep 11 00:32:32.179164 ignition[759]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 11 00:32:32.189285 ignition[759]: op(1): [finished] loading QEMU firmware config module Sep 11 00:32:32.199295 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:32:32.202329 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:32:32.232184 ignition[759]: parsing config with SHA512: 12ce6d6794b1419dfb9c5f6b10a6fbe7f5e2984792dc3b8004c56d5baa29061944a41cde094c0064eb67adf71e6947189aa1126daa8d09841a518f5c663aa7ab Sep 11 00:32:32.235517 unknown[759]: fetched base config from "system" Sep 11 00:32:32.235529 unknown[759]: fetched user config from "qemu" Sep 11 00:32:32.235978 ignition[759]: fetch-offline: fetch-offline passed Sep 11 00:32:32.236035 ignition[759]: Ignition finished successfully Sep 11 00:32:32.238388 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:32:32.240822 systemd-networkd[860]: lo: Link UP Sep 11 00:32:32.240825 systemd-networkd[860]: lo: Gained carrier Sep 11 00:32:32.242293 systemd-networkd[860]: Enumeration completed Sep 11 00:32:32.242396 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 11 00:32:32.242643 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:32:32.242648 systemd-networkd[860]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 00:32:32.243822 systemd-networkd[860]: eth0: Link UP Sep 11 00:32:32.243955 systemd-networkd[860]: eth0: Gained carrier Sep 11 00:32:32.243964 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:32:32.244427 systemd[1]: Reached target network.target - Network. Sep 11 00:32:32.245291 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 11 00:32:32.246115 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 11 00:32:32.268133 systemd-networkd[860]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 11 00:32:32.277390 ignition[864]: Ignition 2.21.0 Sep 11 00:32:32.277401 ignition[864]: Stage: kargs Sep 11 00:32:32.277595 ignition[864]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:32:32.277606 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:32:32.278865 ignition[864]: kargs: kargs passed Sep 11 00:32:32.278933 ignition[864]: Ignition finished successfully Sep 11 00:32:32.282955 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 11 00:32:32.285943 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 11 00:32:32.320942 ignition[873]: Ignition 2.21.0 Sep 11 00:32:32.320956 ignition[873]: Stage: disks Sep 11 00:32:32.321112 ignition[873]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:32:32.321123 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:32:32.322563 ignition[873]: disks: disks passed Sep 11 00:32:32.325600 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 11 00:32:32.322613 ignition[873]: Ignition finished successfully Sep 11 00:32:32.326258 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 11 00:32:32.327851 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 11 00:32:32.328326 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:32:32.328653 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:32:32.328963 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:32:32.330353 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 11 00:32:32.359293 systemd-fsck[883]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 11 00:32:32.367952 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 11 00:32:32.370577 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 11 00:32:32.477100 kernel: EXT4-fs (vda9): mounted filesystem 6a9ce0af-81d0-4628-9791-e47488ed2744 r/w with ordered data mode. Quota mode: none. Sep 11 00:32:32.477346 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 11 00:32:32.477962 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 11 00:32:32.481477 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 00:32:32.483040 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 11 00:32:32.484215 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 11 00:32:32.484255 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 11 00:32:32.484276 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:32:32.502287 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 11 00:32:32.503970 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 11 00:32:32.508793 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (892) Sep 11 00:32:32.508822 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:32:32.508837 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:32:32.512539 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:32:32.512577 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:32:32.514256 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 11 00:32:32.541257 initrd-setup-root[917]: cut: /sysroot/etc/passwd: No such file or directory Sep 11 00:32:32.546519 initrd-setup-root[924]: cut: /sysroot/etc/group: No such file or directory Sep 11 00:32:32.550889 initrd-setup-root[931]: cut: /sysroot/etc/shadow: No such file or directory Sep 11 00:32:32.554613 initrd-setup-root[938]: cut: /sysroot/etc/gshadow: No such file or directory Sep 11 00:32:32.641348 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 11 00:32:32.644158 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 11 00:32:32.645009 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 11 00:32:32.670101 kernel: BTRFS info (device vda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:32:32.682189 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 11 00:32:32.695584 ignition[1007]: INFO : Ignition 2.21.0
Sep 11 00:32:32.695584 ignition[1007]: INFO : Stage: mount
Sep 11 00:32:32.697218 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:32:32.697218 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:32:32.699316 ignition[1007]: INFO : mount: mount passed
Sep 11 00:32:32.699316 ignition[1007]: INFO : Ignition finished successfully
Sep 11 00:32:32.703192 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 11 00:32:32.706295 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 11 00:32:33.048908 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 11 00:32:33.051416 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 00:32:33.073957 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1019)
Sep 11 00:32:33.074017 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:32:33.074043 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:32:33.078106 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 00:32:33.078166 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 00:32:33.080169 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 00:32:33.117578 ignition[1036]: INFO : Ignition 2.21.0
Sep 11 00:32:33.117578 ignition[1036]: INFO : Stage: files
Sep 11 00:32:33.119416 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:32:33.119416 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:32:33.122442 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping
Sep 11 00:32:33.123771 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 11 00:32:33.123771 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 11 00:32:33.128578 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 11 00:32:33.129971 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 11 00:32:33.131533 unknown[1036]: wrote ssh authorized keys file for user: core
Sep 11 00:32:33.132761 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 11 00:32:33.134265 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 11 00:32:33.134265 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 11 00:32:33.200329 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 11 00:32:33.408281 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 11 00:32:33.408281 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 11 00:32:33.412031 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 11 00:32:33.412031 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:32:33.412031 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:32:33.412031 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:32:33.412031 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:32:33.412031 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:32:33.412031 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:32:33.424723 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:32:33.424723 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:32:33.424723 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 11 00:32:33.430389 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 11 00:32:33.430389 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 11 00:32:33.430389 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 11 00:32:33.666278 systemd-networkd[860]: eth0: Gained IPv6LL
Sep 11 00:32:33.758994 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 11 00:32:34.158478 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 11 00:32:34.158478 ignition[1036]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 11 00:32:34.163134 ignition[1036]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:32:34.168230 ignition[1036]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:32:34.168230 ignition[1036]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 11 00:32:34.168230 ignition[1036]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 11 00:32:34.173433 ignition[1036]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:32:34.173433 ignition[1036]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:32:34.173433 ignition[1036]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 11 00:32:34.173433 ignition[1036]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:32:34.197173 ignition[1036]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:32:34.201214 ignition[1036]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:32:34.203334 ignition[1036]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:32:34.203334 ignition[1036]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 11 00:32:34.206749 ignition[1036]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 11 00:32:34.208373 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:32:34.208373 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:32:34.208373 ignition[1036]: INFO : files: files passed
Sep 11 00:32:34.208373 ignition[1036]: INFO : Ignition finished successfully
Sep 11 00:32:34.215344 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 11 00:32:34.218534 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 11 00:32:34.222474 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 11 00:32:34.242004 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 11 00:32:34.242261 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 11 00:32:34.246289 initrd-setup-root-after-ignition[1066]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 11 00:32:34.249827 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:32:34.251839 initrd-setup-root-after-ignition[1068]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:32:34.253691 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:32:34.256287 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:32:34.260013 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 11 00:32:34.263570 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 11 00:32:34.354882 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 11 00:32:34.355063 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 11 00:32:34.358342 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 11 00:32:34.360541 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 11 00:32:34.360687 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 11 00:32:34.361860 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 11 00:32:34.391660 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:32:34.396950 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 11 00:32:34.417567 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:32:34.420418 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:32:34.423267 systemd[1]: Stopped target timers.target - Timer Units.
Sep 11 00:32:34.423439 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 11 00:32:34.423595 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:32:34.429549 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 11 00:32:34.430915 systemd[1]: Stopped target basic.target - Basic System.
Sep 11 00:32:34.432117 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 11 00:32:34.432696 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:32:34.433095 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 11 00:32:34.439457 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:32:34.442417 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 11 00:32:34.442614 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:32:34.442963 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 11 00:32:34.443431 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 11 00:32:34.443745 systemd[1]: Stopped target swap.target - Swaps.
Sep 11 00:32:34.444031 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 11 00:32:34.444219 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:32:34.444871 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:32:34.445233 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:32:34.457925 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 11 00:32:34.459112 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:32:34.459279 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 11 00:32:34.459476 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:32:34.462436 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 11 00:32:34.462626 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:32:34.465150 systemd[1]: Stopped target paths.target - Path Units.
Sep 11 00:32:34.467256 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 11 00:32:34.473221 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:32:34.473515 systemd[1]: Stopped target slices.target - Slice Units.
Sep 11 00:32:34.476951 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 11 00:32:34.478910 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 11 00:32:34.479050 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:32:34.480897 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 11 00:32:34.481019 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:32:34.482906 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 11 00:32:34.483097 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:32:34.485206 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 11 00:32:34.485356 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 11 00:32:34.488401 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 11 00:32:34.489435 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 11 00:32:34.489601 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:32:34.493810 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 11 00:32:34.495099 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 11 00:32:34.495439 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:32:34.503189 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 11 00:32:34.503328 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:32:34.510285 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 11 00:32:34.510401 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 11 00:32:34.526136 ignition[1092]: INFO : Ignition 2.21.0
Sep 11 00:32:34.526136 ignition[1092]: INFO : Stage: umount
Sep 11 00:32:34.528157 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:32:34.528157 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:32:34.528157 ignition[1092]: INFO : umount: umount passed
Sep 11 00:32:34.528157 ignition[1092]: INFO : Ignition finished successfully
Sep 11 00:32:34.531989 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 11 00:32:34.534704 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 11 00:32:34.534831 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 11 00:32:34.536108 systemd[1]: Stopped target network.target - Network.
Sep 11 00:32:34.538888 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 11 00:32:34.538949 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 11 00:32:34.540961 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 11 00:32:34.541009 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 11 00:32:34.543139 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 11 00:32:34.543195 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 11 00:32:34.545660 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 11 00:32:34.545711 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 11 00:32:34.546882 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 11 00:32:34.549661 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 11 00:32:34.555581 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 11 00:32:34.555760 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 11 00:32:34.560162 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 11 00:32:34.560475 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 11 00:32:34.560541 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:32:34.564409 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:32:34.568139 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 11 00:32:34.568287 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 11 00:32:34.571667 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 11 00:32:34.571900 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 11 00:32:34.575245 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 11 00:32:34.575301 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:32:34.578924 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 11 00:32:34.579005 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 11 00:32:34.579065 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:32:34.582140 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 11 00:32:34.582211 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:32:34.587555 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 11 00:32:34.587631 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:32:34.590776 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:32:34.593351 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 11 00:32:34.611371 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 11 00:32:34.611562 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 11 00:32:34.613065 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 11 00:32:34.613320 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:32:34.618643 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 11 00:32:34.618718 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:32:34.620789 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 11 00:32:34.620834 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:32:34.622927 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 11 00:32:34.622986 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:32:34.624261 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 11 00:32:34.624316 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:32:34.627809 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 11 00:32:34.627883 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:32:34.632118 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 11 00:32:34.634017 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 11 00:32:34.634097 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:32:34.639798 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 11 00:32:34.640891 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:32:34.643391 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:32:34.643470 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:32:34.662765 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 11 00:32:34.662903 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 11 00:32:34.745371 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 11 00:32:34.745567 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 11 00:32:34.747210 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 11 00:32:34.750165 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 11 00:32:34.750267 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 11 00:32:34.754205 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 11 00:32:34.785237 systemd[1]: Switching root.
Sep 11 00:32:34.831372 systemd-journald[220]: Journal stopped
Sep 11 00:32:36.045410 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 11 00:32:36.045478 kernel: SELinux: policy capability network_peer_controls=1
Sep 11 00:32:36.045495 kernel: SELinux: policy capability open_perms=1
Sep 11 00:32:36.045511 kernel: SELinux: policy capability extended_socket_class=1
Sep 11 00:32:36.045522 kernel: SELinux: policy capability always_check_network=0
Sep 11 00:32:36.045533 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 11 00:32:36.045544 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 11 00:32:36.045556 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 11 00:32:36.045567 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 11 00:32:36.045578 kernel: SELinux: policy capability userspace_initial_context=0
Sep 11 00:32:36.045593 kernel: audit: type=1403 audit(1757550755.297:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 11 00:32:36.045607 systemd[1]: Successfully loaded SELinux policy in 51.558ms.
Sep 11 00:32:36.045629 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.228ms.
Sep 11 00:32:36.045642 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:32:36.045655 systemd[1]: Detected virtualization kvm.
Sep 11 00:32:36.045666 systemd[1]: Detected architecture x86-64.
Sep 11 00:32:36.045682 systemd[1]: Detected first boot.
Sep 11 00:32:36.045694 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 00:32:36.045706 zram_generator::config[1138]: No configuration found.
Sep 11 00:32:36.045723 kernel: Guest personality initialized and is inactive
Sep 11 00:32:36.045734 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 11 00:32:36.045745 kernel: Initialized host personality
Sep 11 00:32:36.045756 kernel: NET: Registered PF_VSOCK protocol family
Sep 11 00:32:36.045768 systemd[1]: Populated /etc with preset unit settings.
Sep 11 00:32:36.045780 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 11 00:32:36.045792 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 11 00:32:36.045804 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 11 00:32:36.045816 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 11 00:32:36.045830 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 11 00:32:36.045842 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 11 00:32:36.045854 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 11 00:32:36.045866 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 11 00:32:36.045878 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 11 00:32:36.045890 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 11 00:32:36.045902 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 11 00:32:36.045914 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 11 00:32:36.045925 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:32:36.045939 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:32:36.045951 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 11 00:32:36.045962 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 11 00:32:36.045980 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 11 00:32:36.045994 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:32:36.046006 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 11 00:32:36.046018 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:32:36.046032 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:32:36.046044 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 11 00:32:36.046056 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 11 00:32:36.046093 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:32:36.046115 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 11 00:32:36.046131 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:32:36.046144 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:32:36.046156 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:32:36.046169 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:32:36.046180 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 11 00:32:36.046196 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 11 00:32:36.046208 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 11 00:32:36.046220 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:32:36.046232 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:32:36.046244 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:32:36.046255 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 11 00:32:36.046267 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 11 00:32:36.046281 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 11 00:32:36.046292 systemd[1]: Mounting media.mount - External Media Directory...
Sep 11 00:32:36.046307 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:32:36.046318 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 11 00:32:36.046330 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 11 00:32:36.046342 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 11 00:32:36.046355 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 11 00:32:36.046367 systemd[1]: Reached target machines.target - Containers.
Sep 11 00:32:36.046379 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 11 00:32:36.046391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:32:36.046405 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:32:36.046417 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 11 00:32:36.046428 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:32:36.046449 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:32:36.046461 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:32:36.046474 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 11 00:32:36.046486 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:32:36.046503 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 11 00:32:36.046521 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 11 00:32:36.046534 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 11 00:32:36.046545 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 11 00:32:36.046558 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 11 00:32:36.046571 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:32:36.046583 kernel: fuse: init (API version 7.41)
Sep 11 00:32:36.046595 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:32:36.046607 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:32:36.046620 kernel: loop: module loaded
Sep 11 00:32:36.046632 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:32:36.046643 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 11 00:32:36.046655 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 11 00:32:36.046666 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:32:36.046678 kernel: ACPI: bus type drm_connector registered
Sep 11 00:32:36.046691 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 11 00:32:36.046702 systemd[1]: Stopped verity-setup.service.
Sep 11 00:32:36.046714 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:32:36.046748 systemd-journald[1216]: Collecting audit messages is disabled.
Sep 11 00:32:36.046773 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 11 00:32:36.046785 systemd-journald[1216]: Journal started
Sep 11 00:32:36.046810 systemd-journald[1216]: Runtime Journal (/run/log/journal/326e01970db549d5bb7ec15ffe5ebe68) is 6M, max 48.5M, 42.4M free.
Sep 11 00:32:36.054196 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 11 00:32:36.054259 systemd[1]: Mounted media.mount - External Media Directory.
Sep 11 00:32:36.054281 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 11 00:32:36.054297 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 11 00:32:35.822347 systemd[1]: Queued start job for default target multi-user.target.
Sep 11 00:32:35.842915 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 11 00:32:35.843395 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 11 00:32:36.057122 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:32:36.058721 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 11 00:32:36.059990 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 11 00:32:36.061459 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:32:36.062952 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 11 00:32:36.063183 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 11 00:32:36.064630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:32:36.064841 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:32:36.066241 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 00:32:36.066452 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 00:32:36.067766 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:32:36.067967 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:32:36.069435 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 11 00:32:36.069648 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 11 00:32:36.070973 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:32:36.071189 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:32:36.072562 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:32:36.073961 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:32:36.075492 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 11 00:32:36.077023 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 11 00:32:36.091796 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:32:36.094340 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 11 00:32:36.096424 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 11 00:32:36.097533 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 11 00:32:36.097563 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 00:32:36.099528 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 11 00:32:36.111341 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 11 00:32:36.113250 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:32:36.115259 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 11 00:32:36.120466 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 11 00:32:36.121756 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 00:32:36.123056 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 11 00:32:36.124285 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 00:32:36.126242 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:32:36.128354 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 11 00:32:36.133600 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 11 00:32:36.136427 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 11 00:32:36.137712 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 11 00:32:36.143188 systemd-journald[1216]: Time spent on flushing to /var/log/journal/326e01970db549d5bb7ec15ffe5ebe68 is 22.373ms for 1071 entries.
Sep 11 00:32:36.143188 systemd-journald[1216]: System Journal (/var/log/journal/326e01970db549d5bb7ec15ffe5ebe68) is 8M, max 195.6M, 187.6M free.
Sep 11 00:32:36.186648 systemd-journald[1216]: Received client request to flush runtime journal.
Sep 11 00:32:36.186705 kernel: loop0: detected capacity change from 0 to 146240
Sep 11 00:32:36.186729 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 11 00:32:36.148652 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:32:36.150558 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 11 00:32:36.152780 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 11 00:32:36.159228 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 11 00:32:36.171307 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:32:36.188680 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 11 00:32:36.197332 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 11 00:32:36.200948 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 11 00:32:36.206669 kernel: loop1: detected capacity change from 0 to 221472
Sep 11 00:32:36.205118 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:32:36.236007 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Sep 11 00:32:36.236026 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Sep 11 00:32:36.242025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:32:36.242119 kernel: loop2: detected capacity change from 0 to 113872
Sep 11 00:32:36.270099 kernel: loop3: detected capacity change from 0 to 146240
Sep 11 00:32:36.281105 kernel: loop4: detected capacity change from 0 to 221472
Sep 11 00:32:36.290095 kernel: loop5: detected capacity change from 0 to 113872
Sep 11 00:32:36.299309 (sd-merge)[1280]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 11 00:32:36.300256 (sd-merge)[1280]: Merged extensions into '/usr'.
Sep 11 00:32:36.305194 systemd[1]: Reload requested from client PID 1257 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 11 00:32:36.305210 systemd[1]: Reloading...
Sep 11 00:32:36.380173 zram_generator::config[1309]: No configuration found.
Sep 11 00:32:36.456296 ldconfig[1252]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 11 00:32:36.475334 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:32:36.555773 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 11 00:32:36.555848 systemd[1]: Reloading finished in 250 ms.
Sep 11 00:32:36.584557 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 11 00:32:36.586227 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 11 00:32:36.609691 systemd[1]: Starting ensure-sysext.service...
Sep 11 00:32:36.611771 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 00:32:36.626436 systemd[1]: Reload requested from client PID 1343 ('systemctl') (unit ensure-sysext.service)...
Sep 11 00:32:36.626450 systemd[1]: Reloading...
Sep 11 00:32:36.634283 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 11 00:32:36.634333 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 11 00:32:36.634721 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 11 00:32:36.635042 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 11 00:32:36.636232 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 11 00:32:36.636603 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Sep 11 00:32:36.636698 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Sep 11 00:32:36.641696 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 00:32:36.641779 systemd-tmpfiles[1344]: Skipping /boot
Sep 11 00:32:36.654499 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 00:32:36.654512 systemd-tmpfiles[1344]: Skipping /boot
Sep 11 00:32:36.680105 zram_generator::config[1371]: No configuration found.
Sep 11 00:32:36.769892 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:32:36.854786 systemd[1]: Reloading finished in 227 ms.
Sep 11 00:32:36.878823 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 11 00:32:36.899173 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:32:36.909229 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 00:32:36.912169 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 11 00:32:36.924395 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 11 00:32:36.929475 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 00:32:36.933512 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:32:36.936549 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 11 00:32:36.940400 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:32:36.940582 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:32:36.942248 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:32:36.946976 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:32:36.951532 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:32:36.952764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:32:36.952878 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:32:36.952974 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:32:36.954696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:32:36.956011 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:32:36.958322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:32:36.958606 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:32:36.961973 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:32:36.963767 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:32:36.975469 systemd-udevd[1415]: Using default interface naming scheme 'v255'.
Sep 11 00:32:36.975871 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 11 00:32:36.979942 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 11 00:32:36.985308 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:32:36.985669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:32:36.987589 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:32:36.992388 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:32:36.995835 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:32:36.998328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:32:36.998457 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:32:36.999399 augenrules[1446]: No rules
Sep 11 00:32:37.004766 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 11 00:32:37.008440 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 11 00:32:37.009576 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:32:37.011419 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:32:37.013346 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 00:32:37.014534 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 00:32:37.016132 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 11 00:32:37.017927 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:32:37.018174 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:32:37.019990 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:32:37.020238 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:32:37.022060 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:32:37.022306 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:32:37.024034 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 11 00:32:37.039247 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:32:37.042199 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 00:32:37.043607 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:32:37.049264 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:32:37.059285 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:32:37.063315 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:32:37.072265 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:32:37.072516 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:32:37.072555 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:32:37.078427 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 00:32:37.080202 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 11 00:32:37.084011 augenrules[1481]: /sbin/augenrules: No change
Sep 11 00:32:37.080241 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:32:37.081423 systemd[1]: Finished ensure-sysext.service.
Sep 11 00:32:37.083556 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:32:37.083793 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:32:37.089852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:32:37.090105 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:32:37.091774 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:32:37.092141 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:32:37.094982 augenrules[1508]: No rules
Sep 11 00:32:37.094695 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 00:32:37.094911 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 00:32:37.096503 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 00:32:37.096757 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 00:32:37.116211 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 11 00:32:37.118239 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 00:32:37.118311 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 00:32:37.120806 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 11 00:32:37.178987 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 11 00:32:37.216107 kernel: mousedev: PS/2 mouse device common for all mice
Sep 11 00:32:37.228987 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 00:32:37.232006 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 11 00:32:37.237769 systemd-networkd[1499]: lo: Link UP
Sep 11 00:32:37.237777 systemd-networkd[1499]: lo: Gained carrier
Sep 11 00:32:37.239376 systemd-networkd[1499]: Enumeration completed
Sep 11 00:32:37.239468 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 00:32:37.241695 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:32:37.241705 systemd-networkd[1499]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 00:32:37.242254 systemd-networkd[1499]: eth0: Link UP
Sep 11 00:32:37.242431 systemd-networkd[1499]: eth0: Gained carrier
Sep 11 00:32:37.242450 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:32:37.243521 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 11 00:32:37.247386 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 11 00:32:37.250635 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 11 00:32:37.251928 systemd[1]: Reached target time-set.target - System Time Set.
Sep 11 00:32:37.258130 systemd-networkd[1499]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 11 00:32:37.259187 systemd-timesyncd[1523]: Network configuration changed, trying to establish connection.
Sep 11 00:32:38.223545 systemd-timesyncd[1523]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 11 00:32:38.223592 systemd-timesyncd[1523]: Initial clock synchronization to Thu 2025-09-11 00:32:38.223468 UTC.
Sep 11 00:32:38.232983 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 11 00:32:38.238344 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 11 00:32:38.240892 systemd-resolved[1413]: Positive Trust Anchors:
Sep 11 00:32:38.240912 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 00:32:38.240946 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 00:32:38.241224 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 11 00:32:38.246875 systemd-resolved[1413]: Defaulting to hostname 'linux'.
Sep 11 00:32:38.249152 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 00:32:38.250429 systemd[1]: Reached target network.target - Network.
Sep 11 00:32:38.251404 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:32:38.253474 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 00:32:38.256538 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 11 00:32:38.257832 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 11 00:32:38.259085 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 11 00:32:38.260450 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 11 00:32:38.263577 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 11 00:32:38.264868 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 11 00:32:38.266388 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 11 00:32:38.266426 systemd[1]: Reached target paths.target - Path Units.
Sep 11 00:32:38.267361 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 00:32:38.269287 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 11 00:32:38.270347 kernel: ACPI: button: Power Button [PWRF]
Sep 11 00:32:38.273170 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 11 00:32:38.279489 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 11 00:32:38.279755 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 11 00:32:38.281857 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 11 00:32:38.281765 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 11 00:32:38.283457 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 11 00:32:38.284704 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 11 00:32:38.296407 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 11 00:32:38.298040 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 11 00:32:38.299964 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 11 00:32:38.307706 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 00:32:38.308746 systemd[1]: Reached target basic.target - Basic System.
Sep 11 00:32:38.309752 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 11 00:32:38.309781 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 11 00:32:38.313154 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 11 00:32:38.316559 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 11 00:32:38.319221 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 11 00:32:38.334137 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 11 00:32:38.340665 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 11 00:32:38.341786 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 11 00:32:38.344340 jq[1563]: false
Sep 11 00:32:38.345764 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 11 00:32:38.353692 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 11 00:32:38.361575 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing passwd entry cache
Sep 11 00:32:38.361878 oslogin_cache_refresh[1565]: Refreshing passwd entry cache
Sep 11 00:32:38.364822 extend-filesystems[1564]: Found /dev/vda6
Sep 11 00:32:38.375707 extend-filesystems[1564]: Found /dev/vda9
Sep 11 00:32:38.374099 oslogin_cache_refresh[1565]: Failure getting users, quitting
Sep 11 00:32:38.376504 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting users, quitting
Sep 11 00:32:38.376504 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 11 00:32:38.376504 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing group entry cache
Sep 11 00:32:38.369069 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 11 00:32:38.374118 oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 11 00:32:38.374177 oslogin_cache_refresh[1565]: Refreshing group entry cache
Sep 11 00:32:38.377028 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 11 00:32:38.379255 extend-filesystems[1564]: Checking size of /dev/vda9
Sep 11 00:32:38.379598 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 11 00:32:38.382412 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting groups, quitting
Sep 11 00:32:38.382412 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 11 00:32:38.381306 oslogin_cache_refresh[1565]: Failure getting groups, quitting
Sep 11 00:32:38.381331 oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 11 00:32:38.391772 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 11 00:32:38.393965 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 11 00:32:38.395013 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 11 00:32:38.395934 extend-filesystems[1564]: Resized partition /dev/vda9
Sep 11 00:32:38.396952 systemd[1]: Starting update-engine.service - Update Engine...
Sep 11 00:32:38.400470 extend-filesystems[1588]: resize2fs 1.47.2 (1-Jan-2025)
Sep 11 00:32:38.400521 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 11 00:32:38.407361 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 11 00:32:38.412626 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 11 00:32:38.414943 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 11 00:32:38.415176 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 11 00:32:38.415526 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 11 00:32:38.415762 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 11 00:32:38.417008 jq[1589]: true
Sep 11 00:32:38.419077 systemd[1]: motdgen.service: Deactivated successfully.
Sep 11 00:32:38.419326 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 11 00:32:38.421281 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 11 00:32:38.421529 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 11 00:32:38.431814 (ntainerd)[1595]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 11 00:32:38.435489 jq[1594]: true
Sep 11 00:32:38.437333 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 11 00:32:38.461921 update_engine[1587]: I20250911 00:32:38.461841 1587 main.cc:92] Flatcar Update Engine starting
Sep 11 00:32:38.463560 extend-filesystems[1588]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 11 00:32:38.463560 extend-filesystems[1588]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 11 00:32:38.463560 extend-filesystems[1588]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 11 00:32:38.467086 extend-filesystems[1564]: Resized filesystem in /dev/vda9
Sep 11 00:32:38.469838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:32:38.471978 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 11 00:32:38.472493 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 11 00:32:38.476381 tar[1593]: linux-amd64/helm
Sep 11 00:32:38.486863 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:32:38.487120 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:32:38.490665 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:32:38.506976 dbus-daemon[1561]: [system] SELinux support is enabled
Sep 11 00:32:38.507133 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 11 00:32:38.510490 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 11 00:32:38.511975 update_engine[1587]: I20250911 00:32:38.511723 1587 update_check_scheduler.cc:74] Next update check in 6m20s
Sep 11 00:32:38.510514 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 11 00:32:38.511939 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 11 00:32:38.511956 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 11 00:32:38.513383 systemd[1]: Started update-engine.service - Update Engine.
Sep 11 00:32:38.517881 bash[1627]: Updated "/home/core/.ssh/authorized_keys"
Sep 11 00:32:38.533183 kernel: kvm_amd: TSC scaling supported
Sep 11 00:32:38.533656 kernel: kvm_amd: Nested Virtualization enabled
Sep 11 00:32:38.533690 kernel: kvm_amd: Nested Paging enabled
Sep 11 00:32:38.533703 kernel: kvm_amd: LBR virtualization supported
Sep 11 00:32:38.541526 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 11 00:32:38.541556 kernel: kvm_amd: Virtual GIF supported
Sep 11 00:32:38.541759 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 11 00:32:38.543633 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 11 00:32:38.571544 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 11 00:32:38.607365 kernel: EDAC MC: Ver: 3.0.0
Sep 11 00:32:38.619728 systemd-logind[1583]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 11 00:32:38.620065 systemd-logind[1583]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 11 00:32:38.621059 systemd-logind[1583]: New seat seat0.
Sep 11 00:32:38.621832 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 11 00:32:38.626609 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 11 00:32:38.642558 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:32:38.666418 containerd[1595]: time="2025-09-11T00:32:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 11 00:32:38.667138 containerd[1595]: time="2025-09-11T00:32:38.666977139Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 11 00:32:38.676033 containerd[1595]: time="2025-09-11T00:32:38.675981636Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.421µs"
Sep 11 00:32:38.676033 containerd[1595]: time="2025-09-11T00:32:38.676028173Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 11 00:32:38.676101 containerd[1595]: time="2025-09-11T00:32:38.676047850Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 11 00:32:38.676331 containerd[1595]: time="2025-09-11T00:32:38.676245601Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 11 00:32:38.676331 containerd[1595]: time="2025-09-11T00:32:38.676266159Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 11 00:32:38.676331 containerd[1595]: time="2025-09-11T00:32:38.676292138Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 11 00:32:38.676402 containerd[1595]: time="2025-09-11T00:32:38.676385834Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 11 00:32:38.676402 containerd[1595]: time="2025-09-11T00:32:38.676398357Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 11 00:32:38.676773 containerd[1595]: time="2025-09-11T00:32:38.676695915Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 11 00:32:38.676773 containerd[1595]: time="2025-09-11T00:32:38.676716193Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 11 00:32:38.676773 containerd[1595]: time="2025-09-11T00:32:38.676727033Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 11 00:32:38.676773 containerd[1595]: time="2025-09-11T00:32:38.676735499Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 11 00:32:38.676859 containerd[1595]: time="2025-09-11T00:32:38.676828634Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 11 00:32:38.677084 containerd[1595]: time="2025-09-11T00:32:38.677059637Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 11 00:32:38.677112 containerd[1595]: time="2025-09-11T00:32:38.677096937Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 11 00:32:38.677112 containerd[1595]: time="2025-09-11T00:32:38.677108058Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 11 00:32:38.677155 containerd[1595]: time="2025-09-11T00:32:38.677139988Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 11 00:32:38.677451 containerd[1595]: time="2025-09-11T00:32:38.677427878Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 11 00:32:38.677518 containerd[1595]: time="2025-09-11T00:32:38.677497789Z" level=info msg="metadata content store policy set" policy=shared
Sep 11 00:32:38.683041 containerd[1595]: time="2025-09-11T00:32:38.682995198Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 11 00:32:38.683078 containerd[1595]: time="2025-09-11T00:32:38.683043479Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 11 00:32:38.683078 containerd[1595]: time="2025-09-11T00:32:38.683073696Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 11 00:32:38.683115 containerd[1595]: time="2025-09-11T00:32:38.683085077Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 11 00:32:38.683115 containerd[1595]: time="2025-09-11T00:32:38.683098172Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 11 00:32:38.683115 containerd[1595]: time="2025-09-11T00:32:38.683109172Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 11 00:32:38.683167 containerd[1595]: time="2025-09-11T00:32:38.683124371Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 11 00:32:38.683167 containerd[1595]: time="2025-09-11T00:32:38.683153085Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 11 00:32:38.683167 containerd[1595]: time="2025-09-11T00:32:38.683164195Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 11 00:32:38.683228 containerd[1595]: time="2025-09-11T00:32:38.683174314Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 11 00:32:38.683228 containerd[1595]: time="2025-09-11T00:32:38.683184984Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 11 00:32:38.683228 containerd[1595]: time="2025-09-11T00:32:38.683203770Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 11 00:32:38.683534 containerd[1595]: time="2025-09-11T00:32:38.683358870Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 11 00:32:38.683534 containerd[1595]: time="2025-09-11T00:32:38.683382725Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 11 00:32:38.683534 containerd[1595]: time="2025-09-11T00:32:38.683398184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 11 00:32:38.683534 containerd[1595]: time="2025-09-11T00:32:38.683409065Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 11 00:32:38.683534 containerd[1595]: time="2025-09-11T00:32:38.683419915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 11 00:32:38.683534 containerd[1595]: time="2025-09-11T00:32:38.683430775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 11 00:32:38.683534 containerd[1595]: time="2025-09-11T00:32:38.683441375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 11 00:32:38.683534 containerd[1595]: time="2025-09-11T00:32:38.683450843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 11 00:32:38.683534 containerd[1595]: time="2025-09-11T00:32:38.683461914Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 11 00:32:38.683534 containerd[1595]: time="2025-09-11T00:32:38.683471892Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 11 00:32:38.683534 containerd[1595]: time="2025-09-11T00:32:38.683482142Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 11 00:32:38.683751 containerd[1595]: time="2025-09-11T00:32:38.683554938Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 11 00:32:38.683751 containerd[1595]: time="2025-09-11T00:32:38.683568233Z" level=info msg="Start snapshots syncer"
Sep 11 00:32:38.683751 containerd[1595]: time="2025-09-11T00:32:38.683597408Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 11 00:32:38.684162 containerd[1595]: time="2025-09-11T00:32:38.683846475Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 11 00:32:38.684162 containerd[1595]: time="2025-09-11T00:32:38.683900326Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 11 00:32:38.684288 containerd[1595]: time="2025-09-11T00:32:38.683990064Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 11 00:32:38.684288 containerd[1595]: time="2025-09-11T00:32:38.684103868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 11 00:32:38.684288 containerd[1595]: time="2025-09-11T00:32:38.684122623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 11 00:32:38.684288 containerd[1595]: time="2025-09-11T00:32:38.684132141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 11 00:32:38.684288 containerd[1595]: time="2025-09-11T00:32:38.684143231Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 11 00:32:38.684288 containerd[1595]: time="2025-09-11T00:32:38.684154052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 11 00:32:38.684288 containerd[1595]: time="2025-09-11T00:32:38.684171745Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 11 00:32:38.684288 containerd[1595]: time="2025-09-11T00:32:38.684185050Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 11 00:32:38.684288 containerd[1595]: time="2025-09-11T00:32:38.684211159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 11 00:32:38.684288 containerd[1595]: time="2025-09-11T00:32:38.684221919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 11 00:32:38.684288 containerd[1595]: time="2025-09-11T00:32:38.684231978Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 11 00:32:38.684288 containerd[1595]: time="2025-09-11T00:32:38.684266803Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 11 00:32:38.684288 containerd[1595]: time="2025-09-11T00:32:38.684280960Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 11 00:32:38.684288 containerd[1595]: time="2025-09-11T00:32:38.684289706Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 11 00:32:38.684549 containerd[1595]: time="2025-09-11T00:32:38.684300376Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 11 00:32:38.684549 containerd[1595]: time="2025-09-11T00:32:38.684308892Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 11 00:32:38.684549 containerd[1595]: time="2025-09-11T00:32:38.684332857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 11 00:32:38.684549 containerd[1595]: time="2025-09-11T00:32:38.684356411Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 11 00:32:38.684549 containerd[1595]: time="2025-09-11T00:32:38.684374655Z" level=info msg="runtime interface created"
Sep 11 00:32:38.684549 containerd[1595]: time="2025-09-11T00:32:38.684380026Z" level=info msg="created NRI interface"
Sep 11 00:32:38.684549 containerd[1595]: time="2025-09-11T00:32:38.684389293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 11 00:32:38.684549 containerd[1595]: time="2025-09-11T00:32:38.684399732Z" level=info msg="Connect containerd service"
Sep 11 00:32:38.684549 containerd[1595]: time="2025-09-11T00:32:38.684431783Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 11 00:32:38.685503 containerd[1595]: time="2025-09-11T00:32:38.685385211Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 11 00:32:38.718964 sshd_keygen[1607]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 11 00:32:38.749402 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 11 00:32:38.752808 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 11 00:32:38.770088 systemd[1]: issuegen.service: Deactivated successfully.
Sep 11 00:32:38.770412 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 11 00:32:38.773689 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 11 00:32:38.781131 containerd[1595]: time="2025-09-11T00:32:38.781079932Z" level=info msg="Start subscribing containerd event"
Sep 11 00:32:38.781207 containerd[1595]: time="2025-09-11T00:32:38.781147329Z" level=info msg="Start recovering state"
Sep 11 00:32:38.781281 containerd[1595]: time="2025-09-11T00:32:38.781261142Z" level=info msg="Start event monitor"
Sep 11 00:32:38.781308 containerd[1595]: time="2025-09-11T00:32:38.781283634Z" level=info msg="Start cni network conf syncer for default"
Sep 11 00:32:38.781308 containerd[1595]: time="2025-09-11T00:32:38.781292581Z" level=info msg="Start streaming server"
Sep 11 00:32:38.781308 containerd[1595]: time="2025-09-11T00:32:38.781303452Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 11 00:32:38.781377 containerd[1595]: time="2025-09-11T00:32:38.781330332Z" level=info msg="runtime interface starting up..."
Sep 11 00:32:38.781377 containerd[1595]: time="2025-09-11T00:32:38.781337736Z" level=info msg="starting plugins..."
Sep 11 00:32:38.781377 containerd[1595]: time="2025-09-11T00:32:38.781356251Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 11 00:32:38.781604 containerd[1595]: time="2025-09-11T00:32:38.781561656Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 11 00:32:38.781733 containerd[1595]: time="2025-09-11T00:32:38.781696929Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 11 00:32:38.781837 containerd[1595]: time="2025-09-11T00:32:38.781768293Z" level=info msg="containerd successfully booted in 0.115834s"
Sep 11 00:32:38.781859 systemd[1]: Started containerd.service - containerd container runtime.
Sep 11 00:32:38.795879 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 11 00:32:38.799112 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 11 00:32:38.801268 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 11 00:32:38.802727 systemd[1]: Reached target getty.target - Login Prompts.
Sep 11 00:32:38.929712 tar[1593]: linux-amd64/LICENSE
Sep 11 00:32:38.929811 tar[1593]: linux-amd64/README.md
Sep 11 00:32:38.955536 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 11 00:32:39.876475 systemd-networkd[1499]: eth0: Gained IPv6LL
Sep 11 00:32:39.879240 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 11 00:32:39.880970 systemd[1]: Reached target network-online.target - Network is Online.
Sep 11 00:32:39.883445 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 11 00:32:39.885791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:32:39.887937 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 11 00:32:39.921538 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 11 00:32:39.924529 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 11 00:32:39.924803 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 11 00:32:39.926307 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 11 00:32:40.576848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:32:40.578391 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 11 00:32:40.579606 systemd[1]: Startup finished in 2.896s (kernel) + 5.669s (initrd) + 4.370s (userspace) = 12.937s.
Sep 11 00:32:40.583368 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 11 00:32:40.981127 kubelet[1706]: E0911 00:32:40.980995 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 11 00:32:40.984949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 11 00:32:40.985145 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 11 00:32:40.985498 systemd[1]: kubelet.service: Consumed 947ms CPU time, 265M memory peak.
Sep 11 00:32:44.251135 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 11 00:32:44.252481 systemd[1]: Started sshd@0-10.0.0.147:22-10.0.0.1:33240.service - OpenSSH per-connection server daemon (10.0.0.1:33240).
Sep 11 00:32:44.478383 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 33240 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U
Sep 11 00:32:44.480252 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:44.486402 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 11 00:32:44.487481 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 11 00:32:44.493452 systemd-logind[1583]: New session 1 of user core.
Sep 11 00:32:44.510532 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 11 00:32:44.513380 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 11 00:32:44.537213 (systemd)[1723]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 11 00:32:44.539548 systemd-logind[1583]: New session c1 of user core.
Sep 11 00:32:44.684278 systemd[1723]: Queued start job for default target default.target.
Sep 11 00:32:44.706517 systemd[1723]: Created slice app.slice - User Application Slice.
Sep 11 00:32:44.706542 systemd[1723]: Reached target paths.target - Paths.
Sep 11 00:32:44.706578 systemd[1723]: Reached target timers.target - Timers.
Sep 11 00:32:44.707953 systemd[1723]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 11 00:32:44.717814 systemd[1723]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 11 00:32:44.717878 systemd[1723]: Reached target sockets.target - Sockets.
Sep 11 00:32:44.717912 systemd[1723]: Reached target basic.target - Basic System.
Sep 11 00:32:44.717950 systemd[1723]: Reached target default.target - Main User Target.
Sep 11 00:32:44.717984 systemd[1723]: Startup finished in 171ms.
Sep 11 00:32:44.718425 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 11 00:32:44.719967 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 11 00:32:44.784529 systemd[1]: Started sshd@1-10.0.0.147:22-10.0.0.1:33254.service - OpenSSH per-connection server daemon (10.0.0.1:33254).
Sep 11 00:32:44.832861 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 33254 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U
Sep 11 00:32:44.834177 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:44.838483 systemd-logind[1583]: New session 2 of user core.
Sep 11 00:32:44.848434 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 11 00:32:44.899975 sshd[1737]: Connection closed by 10.0.0.1 port 33254
Sep 11 00:32:44.900267 sshd-session[1735]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:44.913821 systemd[1]: sshd@1-10.0.0.147:22-10.0.0.1:33254.service: Deactivated successfully.
Sep 11 00:32:44.915579 systemd[1]: session-2.scope: Deactivated successfully.
Sep 11 00:32:44.916274 systemd-logind[1583]: Session 2 logged out. Waiting for processes to exit.
Sep 11 00:32:44.919072 systemd[1]: Started sshd@2-10.0.0.147:22-10.0.0.1:33270.service - OpenSSH per-connection server daemon (10.0.0.1:33270).
Sep 11 00:32:44.919726 systemd-logind[1583]: Removed session 2.
Sep 11 00:32:44.985093 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 33270 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U
Sep 11 00:32:44.986417 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:44.990422 systemd-logind[1583]: New session 3 of user core.
Sep 11 00:32:45.000463 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 11 00:32:45.048565 sshd[1745]: Connection closed by 10.0.0.1 port 33270
Sep 11 00:32:45.049040 sshd-session[1743]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:45.056850 systemd[1]: sshd@2-10.0.0.147:22-10.0.0.1:33270.service: Deactivated successfully.
Sep 11 00:32:45.058623 systemd[1]: session-3.scope: Deactivated successfully.
Sep 11 00:32:45.059378 systemd-logind[1583]: Session 3 logged out. Waiting for processes to exit.
Sep 11 00:32:45.062162 systemd[1]: Started sshd@3-10.0.0.147:22-10.0.0.1:33272.service - OpenSSH per-connection server daemon (10.0.0.1:33272).
Sep 11 00:32:45.062770 systemd-logind[1583]: Removed session 3.
Sep 11 00:32:45.110083 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 33272 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U
Sep 11 00:32:45.111666 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:45.115801 systemd-logind[1583]: New session 4 of user core.
Sep 11 00:32:45.131454 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 11 00:32:45.183648 sshd[1753]: Connection closed by 10.0.0.1 port 33272
Sep 11 00:32:45.183944 sshd-session[1751]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:45.193881 systemd[1]: sshd@3-10.0.0.147:22-10.0.0.1:33272.service: Deactivated successfully.
Sep 11 00:32:45.195595 systemd[1]: session-4.scope: Deactivated successfully.
Sep 11 00:32:45.196276 systemd-logind[1583]: Session 4 logged out. Waiting for processes to exit.
Sep 11 00:32:45.199040 systemd[1]: Started sshd@4-10.0.0.147:22-10.0.0.1:33282.service - OpenSSH per-connection server daemon (10.0.0.1:33282).
Sep 11 00:32:45.199808 systemd-logind[1583]: Removed session 4.
Sep 11 00:32:45.256487 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 33282 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U
Sep 11 00:32:45.257915 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:45.262288 systemd-logind[1583]: New session 5 of user core.
Sep 11 00:32:45.274441 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 11 00:32:45.333083 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 11 00:32:45.333411 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 00:32:45.359254 sudo[1762]: pam_unix(sudo:session): session closed for user root
Sep 11 00:32:45.360818 sshd[1761]: Connection closed by 10.0.0.1 port 33282
Sep 11 00:32:45.361216 sshd-session[1759]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:45.374898 systemd[1]: sshd@4-10.0.0.147:22-10.0.0.1:33282.service: Deactivated successfully.
Sep 11 00:32:45.376548 systemd[1]: session-5.scope: Deactivated successfully.
Sep 11 00:32:45.377204 systemd-logind[1583]: Session 5 logged out. Waiting for processes to exit.
Sep 11 00:32:45.380062 systemd[1]: Started sshd@5-10.0.0.147:22-10.0.0.1:33284.service - OpenSSH per-connection server daemon (10.0.0.1:33284).
Sep 11 00:32:45.380709 systemd-logind[1583]: Removed session 5.
Sep 11 00:32:45.435445 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 33284 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U
Sep 11 00:32:45.436731 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:45.440767 systemd-logind[1583]: New session 6 of user core.
Sep 11 00:32:45.451446 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 11 00:32:45.504078 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 11 00:32:45.504403 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 00:32:45.857767 sudo[1772]: pam_unix(sudo:session): session closed for user root
Sep 11 00:32:45.864453 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 11 00:32:45.864757 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 00:32:45.874598 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 00:32:45.924856 augenrules[1794]: No rules
Sep 11 00:32:45.926534 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 00:32:45.926811 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 00:32:45.927961 sudo[1771]: pam_unix(sudo:session): session closed for user root
Sep 11 00:32:45.929483 sshd[1770]: Connection closed by 10.0.0.1 port 33284
Sep 11 00:32:45.929840 sshd-session[1768]: pam_unix(sshd:session): session closed for user core
Sep 11 00:32:45.943017 systemd[1]: sshd@5-10.0.0.147:22-10.0.0.1:33284.service: Deactivated successfully.
Sep 11 00:32:45.944757 systemd[1]: session-6.scope: Deactivated successfully.
Sep 11 00:32:45.945465 systemd-logind[1583]: Session 6 logged out. Waiting for processes to exit.
Sep 11 00:32:45.948151 systemd[1]: Started sshd@6-10.0.0.147:22-10.0.0.1:33294.service - OpenSSH per-connection server daemon (10.0.0.1:33294).
Sep 11 00:32:45.948909 systemd-logind[1583]: Removed session 6.
Sep 11 00:32:46.006890 sshd[1803]: Accepted publickey for core from 10.0.0.1 port 33294 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U
Sep 11 00:32:46.008140 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:32:46.012184 systemd-logind[1583]: New session 7 of user core.
Sep 11 00:32:46.029457 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 11 00:32:46.081104 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 11 00:32:46.081429 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 00:32:46.761667 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 11 00:32:46.781639 (dockerd)[1827]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 11 00:32:47.325147 dockerd[1827]: time="2025-09-11T00:32:47.325072729Z" level=info msg="Starting up"
Sep 11 00:32:47.325952 dockerd[1827]: time="2025-09-11T00:32:47.325929515Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 11 00:32:47.402669 dockerd[1827]: time="2025-09-11T00:32:47.402479504Z" level=info msg="Loading containers: start."
Sep 11 00:32:47.412338 kernel: Initializing XFRM netlink socket
Sep 11 00:32:47.652868 systemd-networkd[1499]: docker0: Link UP
Sep 11 00:32:47.658337 dockerd[1827]: time="2025-09-11T00:32:47.658285502Z" level=info msg="Loading containers: done."
Sep 11 00:32:47.676064 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4192104412-merged.mount: Deactivated successfully.
Sep 11 00:32:47.677778 dockerd[1827]: time="2025-09-11T00:32:47.677744093Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 11 00:32:47.677890 dockerd[1827]: time="2025-09-11T00:32:47.677865441Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 11 00:32:47.677993 dockerd[1827]: time="2025-09-11T00:32:47.677976449Z" level=info msg="Initializing buildkit"
Sep 11 00:32:47.708938 dockerd[1827]: time="2025-09-11T00:32:47.708905432Z" level=info msg="Completed buildkit initialization"
Sep 11 00:32:47.713401 dockerd[1827]: time="2025-09-11T00:32:47.713379412Z" level=info msg="Daemon has completed initialization"
Sep 11 00:32:47.713465 dockerd[1827]: time="2025-09-11T00:32:47.713421411Z" level=info msg="API listen on /run/docker.sock"
Sep 11 00:32:47.713643 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 11 00:32:48.697943 containerd[1595]: time="2025-09-11T00:32:48.697872909Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 11 00:32:49.308199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2585459131.mount: Deactivated successfully.
Sep 11 00:32:50.590831 containerd[1595]: time="2025-09-11T00:32:50.590742996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:50.591562 containerd[1595]: time="2025-09-11T00:32:50.591474949Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 11 00:32:50.598654 containerd[1595]: time="2025-09-11T00:32:50.598594020Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:50.601342 containerd[1595]: time="2025-09-11T00:32:50.601282311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:50.604874 containerd[1595]: time="2025-09-11T00:32:50.604825526Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.906905619s" Sep 11 00:32:50.604874 containerd[1595]: time="2025-09-11T00:32:50.604871021Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 11 00:32:50.605890 containerd[1595]: time="2025-09-11T00:32:50.605852572Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 11 00:32:51.123368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Sep 11 00:32:51.124989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:32:51.331301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:32:51.337046 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:32:51.585911 kubelet[2101]: E0911 00:32:51.585783 2101 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:32:51.592531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:32:51.592723 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:32:51.593091 systemd[1]: kubelet.service: Consumed 513ms CPU time, 108.9M memory peak. 
Sep 11 00:32:52.357282 containerd[1595]: time="2025-09-11T00:32:52.357211295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:52.357868 containerd[1595]: time="2025-09-11T00:32:52.357818253Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 11 00:32:52.358893 containerd[1595]: time="2025-09-11T00:32:52.358862111Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:52.361476 containerd[1595]: time="2025-09-11T00:32:52.361428584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:52.362310 containerd[1595]: time="2025-09-11T00:32:52.362278758Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.756388305s" Sep 11 00:32:52.362371 containerd[1595]: time="2025-09-11T00:32:52.362310147Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 11 00:32:52.363056 containerd[1595]: time="2025-09-11T00:32:52.363012083Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 11 00:32:54.215165 containerd[1595]: time="2025-09-11T00:32:54.215101061Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:54.215722 containerd[1595]: time="2025-09-11T00:32:54.215685457Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 11 00:32:54.216865 containerd[1595]: time="2025-09-11T00:32:54.216830925Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:54.219098 containerd[1595]: time="2025-09-11T00:32:54.219076396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:54.220897 containerd[1595]: time="2025-09-11T00:32:54.220863748Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.857813334s" Sep 11 00:32:54.220897 containerd[1595]: time="2025-09-11T00:32:54.220896169Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 11 00:32:54.221425 containerd[1595]: time="2025-09-11T00:32:54.221393412Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 11 00:32:55.083646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount772324227.mount: Deactivated successfully. 
Sep 11 00:32:55.345350 containerd[1595]: time="2025-09-11T00:32:55.345230607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:55.346156 containerd[1595]: time="2025-09-11T00:32:55.346118022Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 11 00:32:55.347290 containerd[1595]: time="2025-09-11T00:32:55.347256497Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:55.349037 containerd[1595]: time="2025-09-11T00:32:55.349008362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:55.349544 containerd[1595]: time="2025-09-11T00:32:55.349502970Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.12808384s" Sep 11 00:32:55.349571 containerd[1595]: time="2025-09-11T00:32:55.349542474Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 11 00:32:55.350220 containerd[1595]: time="2025-09-11T00:32:55.350005011Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 11 00:32:55.804032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113423352.mount: Deactivated successfully. 
Sep 11 00:32:56.448493 containerd[1595]: time="2025-09-11T00:32:56.448430611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:56.449287 containerd[1595]: time="2025-09-11T00:32:56.449240540Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 11 00:32:56.450630 containerd[1595]: time="2025-09-11T00:32:56.450578890Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:56.452887 containerd[1595]: time="2025-09-11T00:32:56.452839750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:56.453781 containerd[1595]: time="2025-09-11T00:32:56.453660048Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.103626614s" Sep 11 00:32:56.453781 containerd[1595]: time="2025-09-11T00:32:56.453691868Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 11 00:32:56.454383 containerd[1595]: time="2025-09-11T00:32:56.454352016Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 11 00:32:56.845910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617604049.mount: Deactivated successfully. 
Sep 11 00:32:56.852246 containerd[1595]: time="2025-09-11T00:32:56.852202157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:32:56.852957 containerd[1595]: time="2025-09-11T00:32:56.852918390Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 11 00:32:56.854102 containerd[1595]: time="2025-09-11T00:32:56.854068207Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:32:56.855897 containerd[1595]: time="2025-09-11T00:32:56.855857702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:32:56.856503 containerd[1595]: time="2025-09-11T00:32:56.856463839Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 402.078551ms" Sep 11 00:32:56.856537 containerd[1595]: time="2025-09-11T00:32:56.856505818Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 11 00:32:56.856979 containerd[1595]: time="2025-09-11T00:32:56.856948218Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 11 00:32:57.315995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1396912195.mount: 
Deactivated successfully. Sep 11 00:32:59.020757 containerd[1595]: time="2025-09-11T00:32:59.020697840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:59.021513 containerd[1595]: time="2025-09-11T00:32:59.021472493Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 11 00:32:59.022676 containerd[1595]: time="2025-09-11T00:32:59.022638350Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:59.025253 containerd[1595]: time="2025-09-11T00:32:59.025199022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:59.026345 containerd[1595]: time="2025-09-11T00:32:59.026282404Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.169308938s" Sep 11 00:32:59.026345 containerd[1595]: time="2025-09-11T00:32:59.026343248Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 11 00:33:01.623411 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 11 00:33:01.625301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:33:01.817004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:33:01.831626 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:33:02.072398 kubelet[2269]: E0911 00:33:02.072215 2269 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:33:02.076896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:33:02.077115 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:33:02.077484 systemd[1]: kubelet.service: Consumed 202ms CPU time, 110.9M memory peak. Sep 11 00:33:02.179919 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:33:02.180137 systemd[1]: kubelet.service: Consumed 202ms CPU time, 110.9M memory peak. Sep 11 00:33:02.182385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:33:02.206707 systemd[1]: Reload requested from client PID 2285 ('systemctl') (unit session-7.scope)... Sep 11 00:33:02.206722 systemd[1]: Reloading... Sep 11 00:33:02.282357 zram_generator::config[2328]: No configuration found. Sep 11 00:33:02.510663 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:33:02.626870 systemd[1]: Reloading finished in 419 ms. Sep 11 00:33:02.691025 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 11 00:33:02.691136 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 11 00:33:02.691497 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:33:02.691550 systemd[1]: kubelet.service: Consumed 143ms CPU time, 98.3M memory peak. Sep 11 00:33:02.693308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:33:02.876090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:33:02.896622 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:33:02.938145 kubelet[2377]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:33:02.938145 kubelet[2377]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 11 00:33:02.938145 kubelet[2377]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 11 00:33:02.938580 kubelet[2377]: I0911 00:33:02.938186 2377 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:33:03.287859 kubelet[2377]: I0911 00:33:03.287737 2377 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 11 00:33:03.287859 kubelet[2377]: I0911 00:33:03.287770 2377 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:33:03.288085 kubelet[2377]: I0911 00:33:03.288050 2377 server.go:934] "Client rotation is on, will bootstrap in background" Sep 11 00:33:03.313052 kubelet[2377]: E0911 00:33:03.313002 2377 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:33:03.313755 kubelet[2377]: I0911 00:33:03.313725 2377 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:33:03.320954 kubelet[2377]: I0911 00:33:03.320927 2377 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:33:03.326791 kubelet[2377]: I0911 00:33:03.326753 2377 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:33:03.327271 kubelet[2377]: I0911 00:33:03.327243 2377 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 11 00:33:03.327421 kubelet[2377]: I0911 00:33:03.327385 2377 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:33:03.327609 kubelet[2377]: I0911 00:33:03.327415 2377 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 11 00:33:03.327710 kubelet[2377]: I0911 00:33:03.327611 2377 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:33:03.327710 kubelet[2377]: I0911 00:33:03.327620 2377 container_manager_linux.go:300] "Creating device plugin manager" Sep 11 00:33:03.327761 kubelet[2377]: I0911 00:33:03.327730 2377 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:33:03.329605 kubelet[2377]: I0911 00:33:03.329575 2377 kubelet.go:408] "Attempting to sync node with API server" Sep 11 00:33:03.329605 kubelet[2377]: I0911 00:33:03.329598 2377 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:33:03.329667 kubelet[2377]: I0911 00:33:03.329632 2377 kubelet.go:314] "Adding apiserver pod source" Sep 11 00:33:03.329667 kubelet[2377]: I0911 00:33:03.329647 2377 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:33:03.331967 kubelet[2377]: I0911 00:33:03.331947 2377 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:33:03.332344 kubelet[2377]: I0911 00:33:03.332309 2377 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:33:03.332400 kubelet[2377]: W0911 00:33:03.332378 2377 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 11 00:33:03.333862 kubelet[2377]: W0911 00:33:03.333703 2377 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Sep 11 00:33:03.333862 kubelet[2377]: E0911 00:33:03.333772 2377 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:33:03.335258 kubelet[2377]: W0911 00:33:03.335218 2377 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Sep 11 00:33:03.335378 kubelet[2377]: E0911 00:33:03.335354 2377 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:33:03.335430 kubelet[2377]: I0911 00:33:03.335341 2377 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:33:03.335807 kubelet[2377]: I0911 00:33:03.335793 2377 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:33:03.335888 kubelet[2377]: I0911 00:33:03.335267 2377 server.go:1274] "Started kubelet" Sep 11 00:33:03.337625 kubelet[2377]: I0911 00:33:03.337590 2377 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Sep 11 00:33:03.338999 kubelet[2377]: I0911 00:33:03.338015 2377 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:33:03.340167 kubelet[2377]: I0911 00:33:03.340140 2377 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 11 00:33:03.340383 kubelet[2377]: E0911 00:33:03.340337 2377 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:33:03.340667 kubelet[2377]: I0911 00:33:03.340637 2377 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 11 00:33:03.340722 kubelet[2377]: I0911 00:33:03.340689 2377 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:33:03.341171 kubelet[2377]: W0911 00:33:03.341133 2377 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Sep 11 00:33:03.341236 kubelet[2377]: E0911 00:33:03.341175 2377 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:33:03.341236 kubelet[2377]: E0911 00:33:03.341225 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="200ms" Sep 11 00:33:03.341527 kubelet[2377]: I0911 00:33:03.341505 2377 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:33:03.341603 kubelet[2377]: I0911 
00:33:03.341586 2377 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:33:03.343055 kubelet[2377]: I0911 00:33:03.342875 2377 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:33:03.344126 kubelet[2377]: E0911 00:33:03.342905 2377 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.147:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.147:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186413202363c82e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:33:03.33495915 +0000 UTC m=+0.434565336,LastTimestamp:2025-09-11 00:33:03.33495915 +0000 UTC m=+0.434565336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 00:33:03.344608 kubelet[2377]: I0911 00:33:03.344585 2377 server.go:449] "Adding debug handlers to kubelet server" Sep 11 00:33:03.345124 kubelet[2377]: I0911 00:33:03.345097 2377 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:33:03.347124 kubelet[2377]: E0911 00:33:03.347095 2377 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:33:03.357634 kubelet[2377]: I0911 00:33:03.357607 2377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:33:03.359329 kubelet[2377]: I0911 00:33:03.359043 2377 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 11 00:33:03.359329 kubelet[2377]: I0911 00:33:03.359063 2377 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 11 00:33:03.359329 kubelet[2377]: I0911 00:33:03.359080 2377 kubelet.go:2321] "Starting kubelet main sync loop" Sep 11 00:33:03.359329 kubelet[2377]: E0911 00:33:03.359117 2377 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:33:03.361972 kubelet[2377]: I0911 00:33:03.361943 2377 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 11 00:33:03.361972 kubelet[2377]: I0911 00:33:03.361965 2377 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 11 00:33:03.362038 kubelet[2377]: I0911 00:33:03.361984 2377 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:33:03.365004 kubelet[2377]: W0911 00:33:03.364949 2377 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Sep 11 00:33:03.365004 kubelet[2377]: E0911 00:33:03.365002 2377 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:33:03.365155 kubelet[2377]: I0911 00:33:03.365079 2377 policy_none.go:49] "None policy: Start" Sep 11 00:33:03.365646 kubelet[2377]: I0911 00:33:03.365617 2377 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 11 00:33:03.365738 kubelet[2377]: I0911 00:33:03.365722 2377 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:33:03.372593 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Sep 11 00:33:03.385330 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 00:33:03.388268 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 11 00:33:03.404340 kubelet[2377]: I0911 00:33:03.404291 2377 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:33:03.405395 kubelet[2377]: I0911 00:33:03.404510 2377 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:33:03.405395 kubelet[2377]: I0911 00:33:03.404534 2377 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:33:03.405395 kubelet[2377]: I0911 00:33:03.404748 2377 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:33:03.405813 kubelet[2377]: E0911 00:33:03.405794 2377 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 00:33:03.468975 systemd[1]: Created slice kubepods-burstable-pod89ee58a8132df457549d0f3f14101417.slice - libcontainer container kubepods-burstable-pod89ee58a8132df457549d0f3f14101417.slice. Sep 11 00:33:03.498438 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. 
Sep 11 00:33:03.506656 kubelet[2377]: I0911 00:33:03.506635 2377 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:33:03.506960 kubelet[2377]: E0911 00:33:03.506903 2377 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Sep 11 00:33:03.509743 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 11 00:33:03.542075 kubelet[2377]: I0911 00:33:03.541923 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:03.542075 kubelet[2377]: I0911 00:33:03.541985 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:03.542075 kubelet[2377]: I0911 00:33:03.542008 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:33:03.542075 kubelet[2377]: I0911 00:33:03.542025 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/89ee58a8132df457549d0f3f14101417-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"89ee58a8132df457549d0f3f14101417\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:33:03.542075 kubelet[2377]: I0911 00:33:03.542041 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:03.542380 kubelet[2377]: I0911 00:33:03.542054 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:03.542380 kubelet[2377]: I0911 00:33:03.542069 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89ee58a8132df457549d0f3f14101417-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"89ee58a8132df457549d0f3f14101417\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:33:03.542380 kubelet[2377]: I0911 00:33:03.542082 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89ee58a8132df457549d0f3f14101417-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"89ee58a8132df457549d0f3f14101417\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:33:03.542380 kubelet[2377]: I0911 00:33:03.542097 2377 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:03.542380 kubelet[2377]: E0911 00:33:03.542242 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="400ms" Sep 11 00:33:03.708353 kubelet[2377]: I0911 00:33:03.708308 2377 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:33:03.708708 kubelet[2377]: E0911 00:33:03.708673 2377 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Sep 11 00:33:03.796363 kubelet[2377]: E0911 00:33:03.796269 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:03.798350 containerd[1595]: time="2025-09-11T00:33:03.797666863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:89ee58a8132df457549d0f3f14101417,Namespace:kube-system,Attempt:0,}" Sep 11 00:33:03.807422 kubelet[2377]: E0911 00:33:03.807400 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:03.807814 containerd[1595]: time="2025-09-11T00:33:03.807758748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 11 00:33:03.812017 kubelet[2377]: E0911 00:33:03.811974 2377 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:03.812394 containerd[1595]: time="2025-09-11T00:33:03.812358244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 11 00:33:03.832104 containerd[1595]: time="2025-09-11T00:33:03.832048400Z" level=info msg="connecting to shim 8f9b3988b6cec170b938a3b42b692973a613e17810c5b9a3a6d0149dbbaca522" address="unix:///run/containerd/s/baaa1bdda9f9eeff6b92ed99167282bf0119582ccdbd2bd70753714c71d6af9a" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:03.850340 containerd[1595]: time="2025-09-11T00:33:03.850202676Z" level=info msg="connecting to shim 88c73a07348d668261e1942ebdc0c9f71cec6103977b941e0776e9cce44ac9d9" address="unix:///run/containerd/s/24454c6c89ebf703cfffba50dc9a904cdf2df87b0bec3b39dbc3f8eebc0ff36a" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:03.859516 containerd[1595]: time="2025-09-11T00:33:03.859461500Z" level=info msg="connecting to shim aaa2e7f42063336eea5125ea76a51235e539754c754dab8c323cc65924f08ed7" address="unix:///run/containerd/s/8e390f9e458da87a2be2b395053eae7a1c2181ecb5110fe3fedc65d42dce532c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:03.866485 systemd[1]: Started cri-containerd-8f9b3988b6cec170b938a3b42b692973a613e17810c5b9a3a6d0149dbbaca522.scope - libcontainer container 8f9b3988b6cec170b938a3b42b692973a613e17810c5b9a3a6d0149dbbaca522. Sep 11 00:33:03.873474 systemd[1]: Started cri-containerd-88c73a07348d668261e1942ebdc0c9f71cec6103977b941e0776e9cce44ac9d9.scope - libcontainer container 88c73a07348d668261e1942ebdc0c9f71cec6103977b941e0776e9cce44ac9d9. Sep 11 00:33:03.890455 systemd[1]: Started cri-containerd-aaa2e7f42063336eea5125ea76a51235e539754c754dab8c323cc65924f08ed7.scope - libcontainer container aaa2e7f42063336eea5125ea76a51235e539754c754dab8c323cc65924f08ed7. 
Sep 11 00:33:03.933346 containerd[1595]: time="2025-09-11T00:33:03.931986820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"88c73a07348d668261e1942ebdc0c9f71cec6103977b941e0776e9cce44ac9d9\"" Sep 11 00:33:03.933473 kubelet[2377]: E0911 00:33:03.932853 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:03.935860 containerd[1595]: time="2025-09-11T00:33:03.935813126Z" level=info msg="CreateContainer within sandbox \"88c73a07348d668261e1942ebdc0c9f71cec6103977b941e0776e9cce44ac9d9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 00:33:03.941695 containerd[1595]: time="2025-09-11T00:33:03.941670010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:89ee58a8132df457549d0f3f14101417,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f9b3988b6cec170b938a3b42b692973a613e17810c5b9a3a6d0149dbbaca522\"" Sep 11 00:33:03.942117 kubelet[2377]: E0911 00:33:03.942094 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:03.943139 kubelet[2377]: E0911 00:33:03.943113 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="800ms" Sep 11 00:33:03.943783 containerd[1595]: time="2025-09-11T00:33:03.943687453Z" level=info msg="CreateContainer within sandbox \"8f9b3988b6cec170b938a3b42b692973a613e17810c5b9a3a6d0149dbbaca522\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 
00:33:03.951507 containerd[1595]: time="2025-09-11T00:33:03.951477482Z" level=info msg="Container 147042415b1df90eaaa9cd8a281186578285bfbdf2b2945911184a5d83ea72f2: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:03.957664 containerd[1595]: time="2025-09-11T00:33:03.957284703Z" level=info msg="Container 9cdbd836b78fd13357477c2be56ed6f7f98746d65a4cd27b215e9187df82700f: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:03.957704 containerd[1595]: time="2025-09-11T00:33:03.957682078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaa2e7f42063336eea5125ea76a51235e539754c754dab8c323cc65924f08ed7\"" Sep 11 00:33:03.958123 kubelet[2377]: E0911 00:33:03.958100 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:03.959466 containerd[1595]: time="2025-09-11T00:33:03.959445926Z" level=info msg="CreateContainer within sandbox \"aaa2e7f42063336eea5125ea76a51235e539754c754dab8c323cc65924f08ed7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 00:33:03.964774 containerd[1595]: time="2025-09-11T00:33:03.964735015Z" level=info msg="CreateContainer within sandbox \"88c73a07348d668261e1942ebdc0c9f71cec6103977b941e0776e9cce44ac9d9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9cdbd836b78fd13357477c2be56ed6f7f98746d65a4cd27b215e9187df82700f\"" Sep 11 00:33:03.965165 containerd[1595]: time="2025-09-11T00:33:03.965142659Z" level=info msg="StartContainer for \"9cdbd836b78fd13357477c2be56ed6f7f98746d65a4cd27b215e9187df82700f\"" Sep 11 00:33:03.966091 containerd[1595]: time="2025-09-11T00:33:03.966068305Z" level=info msg="connecting to shim 9cdbd836b78fd13357477c2be56ed6f7f98746d65a4cd27b215e9187df82700f" 
address="unix:///run/containerd/s/24454c6c89ebf703cfffba50dc9a904cdf2df87b0bec3b39dbc3f8eebc0ff36a" protocol=ttrpc version=3 Sep 11 00:33:03.967997 containerd[1595]: time="2025-09-11T00:33:03.967971424Z" level=info msg="CreateContainer within sandbox \"8f9b3988b6cec170b938a3b42b692973a613e17810c5b9a3a6d0149dbbaca522\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"147042415b1df90eaaa9cd8a281186578285bfbdf2b2945911184a5d83ea72f2\"" Sep 11 00:33:03.968328 containerd[1595]: time="2025-09-11T00:33:03.968278340Z" level=info msg="StartContainer for \"147042415b1df90eaaa9cd8a281186578285bfbdf2b2945911184a5d83ea72f2\"" Sep 11 00:33:03.969814 containerd[1595]: time="2025-09-11T00:33:03.969794303Z" level=info msg="connecting to shim 147042415b1df90eaaa9cd8a281186578285bfbdf2b2945911184a5d83ea72f2" address="unix:///run/containerd/s/baaa1bdda9f9eeff6b92ed99167282bf0119582ccdbd2bd70753714c71d6af9a" protocol=ttrpc version=3 Sep 11 00:33:03.971356 containerd[1595]: time="2025-09-11T00:33:03.970799298Z" level=info msg="Container 5d7504b258789fb50777c349fd2d6300553b7d62ac074bb6153d8109d6dd06d2: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:03.983159 containerd[1595]: time="2025-09-11T00:33:03.983137206Z" level=info msg="CreateContainer within sandbox \"aaa2e7f42063336eea5125ea76a51235e539754c754dab8c323cc65924f08ed7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5d7504b258789fb50777c349fd2d6300553b7d62ac074bb6153d8109d6dd06d2\"" Sep 11 00:33:03.984115 containerd[1595]: time="2025-09-11T00:33:03.984082749Z" level=info msg="StartContainer for \"5d7504b258789fb50777c349fd2d6300553b7d62ac074bb6153d8109d6dd06d2\"" Sep 11 00:33:03.985520 systemd[1]: Started cri-containerd-9cdbd836b78fd13357477c2be56ed6f7f98746d65a4cd27b215e9187df82700f.scope - libcontainer container 9cdbd836b78fd13357477c2be56ed6f7f98746d65a4cd27b215e9187df82700f. 
Sep 11 00:33:03.987118 containerd[1595]: time="2025-09-11T00:33:03.987096040Z" level=info msg="connecting to shim 5d7504b258789fb50777c349fd2d6300553b7d62ac074bb6153d8109d6dd06d2" address="unix:///run/containerd/s/8e390f9e458da87a2be2b395053eae7a1c2181ecb5110fe3fedc65d42dce532c" protocol=ttrpc version=3 Sep 11 00:33:03.989907 systemd[1]: Started cri-containerd-147042415b1df90eaaa9cd8a281186578285bfbdf2b2945911184a5d83ea72f2.scope - libcontainer container 147042415b1df90eaaa9cd8a281186578285bfbdf2b2945911184a5d83ea72f2. Sep 11 00:33:04.009463 systemd[1]: Started cri-containerd-5d7504b258789fb50777c349fd2d6300553b7d62ac074bb6153d8109d6dd06d2.scope - libcontainer container 5d7504b258789fb50777c349fd2d6300553b7d62ac074bb6153d8109d6dd06d2. Sep 11 00:33:04.110192 kubelet[2377]: I0911 00:33:04.110079 2377 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:33:04.111538 kubelet[2377]: E0911 00:33:04.110779 2377 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Sep 11 00:33:04.651975 containerd[1595]: time="2025-09-11T00:33:04.651670675Z" level=info msg="StartContainer for \"9cdbd836b78fd13357477c2be56ed6f7f98746d65a4cd27b215e9187df82700f\" returns successfully" Sep 11 00:33:04.652523 containerd[1595]: time="2025-09-11T00:33:04.652505260Z" level=info msg="StartContainer for \"147042415b1df90eaaa9cd8a281186578285bfbdf2b2945911184a5d83ea72f2\" returns successfully" Sep 11 00:33:04.653064 containerd[1595]: time="2025-09-11T00:33:04.652847211Z" level=info msg="StartContainer for \"5d7504b258789fb50777c349fd2d6300553b7d62ac074bb6153d8109d6dd06d2\" returns successfully" Sep 11 00:33:04.657956 kubelet[2377]: E0911 00:33:04.657936 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 
11 00:33:04.658389 kubelet[2377]: E0911 00:33:04.658340 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:04.903515 kubelet[2377]: E0911 00:33:04.902834 2377 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 11 00:33:04.914266 kubelet[2377]: I0911 00:33:04.914231 2377 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:33:05.023903 kubelet[2377]: I0911 00:33:05.023859 2377 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 11 00:33:05.331122 kubelet[2377]: I0911 00:33:05.331013 2377 apiserver.go:52] "Watching apiserver" Sep 11 00:33:05.341733 kubelet[2377]: I0911 00:33:05.341701 2377 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 11 00:33:05.662719 kubelet[2377]: E0911 00:33:05.662679 2377 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 11 00:33:05.662719 kubelet[2377]: E0911 00:33:05.662709 2377 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:05.662906 kubelet[2377]: E0911 00:33:05.662815 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:05.662906 kubelet[2377]: E0911 00:33:05.662871 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 11 00:33:06.664378 kubelet[2377]: E0911 00:33:06.664343 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:06.834846 systemd[1]: Reload requested from client PID 2652 ('systemctl') (unit session-7.scope)... Sep 11 00:33:06.834861 systemd[1]: Reloading... Sep 11 00:33:06.918353 zram_generator::config[2699]: No configuration found. Sep 11 00:33:07.007121 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:33:07.136164 systemd[1]: Reloading finished in 300 ms. Sep 11 00:33:07.170111 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:33:07.195720 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 00:33:07.196061 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:33:07.196110 systemd[1]: kubelet.service: Consumed 850ms CPU time, 129.7M memory peak. Sep 11 00:33:07.197862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:33:07.417234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:33:07.426624 (kubelet)[2740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:33:07.464552 kubelet[2740]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:33:07.464552 kubelet[2740]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 11 00:33:07.464552 kubelet[2740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:33:07.465023 kubelet[2740]: I0911 00:33:07.464603 2740 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:33:07.472094 kubelet[2740]: I0911 00:33:07.472056 2740 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 11 00:33:07.472094 kubelet[2740]: I0911 00:33:07.472084 2740 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:33:07.472396 kubelet[2740]: I0911 00:33:07.472374 2740 server.go:934] "Client rotation is on, will bootstrap in background" Sep 11 00:33:07.473605 kubelet[2740]: I0911 00:33:07.473588 2740 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 11 00:33:07.475406 kubelet[2740]: I0911 00:33:07.475368 2740 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:33:07.479004 kubelet[2740]: I0911 00:33:07.478971 2740 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:33:07.488263 kubelet[2740]: I0911 00:33:07.488236 2740 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:33:07.489755 kubelet[2740]: I0911 00:33:07.489693 2740 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 11 00:33:07.489956 kubelet[2740]: I0911 00:33:07.489929 2740 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:33:07.490267 kubelet[2740]: I0911 00:33:07.489955 2740 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 11 00:33:07.490374 kubelet[2740]: I0911 00:33:07.490269 2740 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:33:07.490374 kubelet[2740]: I0911 00:33:07.490282 2740 container_manager_linux.go:300] "Creating device plugin manager" Sep 11 00:33:07.490374 kubelet[2740]: I0911 00:33:07.490308 2740 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:33:07.490539 kubelet[2740]: I0911 00:33:07.490519 2740 kubelet.go:408] "Attempting to sync node with API server" Sep 11 00:33:07.490565 kubelet[2740]: I0911 00:33:07.490544 2740 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:33:07.490590 kubelet[2740]: I0911 00:33:07.490586 2740 kubelet.go:314] "Adding apiserver pod source" Sep 11 00:33:07.490614 kubelet[2740]: I0911 00:33:07.490599 2740 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:33:07.491701 kubelet[2740]: I0911 00:33:07.491671 2740 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:33:07.492306 kubelet[2740]: I0911 00:33:07.492281 2740 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:33:07.492734 kubelet[2740]: I0911 00:33:07.492703 2740 server.go:1274] "Started kubelet" Sep 11 00:33:07.493561 kubelet[2740]: I0911 00:33:07.493527 2740 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:33:07.493843 kubelet[2740]: I0911 00:33:07.493781 2740 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:33:07.493843 kubelet[2740]: I0911 00:33:07.493841 2740 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:33:07.493914 kubelet[2740]: I0911 00:33:07.493845 2740 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 
00:33:07.496521 kubelet[2740]: I0911 00:33:07.494052 2740 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:33:07.496521 kubelet[2740]: I0911 00:33:07.494967 2740 server.go:449] "Adding debug handlers to kubelet server" Sep 11 00:33:07.502149 kubelet[2740]: I0911 00:33:07.500973 2740 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 11 00:33:07.502149 kubelet[2740]: I0911 00:33:07.501076 2740 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 11 00:33:07.502149 kubelet[2740]: I0911 00:33:07.501202 2740 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:33:07.502149 kubelet[2740]: I0911 00:33:07.501769 2740 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:33:07.502149 kubelet[2740]: I0911 00:33:07.501864 2740 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:33:07.503666 kubelet[2740]: E0911 00:33:07.503636 2740 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:33:07.503778 kubelet[2740]: I0911 00:33:07.503758 2740 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:33:07.509392 kubelet[2740]: I0911 00:33:07.509363 2740 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:33:07.511990 kubelet[2740]: I0911 00:33:07.511681 2740 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 11 00:33:07.511990 kubelet[2740]: I0911 00:33:07.511701 2740 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 11 00:33:07.511990 kubelet[2740]: I0911 00:33:07.511719 2740 kubelet.go:2321] "Starting kubelet main sync loop" Sep 11 00:33:07.511990 kubelet[2740]: E0911 00:33:07.511766 2740 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:33:07.541029 kubelet[2740]: I0911 00:33:07.541003 2740 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 11 00:33:07.541029 kubelet[2740]: I0911 00:33:07.541019 2740 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 11 00:33:07.541104 kubelet[2740]: I0911 00:33:07.541036 2740 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:33:07.541179 kubelet[2740]: I0911 00:33:07.541159 2740 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 00:33:07.541206 kubelet[2740]: I0911 00:33:07.541173 2740 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 00:33:07.541206 kubelet[2740]: I0911 00:33:07.541189 2740 policy_none.go:49] "None policy: Start" Sep 11 00:33:07.541734 kubelet[2740]: I0911 00:33:07.541713 2740 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 11 00:33:07.541734 kubelet[2740]: I0911 00:33:07.541731 2740 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:33:07.541877 kubelet[2740]: I0911 00:33:07.541853 2740 state_mem.go:75] "Updated machine memory state" Sep 11 00:33:07.545832 kubelet[2740]: I0911 00:33:07.545803 2740 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:33:07.545978 kubelet[2740]: I0911 00:33:07.545956 2740 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:33:07.546003 kubelet[2740]: I0911 00:33:07.545971 2740 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:33:07.546127 kubelet[2740]: I0911 00:33:07.546101 2740 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:33:07.619626 kubelet[2740]: E0911 00:33:07.619585 2740 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 11 00:33:07.652814 kubelet[2740]: I0911 00:33:07.652781 2740 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 00:33:07.658739 kubelet[2740]: I0911 00:33:07.658706 2740 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 11 00:33:07.658904 kubelet[2740]: I0911 00:33:07.658768 2740 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 11 00:33:07.702817 kubelet[2740]: I0911 00:33:07.702710 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89ee58a8132df457549d0f3f14101417-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"89ee58a8132df457549d0f3f14101417\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:33:07.702817 kubelet[2740]: I0911 00:33:07.702742 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89ee58a8132df457549d0f3f14101417-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"89ee58a8132df457549d0f3f14101417\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:33:07.702817 kubelet[2740]: I0911 00:33:07.702764 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89ee58a8132df457549d0f3f14101417-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"89ee58a8132df457549d0f3f14101417\") " 
pod="kube-system/kube-apiserver-localhost" Sep 11 00:33:07.702817 kubelet[2740]: I0911 00:33:07.702784 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:07.702817 kubelet[2740]: I0911 00:33:07.702808 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:07.702980 kubelet[2740]: I0911 00:33:07.702824 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:07.702980 kubelet[2740]: I0911 00:33:07.702840 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:33:07.702980 kubelet[2740]: I0911 00:33:07.702854 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:07.702980 kubelet[2740]: I0911 00:33:07.702874 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:07.917596 kubelet[2740]: E0911 00:33:07.917566 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:07.920869 kubelet[2740]: E0911 00:33:07.920720 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:07.920869 kubelet[2740]: E0911 00:33:07.920740 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:08.491960 kubelet[2740]: I0911 00:33:08.491926 2740 apiserver.go:52] "Watching apiserver" Sep 11 00:33:08.501921 kubelet[2740]: I0911 00:33:08.501882 2740 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 11 00:33:08.523727 kubelet[2740]: E0911 00:33:08.523693 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:08.524111 kubelet[2740]: E0911 00:33:08.524088 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:08.532334 kubelet[2740]: E0911 00:33:08.531467 
2740 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 11 00:33:08.532334 kubelet[2740]: E0911 00:33:08.531620 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:08.573432 kubelet[2740]: I0911 00:33:08.573307 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5732919239999998 podStartE2EDuration="1.573291924s" podCreationTimestamp="2025-09-11 00:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:33:08.565894321 +0000 UTC m=+1.135601359" watchObservedRunningTime="2025-09-11 00:33:08.573291924 +0000 UTC m=+1.142998962" Sep 11 00:33:08.580067 kubelet[2740]: I0911 00:33:08.579819 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.5798104889999998 podStartE2EDuration="2.579810489s" podCreationTimestamp="2025-09-11 00:33:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:33:08.573514792 +0000 UTC m=+1.143221830" watchObservedRunningTime="2025-09-11 00:33:08.579810489 +0000 UTC m=+1.149517527" Sep 11 00:33:08.587999 kubelet[2740]: I0911 00:33:08.587947 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5879313879999999 podStartE2EDuration="1.587931388s" podCreationTimestamp="2025-09-11 00:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:33:08.579977532 +0000 
UTC m=+1.149684570" watchObservedRunningTime="2025-09-11 00:33:08.587931388 +0000 UTC m=+1.157638426" Sep 11 00:33:09.525426 kubelet[2740]: E0911 00:33:09.525397 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:09.525831 kubelet[2740]: E0911 00:33:09.525530 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:10.261032 kubelet[2740]: E0911 00:33:10.260998 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:10.527467 kubelet[2740]: E0911 00:33:10.527342 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:13.679395 kubelet[2740]: I0911 00:33:13.679289 2740 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 00:33:13.680120 containerd[1595]: time="2025-09-11T00:33:13.680077924Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 11 00:33:13.680560 kubelet[2740]: I0911 00:33:13.680362 2740 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 00:33:14.568234 systemd[1]: Created slice kubepods-besteffort-pod35edc47f_42a0_45e8_81df_0734685e08ba.slice - libcontainer container kubepods-besteffort-pod35edc47f_42a0_45e8_81df_0734685e08ba.slice. 
Sep 11 00:33:14.650671 kubelet[2740]: I0911 00:33:14.650624 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m52t\" (UniqueName: \"kubernetes.io/projected/35edc47f-42a0-45e8-81df-0734685e08ba-kube-api-access-9m52t\") pod \"kube-proxy-wpqz8\" (UID: \"35edc47f-42a0-45e8-81df-0734685e08ba\") " pod="kube-system/kube-proxy-wpqz8" Sep 11 00:33:14.650671 kubelet[2740]: I0911 00:33:14.650664 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/35edc47f-42a0-45e8-81df-0734685e08ba-kube-proxy\") pod \"kube-proxy-wpqz8\" (UID: \"35edc47f-42a0-45e8-81df-0734685e08ba\") " pod="kube-system/kube-proxy-wpqz8" Sep 11 00:33:14.650671 kubelet[2740]: I0911 00:33:14.650681 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35edc47f-42a0-45e8-81df-0734685e08ba-xtables-lock\") pod \"kube-proxy-wpqz8\" (UID: \"35edc47f-42a0-45e8-81df-0734685e08ba\") " pod="kube-system/kube-proxy-wpqz8" Sep 11 00:33:14.650865 kubelet[2740]: I0911 00:33:14.650695 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35edc47f-42a0-45e8-81df-0734685e08ba-lib-modules\") pod \"kube-proxy-wpqz8\" (UID: \"35edc47f-42a0-45e8-81df-0734685e08ba\") " pod="kube-system/kube-proxy-wpqz8" Sep 11 00:33:14.786388 systemd[1]: Created slice kubepods-besteffort-pod3920e548_5650_4e6f_9760_4d9954fb347a.slice - libcontainer container kubepods-besteffort-pod3920e548_5650_4e6f_9760_4d9954fb347a.slice. 
Sep 11 00:33:14.852260 kubelet[2740]: I0911 00:33:14.852136 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3920e548-5650-4e6f-9760-4d9954fb347a-var-lib-calico\") pod \"tigera-operator-58fc44c59b-vx2vm\" (UID: \"3920e548-5650-4e6f-9760-4d9954fb347a\") " pod="tigera-operator/tigera-operator-58fc44c59b-vx2vm" Sep 11 00:33:14.852260 kubelet[2740]: I0911 00:33:14.852173 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nk5w\" (UniqueName: \"kubernetes.io/projected/3920e548-5650-4e6f-9760-4d9954fb347a-kube-api-access-4nk5w\") pod \"tigera-operator-58fc44c59b-vx2vm\" (UID: \"3920e548-5650-4e6f-9760-4d9954fb347a\") " pod="tigera-operator/tigera-operator-58fc44c59b-vx2vm" Sep 11 00:33:14.877363 kubelet[2740]: E0911 00:33:14.877304 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:14.877917 containerd[1595]: time="2025-09-11T00:33:14.877815484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wpqz8,Uid:35edc47f-42a0-45e8-81df-0734685e08ba,Namespace:kube-system,Attempt:0,}" Sep 11 00:33:14.897573 containerd[1595]: time="2025-09-11T00:33:14.897506738Z" level=info msg="connecting to shim 47b56583419590ee7b7365a75474d0e3f78a75e06565669a4136f7ff811190e3" address="unix:///run/containerd/s/64b67b53660ac82514bdddd0fb4cf070a2589d2d4c80dd00599a535cfa7236b9" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:14.924483 systemd[1]: Started cri-containerd-47b56583419590ee7b7365a75474d0e3f78a75e06565669a4136f7ff811190e3.scope - libcontainer container 47b56583419590ee7b7365a75474d0e3f78a75e06565669a4136f7ff811190e3. 
Sep 11 00:33:14.948828 containerd[1595]: time="2025-09-11T00:33:14.948771548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wpqz8,Uid:35edc47f-42a0-45e8-81df-0734685e08ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"47b56583419590ee7b7365a75474d0e3f78a75e06565669a4136f7ff811190e3\"" Sep 11 00:33:14.949530 kubelet[2740]: E0911 00:33:14.949502 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:14.952298 containerd[1595]: time="2025-09-11T00:33:14.951582678Z" level=info msg="CreateContainer within sandbox \"47b56583419590ee7b7365a75474d0e3f78a75e06565669a4136f7ff811190e3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 11 00:33:14.965022 containerd[1595]: time="2025-09-11T00:33:14.963946395Z" level=info msg="Container 5cfbc649fbe2eb904b4e36ac3cd14a9168b6a158b87477fa08eac71b684a7695: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:14.972356 containerd[1595]: time="2025-09-11T00:33:14.972327778Z" level=info msg="CreateContainer within sandbox \"47b56583419590ee7b7365a75474d0e3f78a75e06565669a4136f7ff811190e3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5cfbc649fbe2eb904b4e36ac3cd14a9168b6a158b87477fa08eac71b684a7695\"" Sep 11 00:33:14.973658 containerd[1595]: time="2025-09-11T00:33:14.972860352Z" level=info msg="StartContainer for \"5cfbc649fbe2eb904b4e36ac3cd14a9168b6a158b87477fa08eac71b684a7695\"" Sep 11 00:33:14.974050 containerd[1595]: time="2025-09-11T00:33:14.974018807Z" level=info msg="connecting to shim 5cfbc649fbe2eb904b4e36ac3cd14a9168b6a158b87477fa08eac71b684a7695" address="unix:///run/containerd/s/64b67b53660ac82514bdddd0fb4cf070a2589d2d4c80dd00599a535cfa7236b9" protocol=ttrpc version=3 Sep 11 00:33:14.998442 systemd[1]: Started cri-containerd-5cfbc649fbe2eb904b4e36ac3cd14a9168b6a158b87477fa08eac71b684a7695.scope - libcontainer 
container 5cfbc649fbe2eb904b4e36ac3cd14a9168b6a158b87477fa08eac71b684a7695. Sep 11 00:33:15.038560 containerd[1595]: time="2025-09-11T00:33:15.038522967Z" level=info msg="StartContainer for \"5cfbc649fbe2eb904b4e36ac3cd14a9168b6a158b87477fa08eac71b684a7695\" returns successfully" Sep 11 00:33:15.089584 containerd[1595]: time="2025-09-11T00:33:15.089536007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-vx2vm,Uid:3920e548-5650-4e6f-9760-4d9954fb347a,Namespace:tigera-operator,Attempt:0,}" Sep 11 00:33:15.111351 containerd[1595]: time="2025-09-11T00:33:15.110211212Z" level=info msg="connecting to shim b0bdbabf9a887f2334a4634d7b325a08370673fbaded4fa45e09003ed2eb35b6" address="unix:///run/containerd/s/583a8f8fd892118bcc382fc8339611e011aff82170a0aa79b17b12a848e81794" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:15.135430 systemd[1]: Started cri-containerd-b0bdbabf9a887f2334a4634d7b325a08370673fbaded4fa45e09003ed2eb35b6.scope - libcontainer container b0bdbabf9a887f2334a4634d7b325a08370673fbaded4fa45e09003ed2eb35b6. 
Sep 11 00:33:15.180508 containerd[1595]: time="2025-09-11T00:33:15.180474342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-vx2vm,Uid:3920e548-5650-4e6f-9760-4d9954fb347a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b0bdbabf9a887f2334a4634d7b325a08370673fbaded4fa45e09003ed2eb35b6\"" Sep 11 00:33:15.182120 containerd[1595]: time="2025-09-11T00:33:15.182089802Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 11 00:33:15.535132 kubelet[2740]: E0911 00:33:15.535101 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:15.543000 kubelet[2740]: I0911 00:33:15.542948 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wpqz8" podStartSLOduration=1.542932451 podStartE2EDuration="1.542932451s" podCreationTimestamp="2025-09-11 00:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:33:15.542479863 +0000 UTC m=+8.112186891" watchObservedRunningTime="2025-09-11 00:33:15.542932451 +0000 UTC m=+8.112639489" Sep 11 00:33:16.505943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1655466910.mount: Deactivated successfully. 
Sep 11 00:33:17.429616 containerd[1595]: time="2025-09-11T00:33:17.429566861Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:17.430377 containerd[1595]: time="2025-09-11T00:33:17.430330633Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 11 00:33:17.431477 containerd[1595]: time="2025-09-11T00:33:17.431445987Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:17.433455 containerd[1595]: time="2025-09-11T00:33:17.433429522Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:17.433989 containerd[1595]: time="2025-09-11T00:33:17.433957131Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.251832163s" Sep 11 00:33:17.434026 containerd[1595]: time="2025-09-11T00:33:17.433992960Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 11 00:33:17.435616 containerd[1595]: time="2025-09-11T00:33:17.435590216Z" level=info msg="CreateContainer within sandbox \"b0bdbabf9a887f2334a4634d7b325a08370673fbaded4fa45e09003ed2eb35b6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 11 00:33:17.443647 containerd[1595]: time="2025-09-11T00:33:17.443615911Z" level=info msg="Container 
c7e118633f9f3feeb4062788b0a69c80681c9cb143c652c48a1e51532fbaf599: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:17.449724 containerd[1595]: time="2025-09-11T00:33:17.449690424Z" level=info msg="CreateContainer within sandbox \"b0bdbabf9a887f2334a4634d7b325a08370673fbaded4fa45e09003ed2eb35b6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c7e118633f9f3feeb4062788b0a69c80681c9cb143c652c48a1e51532fbaf599\"" Sep 11 00:33:17.450170 containerd[1595]: time="2025-09-11T00:33:17.450137579Z" level=info msg="StartContainer for \"c7e118633f9f3feeb4062788b0a69c80681c9cb143c652c48a1e51532fbaf599\"" Sep 11 00:33:17.451024 containerd[1595]: time="2025-09-11T00:33:17.451002144Z" level=info msg="connecting to shim c7e118633f9f3feeb4062788b0a69c80681c9cb143c652c48a1e51532fbaf599" address="unix:///run/containerd/s/583a8f8fd892118bcc382fc8339611e011aff82170a0aa79b17b12a848e81794" protocol=ttrpc version=3 Sep 11 00:33:17.500453 systemd[1]: Started cri-containerd-c7e118633f9f3feeb4062788b0a69c80681c9cb143c652c48a1e51532fbaf599.scope - libcontainer container c7e118633f9f3feeb4062788b0a69c80681c9cb143c652c48a1e51532fbaf599. 
Sep 11 00:33:17.529552 containerd[1595]: time="2025-09-11T00:33:17.529515321Z" level=info msg="StartContainer for \"c7e118633f9f3feeb4062788b0a69c80681c9cb143c652c48a1e51532fbaf599\" returns successfully" Sep 11 00:33:19.114485 kubelet[2740]: E0911 00:33:19.114445 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:19.124337 kubelet[2740]: I0911 00:33:19.124272 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-vx2vm" podStartSLOduration=2.871258493 podStartE2EDuration="5.12425716s" podCreationTimestamp="2025-09-11 00:33:14 +0000 UTC" firstStartedPulling="2025-09-11 00:33:15.181595373 +0000 UTC m=+7.751302401" lastFinishedPulling="2025-09-11 00:33:17.43459403 +0000 UTC m=+10.004301068" observedRunningTime="2025-09-11 00:33:17.550742555 +0000 UTC m=+10.120449593" watchObservedRunningTime="2025-09-11 00:33:19.12425716 +0000 UTC m=+11.693964198" Sep 11 00:33:19.261631 kubelet[2740]: E0911 00:33:19.261591 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:20.265393 kubelet[2740]: E0911 00:33:20.265357 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:22.596510 sudo[1806]: pam_unix(sudo:session): session closed for user root Sep 11 00:33:22.599046 sshd[1805]: Connection closed by 10.0.0.1 port 33294 Sep 11 00:33:22.599548 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:22.604138 systemd[1]: sshd@6-10.0.0.147:22-10.0.0.1:33294.service: Deactivated successfully. Sep 11 00:33:22.606764 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 11 00:33:22.606986 systemd[1]: session-7.scope: Consumed 5.671s CPU time, 228.1M memory peak. Sep 11 00:33:22.608600 systemd-logind[1583]: Session 7 logged out. Waiting for processes to exit. Sep 11 00:33:22.609660 systemd-logind[1583]: Removed session 7. Sep 11 00:33:23.643997 update_engine[1587]: I20250911 00:33:23.643357 1587 update_attempter.cc:509] Updating boot flags... Sep 11 00:33:25.012954 systemd[1]: Created slice kubepods-besteffort-podcc46e1e6_1c81_4b9d_ba34_22451aa18502.slice - libcontainer container kubepods-besteffort-podcc46e1e6_1c81_4b9d_ba34_22451aa18502.slice. Sep 11 00:33:25.018252 kubelet[2740]: I0911 00:33:25.016593 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr2np\" (UniqueName: \"kubernetes.io/projected/cc46e1e6-1c81-4b9d-ba34-22451aa18502-kube-api-access-xr2np\") pod \"calico-typha-6b9bddfc8-wptk7\" (UID: \"cc46e1e6-1c81-4b9d-ba34-22451aa18502\") " pod="calico-system/calico-typha-6b9bddfc8-wptk7" Sep 11 00:33:25.018536 kubelet[2740]: I0911 00:33:25.016648 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cc46e1e6-1c81-4b9d-ba34-22451aa18502-typha-certs\") pod \"calico-typha-6b9bddfc8-wptk7\" (UID: \"cc46e1e6-1c81-4b9d-ba34-22451aa18502\") " pod="calico-system/calico-typha-6b9bddfc8-wptk7" Sep 11 00:33:25.018536 kubelet[2740]: I0911 00:33:25.018366 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc46e1e6-1c81-4b9d-ba34-22451aa18502-tigera-ca-bundle\") pod \"calico-typha-6b9bddfc8-wptk7\" (UID: \"cc46e1e6-1c81-4b9d-ba34-22451aa18502\") " pod="calico-system/calico-typha-6b9bddfc8-wptk7" Sep 11 00:33:25.319847 kubelet[2740]: E0911 00:33:25.319716 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:25.321116 containerd[1595]: time="2025-09-11T00:33:25.320287255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b9bddfc8-wptk7,Uid:cc46e1e6-1c81-4b9d-ba34-22451aa18502,Namespace:calico-system,Attempt:0,}" Sep 11 00:33:25.460822 systemd[1]: Created slice kubepods-besteffort-pod05a23cd0_dcbc_489b_ab6a_f0687b9f4791.slice - libcontainer container kubepods-besteffort-pod05a23cd0_dcbc_489b_ab6a_f0687b9f4791.slice. Sep 11 00:33:25.463729 containerd[1595]: time="2025-09-11T00:33:25.463606764Z" level=info msg="connecting to shim 4a2f6fce91ba67ad3038e4cf805dac1152ad6acf28dda98f2e2918dce816d46f" address="unix:///run/containerd/s/0a9060c7c052cf527c2ec03a9efd53d7d3fe32e0131b99c6763212bf2e757d2a" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:25.491539 systemd[1]: Started cri-containerd-4a2f6fce91ba67ad3038e4cf805dac1152ad6acf28dda98f2e2918dce816d46f.scope - libcontainer container 4a2f6fce91ba67ad3038e4cf805dac1152ad6acf28dda98f2e2918dce816d46f. 
Sep 11 00:33:25.522425 kubelet[2740]: I0911 00:33:25.522391 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05a23cd0-dcbc-489b-ab6a-f0687b9f4791-xtables-lock\") pod \"calico-node-qj774\" (UID: \"05a23cd0-dcbc-489b-ab6a-f0687b9f4791\") " pod="calico-system/calico-node-qj774" Sep 11 00:33:25.522843 kubelet[2740]: I0911 00:33:25.522611 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/05a23cd0-dcbc-489b-ab6a-f0687b9f4791-cni-bin-dir\") pod \"calico-node-qj774\" (UID: \"05a23cd0-dcbc-489b-ab6a-f0687b9f4791\") " pod="calico-system/calico-node-qj774" Sep 11 00:33:25.522843 kubelet[2740]: I0911 00:33:25.522635 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05a23cd0-dcbc-489b-ab6a-f0687b9f4791-lib-modules\") pod \"calico-node-qj774\" (UID: \"05a23cd0-dcbc-489b-ab6a-f0687b9f4791\") " pod="calico-system/calico-node-qj774" Sep 11 00:33:25.522843 kubelet[2740]: I0911 00:33:25.522651 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/05a23cd0-dcbc-489b-ab6a-f0687b9f4791-policysync\") pod \"calico-node-qj774\" (UID: \"05a23cd0-dcbc-489b-ab6a-f0687b9f4791\") " pod="calico-system/calico-node-qj774" Sep 11 00:33:25.522843 kubelet[2740]: I0911 00:33:25.522664 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/05a23cd0-dcbc-489b-ab6a-f0687b9f4791-cni-net-dir\") pod \"calico-node-qj774\" (UID: \"05a23cd0-dcbc-489b-ab6a-f0687b9f4791\") " pod="calico-system/calico-node-qj774" Sep 11 00:33:25.522843 kubelet[2740]: I0911 00:33:25.522680 2740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/05a23cd0-dcbc-489b-ab6a-f0687b9f4791-flexvol-driver-host\") pod \"calico-node-qj774\" (UID: \"05a23cd0-dcbc-489b-ab6a-f0687b9f4791\") " pod="calico-system/calico-node-qj774" Sep 11 00:33:25.522972 kubelet[2740]: I0911 00:33:25.522723 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/05a23cd0-dcbc-489b-ab6a-f0687b9f4791-node-certs\") pod \"calico-node-qj774\" (UID: \"05a23cd0-dcbc-489b-ab6a-f0687b9f4791\") " pod="calico-system/calico-node-qj774" Sep 11 00:33:25.522972 kubelet[2740]: I0911 00:33:25.522736 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/05a23cd0-dcbc-489b-ab6a-f0687b9f4791-var-run-calico\") pod \"calico-node-qj774\" (UID: \"05a23cd0-dcbc-489b-ab6a-f0687b9f4791\") " pod="calico-system/calico-node-qj774" Sep 11 00:33:25.522972 kubelet[2740]: I0911 00:33:25.522755 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mddpq\" (UniqueName: \"kubernetes.io/projected/05a23cd0-dcbc-489b-ab6a-f0687b9f4791-kube-api-access-mddpq\") pod \"calico-node-qj774\" (UID: \"05a23cd0-dcbc-489b-ab6a-f0687b9f4791\") " pod="calico-system/calico-node-qj774" Sep 11 00:33:25.522972 kubelet[2740]: I0911 00:33:25.522772 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/05a23cd0-dcbc-489b-ab6a-f0687b9f4791-cni-log-dir\") pod \"calico-node-qj774\" (UID: \"05a23cd0-dcbc-489b-ab6a-f0687b9f4791\") " pod="calico-system/calico-node-qj774" Sep 11 00:33:25.522972 kubelet[2740]: I0911 00:33:25.522785 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05a23cd0-dcbc-489b-ab6a-f0687b9f4791-tigera-ca-bundle\") pod \"calico-node-qj774\" (UID: \"05a23cd0-dcbc-489b-ab6a-f0687b9f4791\") " pod="calico-system/calico-node-qj774" Sep 11 00:33:25.523080 kubelet[2740]: I0911 00:33:25.522800 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05a23cd0-dcbc-489b-ab6a-f0687b9f4791-var-lib-calico\") pod \"calico-node-qj774\" (UID: \"05a23cd0-dcbc-489b-ab6a-f0687b9f4791\") " pod="calico-system/calico-node-qj774" Sep 11 00:33:25.541534 containerd[1595]: time="2025-09-11T00:33:25.541496147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b9bddfc8-wptk7,Uid:cc46e1e6-1c81-4b9d-ba34-22451aa18502,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a2f6fce91ba67ad3038e4cf805dac1152ad6acf28dda98f2e2918dce816d46f\"" Sep 11 00:33:25.542386 kubelet[2740]: E0911 00:33:25.542350 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:25.543138 containerd[1595]: time="2025-09-11T00:33:25.543106654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 11 00:33:25.629927 kubelet[2740]: E0911 00:33:25.629869 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.629927 kubelet[2740]: W0911 00:33:25.629897 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.629927 kubelet[2740]: E0911 00:33:25.629948 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.636826 kubelet[2740]: E0911 00:33:25.636779 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.636826 kubelet[2740]: W0911 00:33:25.636802 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.636826 kubelet[2740]: E0911 00:33:25.636831 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.743761 kubelet[2740]: E0911 00:33:25.743101 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sqkl" podUID="5c283416-98c3-4e6d-ae4d-ab8f1e80bed3" Sep 11 00:33:25.764969 containerd[1595]: time="2025-09-11T00:33:25.764928639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qj774,Uid:05a23cd0-dcbc-489b-ab6a-f0687b9f4791,Namespace:calico-system,Attempt:0,}" Sep 11 00:33:25.788744 containerd[1595]: time="2025-09-11T00:33:25.788684658Z" level=info msg="connecting to shim 1da1f9e9b15271f5643daf9657647899b628db86b15ae629da5ae5ad1e1b9dcf" address="unix:///run/containerd/s/95180410fb225c4bda043e97856aece7732cb47ea47682b6974d0f75b3dd0283" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:25.810472 kubelet[2740]: E0911 00:33:25.810440 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.810472 kubelet[2740]: W0911 00:33:25.810464 2740 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.810619 kubelet[2740]: E0911 00:33:25.810484 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.811038 kubelet[2740]: E0911 00:33:25.811011 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.811038 kubelet[2740]: W0911 00:33:25.811025 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.811038 kubelet[2740]: E0911 00:33:25.811035 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.811269 kubelet[2740]: E0911 00:33:25.811243 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.811269 kubelet[2740]: W0911 00:33:25.811257 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.811269 kubelet[2740]: E0911 00:33:25.811267 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.811509 kubelet[2740]: E0911 00:33:25.811482 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.811509 kubelet[2740]: W0911 00:33:25.811497 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.811509 kubelet[2740]: E0911 00:33:25.811505 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.811708 kubelet[2740]: E0911 00:33:25.811691 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.811708 kubelet[2740]: W0911 00:33:25.811702 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.811708 kubelet[2740]: E0911 00:33:25.811709 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.811910 kubelet[2740]: E0911 00:33:25.811894 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.811910 kubelet[2740]: W0911 00:33:25.811905 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.811910 kubelet[2740]: E0911 00:33:25.811913 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.812092 kubelet[2740]: E0911 00:33:25.812076 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.812092 kubelet[2740]: W0911 00:33:25.812087 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.812141 kubelet[2740]: E0911 00:33:25.812095 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.812278 kubelet[2740]: E0911 00:33:25.812248 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.812278 kubelet[2740]: W0911 00:33:25.812259 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.812278 kubelet[2740]: E0911 00:33:25.812267 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.812481 kubelet[2740]: E0911 00:33:25.812465 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.812481 kubelet[2740]: W0911 00:33:25.812475 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.812481 kubelet[2740]: E0911 00:33:25.812483 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.812664 kubelet[2740]: E0911 00:33:25.812650 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.812664 kubelet[2740]: W0911 00:33:25.812660 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.812711 kubelet[2740]: E0911 00:33:25.812667 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.812861 kubelet[2740]: E0911 00:33:25.812846 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.812861 kubelet[2740]: W0911 00:33:25.812856 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.812914 kubelet[2740]: E0911 00:33:25.812863 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.813035 kubelet[2740]: E0911 00:33:25.813020 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.813035 kubelet[2740]: W0911 00:33:25.813031 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.813075 kubelet[2740]: E0911 00:33:25.813037 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.813220 kubelet[2740]: E0911 00:33:25.813204 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.813220 kubelet[2740]: W0911 00:33:25.813214 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.813274 kubelet[2740]: E0911 00:33:25.813221 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.813422 kubelet[2740]: E0911 00:33:25.813407 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.813422 kubelet[2740]: W0911 00:33:25.813418 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.813464 kubelet[2740]: E0911 00:33:25.813426 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.813606 kubelet[2740]: E0911 00:33:25.813591 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.813606 kubelet[2740]: W0911 00:33:25.813602 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.813660 kubelet[2740]: E0911 00:33:25.813609 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.813776 kubelet[2740]: E0911 00:33:25.813761 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.813776 kubelet[2740]: W0911 00:33:25.813771 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.813831 kubelet[2740]: E0911 00:33:25.813779 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.813973 kubelet[2740]: E0911 00:33:25.813957 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.813973 kubelet[2740]: W0911 00:33:25.813967 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.814031 kubelet[2740]: E0911 00:33:25.813975 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.814152 kubelet[2740]: E0911 00:33:25.814137 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.814152 kubelet[2740]: W0911 00:33:25.814147 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.814195 kubelet[2740]: E0911 00:33:25.814154 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.814705 kubelet[2740]: E0911 00:33:25.814340 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.814705 kubelet[2740]: W0911 00:33:25.814350 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.814705 kubelet[2740]: E0911 00:33:25.814357 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.814705 kubelet[2740]: E0911 00:33:25.814523 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.814705 kubelet[2740]: W0911 00:33:25.814530 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.814705 kubelet[2740]: E0911 00:33:25.814538 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.814530 systemd[1]: Started cri-containerd-1da1f9e9b15271f5643daf9657647899b628db86b15ae629da5ae5ad1e1b9dcf.scope - libcontainer container 1da1f9e9b15271f5643daf9657647899b628db86b15ae629da5ae5ad1e1b9dcf. Sep 11 00:33:25.825288 kubelet[2740]: E0911 00:33:25.825250 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.825288 kubelet[2740]: W0911 00:33:25.825271 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.825288 kubelet[2740]: E0911 00:33:25.825287 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.825488 kubelet[2740]: I0911 00:33:25.825346 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txs6t\" (UniqueName: \"kubernetes.io/projected/5c283416-98c3-4e6d-ae4d-ab8f1e80bed3-kube-api-access-txs6t\") pod \"csi-node-driver-8sqkl\" (UID: \"5c283416-98c3-4e6d-ae4d-ab8f1e80bed3\") " pod="calico-system/csi-node-driver-8sqkl" Sep 11 00:33:25.825613 kubelet[2740]: E0911 00:33:25.825593 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.825613 kubelet[2740]: W0911 00:33:25.825607 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.826538 kubelet[2740]: E0911 00:33:25.825634 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.826538 kubelet[2740]: I0911 00:33:25.825650 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c283416-98c3-4e6d-ae4d-ab8f1e80bed3-kubelet-dir\") pod \"csi-node-driver-8sqkl\" (UID: \"5c283416-98c3-4e6d-ae4d-ab8f1e80bed3\") " pod="calico-system/csi-node-driver-8sqkl" Sep 11 00:33:25.826538 kubelet[2740]: E0911 00:33:25.825916 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.826538 kubelet[2740]: W0911 00:33:25.825923 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.826538 kubelet[2740]: E0911 00:33:25.825935 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.826538 kubelet[2740]: I0911 00:33:25.825950 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5c283416-98c3-4e6d-ae4d-ab8f1e80bed3-socket-dir\") pod \"csi-node-driver-8sqkl\" (UID: \"5c283416-98c3-4e6d-ae4d-ab8f1e80bed3\") " pod="calico-system/csi-node-driver-8sqkl" Sep 11 00:33:25.826538 kubelet[2740]: E0911 00:33:25.826144 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.826538 kubelet[2740]: W0911 00:33:25.826154 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.826538 kubelet[2740]: E0911 00:33:25.826167 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.826735 kubelet[2740]: I0911 00:33:25.826180 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5c283416-98c3-4e6d-ae4d-ab8f1e80bed3-varrun\") pod \"csi-node-driver-8sqkl\" (UID: \"5c283416-98c3-4e6d-ae4d-ab8f1e80bed3\") " pod="calico-system/csi-node-driver-8sqkl" Sep 11 00:33:25.827187 kubelet[2740]: E0911 00:33:25.826807 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.827187 kubelet[2740]: W0911 00:33:25.826833 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.827187 kubelet[2740]: E0911 00:33:25.826848 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.827187 kubelet[2740]: I0911 00:33:25.826864 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5c283416-98c3-4e6d-ae4d-ab8f1e80bed3-registration-dir\") pod \"csi-node-driver-8sqkl\" (UID: \"5c283416-98c3-4e6d-ae4d-ab8f1e80bed3\") " pod="calico-system/csi-node-driver-8sqkl" Sep 11 00:33:25.827187 kubelet[2740]: E0911 00:33:25.827095 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.827187 kubelet[2740]: W0911 00:33:25.827111 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.827187 kubelet[2740]: E0911 00:33:25.827164 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.827568 kubelet[2740]: E0911 00:33:25.827546 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.827568 kubelet[2740]: W0911 00:33:25.827556 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.827696 kubelet[2740]: E0911 00:33:25.827673 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.827922 kubelet[2740]: E0911 00:33:25.827900 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.827922 kubelet[2740]: W0911 00:33:25.827910 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.828056 kubelet[2740]: E0911 00:33:25.828034 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.828631 kubelet[2740]: E0911 00:33:25.828603 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.828631 kubelet[2740]: W0911 00:33:25.828616 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.828779 kubelet[2740]: E0911 00:33:25.828756 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.829048 kubelet[2740]: E0911 00:33:25.829024 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.829048 kubelet[2740]: W0911 00:33:25.829035 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.829199 kubelet[2740]: E0911 00:33:25.829172 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.829463 kubelet[2740]: E0911 00:33:25.829433 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.829463 kubelet[2740]: W0911 00:33:25.829442 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.829463 kubelet[2740]: E0911 00:33:25.829450 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.829802 kubelet[2740]: E0911 00:33:25.829772 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.829802 kubelet[2740]: W0911 00:33:25.829782 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.829802 kubelet[2740]: E0911 00:33:25.829790 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.830176 kubelet[2740]: E0911 00:33:25.830144 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.830176 kubelet[2740]: W0911 00:33:25.830155 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.830176 kubelet[2740]: E0911 00:33:25.830164 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.830559 kubelet[2740]: E0911 00:33:25.830547 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.830693 kubelet[2740]: W0911 00:33:25.830668 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.830693 kubelet[2740]: E0911 00:33:25.830682 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.830993 kubelet[2740]: E0911 00:33:25.830982 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.831093 kubelet[2740]: W0911 00:33:25.831065 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.831093 kubelet[2740]: E0911 00:33:25.831078 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.848203 containerd[1595]: time="2025-09-11T00:33:25.848135926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qj774,Uid:05a23cd0-dcbc-489b-ab6a-f0687b9f4791,Namespace:calico-system,Attempt:0,} returns sandbox id \"1da1f9e9b15271f5643daf9657647899b628db86b15ae629da5ae5ad1e1b9dcf\"" Sep 11 00:33:25.928264 kubelet[2740]: E0911 00:33:25.928148 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.928264 kubelet[2740]: W0911 00:33:25.928169 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.928264 kubelet[2740]: E0911 00:33:25.928187 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.928444 kubelet[2740]: E0911 00:33:25.928418 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.928444 kubelet[2740]: W0911 00:33:25.928427 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.928444 kubelet[2740]: E0911 00:33:25.928440 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.928711 kubelet[2740]: E0911 00:33:25.928621 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.928711 kubelet[2740]: W0911 00:33:25.928634 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.928711 kubelet[2740]: E0911 00:33:25.928697 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.929029 kubelet[2740]: E0911 00:33:25.928965 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.929029 kubelet[2740]: W0911 00:33:25.928991 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.929265 kubelet[2740]: E0911 00:33:25.929073 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.929395 kubelet[2740]: E0911 00:33:25.929360 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.929395 kubelet[2740]: W0911 00:33:25.929385 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.929567 kubelet[2740]: E0911 00:33:25.929417 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.929713 kubelet[2740]: E0911 00:33:25.929653 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.929713 kubelet[2740]: W0911 00:33:25.929709 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.929784 kubelet[2740]: E0911 00:33:25.929732 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.929997 kubelet[2740]: E0911 00:33:25.929981 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.929997 kubelet[2740]: W0911 00:33:25.929993 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.930131 kubelet[2740]: E0911 00:33:25.930070 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.935115 kubelet[2740]: E0911 00:33:25.935079 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.935115 kubelet[2740]: W0911 00:33:25.935104 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.935873 kubelet[2740]: E0911 00:33:25.935807 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.935873 kubelet[2740]: E0911 00:33:25.935846 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.935873 kubelet[2740]: W0911 00:33:25.935856 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.936019 kubelet[2740]: E0911 00:33:25.935901 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.936516 kubelet[2740]: E0911 00:33:25.936485 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.936516 kubelet[2740]: W0911 00:33:25.936498 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.936692 kubelet[2740]: E0911 00:33:25.936662 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.936778 kubelet[2740]: E0911 00:33:25.936763 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.936778 kubelet[2740]: W0911 00:33:25.936774 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.936858 kubelet[2740]: E0911 00:33:25.936832 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.937040 kubelet[2740]: E0911 00:33:25.937025 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.937040 kubelet[2740]: W0911 00:33:25.937036 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.937099 kubelet[2740]: E0911 00:33:25.937067 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.937378 kubelet[2740]: E0911 00:33:25.937360 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.937378 kubelet[2740]: W0911 00:33:25.937376 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.937429 kubelet[2740]: E0911 00:33:25.937395 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.937628 kubelet[2740]: E0911 00:33:25.937612 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.937628 kubelet[2740]: W0911 00:33:25.937624 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.937684 kubelet[2740]: E0911 00:33:25.937656 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.937893 kubelet[2740]: E0911 00:33:25.937869 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.937893 kubelet[2740]: W0911 00:33:25.937881 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.938049 kubelet[2740]: E0911 00:33:25.937921 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.938076 kubelet[2740]: E0911 00:33:25.938059 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.938076 kubelet[2740]: W0911 00:33:25.938067 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.938178 kubelet[2740]: E0911 00:33:25.938136 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.938357 kubelet[2740]: E0911 00:33:25.938342 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.938357 kubelet[2740]: W0911 00:33:25.938353 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.938418 kubelet[2740]: E0911 00:33:25.938402 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.938621 kubelet[2740]: E0911 00:33:25.938604 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.938621 kubelet[2740]: W0911 00:33:25.938614 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.938804 kubelet[2740]: E0911 00:33:25.938748 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.938871 kubelet[2740]: E0911 00:33:25.938844 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.938871 kubelet[2740]: W0911 00:33:25.938853 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.938911 kubelet[2740]: E0911 00:33:25.938880 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.939150 kubelet[2740]: E0911 00:33:25.939126 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.939150 kubelet[2740]: W0911 00:33:25.939143 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.939283 kubelet[2740]: E0911 00:33:25.939265 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.939624 kubelet[2740]: E0911 00:33:25.939608 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.939624 kubelet[2740]: W0911 00:33:25.939620 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.939676 kubelet[2740]: E0911 00:33:25.939634 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.939820 kubelet[2740]: E0911 00:33:25.939794 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.939820 kubelet[2740]: W0911 00:33:25.939806 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.939901 kubelet[2740]: E0911 00:33:25.939830 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.940096 kubelet[2740]: E0911 00:33:25.940076 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.940096 kubelet[2740]: W0911 00:33:25.940089 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.940264 kubelet[2740]: E0911 00:33:25.940134 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.941357 kubelet[2740]: E0911 00:33:25.940397 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.941357 kubelet[2740]: W0911 00:33:25.940409 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.941357 kubelet[2740]: E0911 00:33:25.940419 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:25.941357 kubelet[2740]: E0911 00:33:25.940934 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.941357 kubelet[2740]: W0911 00:33:25.940943 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.941357 kubelet[2740]: E0911 00:33:25.940952 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.946215 kubelet[2740]: E0911 00:33:25.946188 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:25.946215 kubelet[2740]: W0911 00:33:25.946203 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:25.946215 kubelet[2740]: E0911 00:33:25.946217 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:27.063669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232585975.mount: Deactivated successfully. 
Sep 11 00:33:27.391593 containerd[1595]: time="2025-09-11T00:33:27.391540234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:27.395066 containerd[1595]: time="2025-09-11T00:33:27.395007445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 11 00:33:27.396045 containerd[1595]: time="2025-09-11T00:33:27.395966092Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:27.397785 containerd[1595]: time="2025-09-11T00:33:27.397739933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:27.398244 containerd[1595]: time="2025-09-11T00:33:27.398195447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 1.855053436s" Sep 11 00:33:27.398244 containerd[1595]: time="2025-09-11T00:33:27.398241344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 11 00:33:27.399090 containerd[1595]: time="2025-09-11T00:33:27.399048463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 11 00:33:27.407625 containerd[1595]: time="2025-09-11T00:33:27.407581786Z" level=info msg="CreateContainer within sandbox \"4a2f6fce91ba67ad3038e4cf805dac1152ad6acf28dda98f2e2918dce816d46f\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 11 00:33:27.414023 containerd[1595]: time="2025-09-11T00:33:27.413975835Z" level=info msg="Container a9c7ff3918a83230c81b8a7eb9d3b88032d923057e4fcc400fa9b4044d9fbcd7: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:27.421948 containerd[1595]: time="2025-09-11T00:33:27.421903139Z" level=info msg="CreateContainer within sandbox \"4a2f6fce91ba67ad3038e4cf805dac1152ad6acf28dda98f2e2918dce816d46f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a9c7ff3918a83230c81b8a7eb9d3b88032d923057e4fcc400fa9b4044d9fbcd7\"" Sep 11 00:33:27.423506 containerd[1595]: time="2025-09-11T00:33:27.423468235Z" level=info msg="StartContainer for \"a9c7ff3918a83230c81b8a7eb9d3b88032d923057e4fcc400fa9b4044d9fbcd7\"" Sep 11 00:33:27.424650 containerd[1595]: time="2025-09-11T00:33:27.424611162Z" level=info msg="connecting to shim a9c7ff3918a83230c81b8a7eb9d3b88032d923057e4fcc400fa9b4044d9fbcd7" address="unix:///run/containerd/s/0a9060c7c052cf527c2ec03a9efd53d7d3fe32e0131b99c6763212bf2e757d2a" protocol=ttrpc version=3 Sep 11 00:33:27.449546 systemd[1]: Started cri-containerd-a9c7ff3918a83230c81b8a7eb9d3b88032d923057e4fcc400fa9b4044d9fbcd7.scope - libcontainer container a9c7ff3918a83230c81b8a7eb9d3b88032d923057e4fcc400fa9b4044d9fbcd7. 
Sep 11 00:33:27.512736 kubelet[2740]: E0911 00:33:27.512670 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sqkl" podUID="5c283416-98c3-4e6d-ae4d-ab8f1e80bed3" Sep 11 00:33:27.525843 containerd[1595]: time="2025-09-11T00:33:27.525781074Z" level=info msg="StartContainer for \"a9c7ff3918a83230c81b8a7eb9d3b88032d923057e4fcc400fa9b4044d9fbcd7\" returns successfully" Sep 11 00:33:27.562088 kubelet[2740]: E0911 00:33:27.562049 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:27.624262 kubelet[2740]: E0911 00:33:27.624216 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.624262 kubelet[2740]: W0911 00:33:27.624242 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.624262 kubelet[2740]: E0911 00:33:27.624262 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:27.624612 kubelet[2740]: E0911 00:33:27.624472 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.624612 kubelet[2740]: W0911 00:33:27.624489 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.624612 kubelet[2740]: E0911 00:33:27.624497 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:27.624710 kubelet[2740]: E0911 00:33:27.624664 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.624710 kubelet[2740]: W0911 00:33:27.624671 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.624710 kubelet[2740]: E0911 00:33:27.624678 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:27.624988 kubelet[2740]: E0911 00:33:27.624888 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.624988 kubelet[2740]: W0911 00:33:27.624901 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.624988 kubelet[2740]: E0911 00:33:27.624909 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:27.625251 kubelet[2740]: E0911 00:33:27.625147 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.625251 kubelet[2740]: W0911 00:33:27.625159 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.625251 kubelet[2740]: E0911 00:33:27.625168 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:27.625391 kubelet[2740]: E0911 00:33:27.625370 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.625391 kubelet[2740]: W0911 00:33:27.625384 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.625481 kubelet[2740]: E0911 00:33:27.625392 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:27.625689 kubelet[2740]: E0911 00:33:27.625668 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.625689 kubelet[2740]: W0911 00:33:27.625683 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.625689 kubelet[2740]: E0911 00:33:27.625691 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:27.625957 kubelet[2740]: E0911 00:33:27.625878 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.625957 kubelet[2740]: W0911 00:33:27.625902 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.625957 kubelet[2740]: E0911 00:33:27.625910 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:27.626146 kubelet[2740]: E0911 00:33:27.626113 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.626146 kubelet[2740]: W0911 00:33:27.626126 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.626146 kubelet[2740]: E0911 00:33:27.626134 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:27.626427 kubelet[2740]: E0911 00:33:27.626409 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.626427 kubelet[2740]: W0911 00:33:27.626421 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.626490 kubelet[2740]: E0911 00:33:27.626429 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:27.626627 kubelet[2740]: E0911 00:33:27.626610 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.626627 kubelet[2740]: W0911 00:33:27.626621 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.626627 kubelet[2740]: E0911 00:33:27.626628 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:27.626840 kubelet[2740]: E0911 00:33:27.626820 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.626840 kubelet[2740]: W0911 00:33:27.626832 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.626840 kubelet[2740]: E0911 00:33:27.626841 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:27.627091 kubelet[2740]: E0911 00:33:27.627065 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.627091 kubelet[2740]: W0911 00:33:27.627077 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.627091 kubelet[2740]: E0911 00:33:27.627086 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:27.627268 kubelet[2740]: E0911 00:33:27.627253 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.627268 kubelet[2740]: W0911 00:33:27.627265 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.627359 kubelet[2740]: E0911 00:33:27.627273 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:27.627472 kubelet[2740]: E0911 00:33:27.627456 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.627472 kubelet[2740]: W0911 00:33:27.627468 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.627520 kubelet[2740]: E0911 00:33:27.627475 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:27.649544 kubelet[2740]: E0911 00:33:27.647926 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.649544 kubelet[2740]: W0911 00:33:27.647947 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.649544 kubelet[2740]: E0911 00:33:27.647960 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:27.649544 kubelet[2740]: E0911 00:33:27.648216 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.649544 kubelet[2740]: W0911 00:33:27.648224 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.649544 kubelet[2740]: E0911 00:33:27.648235 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:27.649544 kubelet[2740]: E0911 00:33:27.648564 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.649544 kubelet[2740]: W0911 00:33:27.648572 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.649544 kubelet[2740]: E0911 00:33:27.648592 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:27.649544 kubelet[2740]: E0911 00:33:27.648850 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.650070 kubelet[2740]: W0911 00:33:27.648861 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.650070 kubelet[2740]: E0911 00:33:27.648881 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:27.650070 kubelet[2740]: E0911 00:33:27.649087 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.650070 kubelet[2740]: W0911 00:33:27.649094 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.650070 kubelet[2740]: E0911 00:33:27.649113 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:27.650070 kubelet[2740]: E0911 00:33:27.649282 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.650070 kubelet[2740]: W0911 00:33:27.649290 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.650070 kubelet[2740]: E0911 00:33:27.649342 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:27.650070 kubelet[2740]: E0911 00:33:27.649512 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.650070 kubelet[2740]: W0911 00:33:27.649519 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.650293 kubelet[2740]: E0911 00:33:27.649570 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:27.650293 kubelet[2740]: E0911 00:33:27.649776 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.650293 kubelet[2740]: W0911 00:33:27.649784 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.650293 kubelet[2740]: E0911 00:33:27.649871 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:27.650293 kubelet[2740]: E0911 00:33:27.650004 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.650293 kubelet[2740]: W0911 00:33:27.650010 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.650293 kubelet[2740]: E0911 00:33:27.650033 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:27.650570 kubelet[2740]: E0911 00:33:27.650541 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.650617 kubelet[2740]: W0911 00:33:27.650600 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.650646 kubelet[2740]: E0911 00:33:27.650616 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:27.650852 kubelet[2740]: E0911 00:33:27.650835 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.650852 kubelet[2740]: W0911 00:33:27.650847 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.650932 kubelet[2740]: E0911 00:33:27.650859 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:27.651086 kubelet[2740]: E0911 00:33:27.651069 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:27.651086 kubelet[2740]: W0911 00:33:27.651081 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:27.651154 kubelet[2740]: E0911 00:33:27.651124 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:28.564099 kubelet[2740]: I0911 00:33:28.564066 2740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:33:28.564615 kubelet[2740]: E0911 00:33:28.564430 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:28.632700 kubelet[2740]: E0911 00:33:28.632648 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:28.632860 kubelet[2740]: W0911 00:33:28.632680 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:28.632860 kubelet[2740]: E0911 00:33:28.632734 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:28.633064 kubelet[2740]: E0911 00:33:28.633044 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:28.633064 kubelet[2740]: W0911 00:33:28.633056 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:28.633130 kubelet[2740]: E0911 00:33:28.633065 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:28.660125 kubelet[2740]: E0911 00:33:28.660106 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:28.660125 kubelet[2740]: W0911 00:33:28.660118 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:28.660125 kubelet[2740]: E0911 00:33:28.660128 2740 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:28.857769 containerd[1595]: time="2025-09-11T00:33:28.857636866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:28.858831 containerd[1595]: time="2025-09-11T00:33:28.858762007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 11 00:33:28.860626 containerd[1595]: time="2025-09-11T00:33:28.860598605Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:28.862860 containerd[1595]: time="2025-09-11T00:33:28.862808280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:28.863227 containerd[1595]: time="2025-09-11T00:33:28.863191797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.464112504s" Sep 11 00:33:28.863273 containerd[1595]: time="2025-09-11T00:33:28.863226131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 11 00:33:28.865122 containerd[1595]: time="2025-09-11T00:33:28.865083439Z" level=info msg="CreateContainer within sandbox \"1da1f9e9b15271f5643daf9657647899b628db86b15ae629da5ae5ad1e1b9dcf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 11 00:33:28.875756 containerd[1595]: time="2025-09-11T00:33:28.875710026Z" level=info msg="Container f66b81bebf7237b71a966a3fceed88517f87331acb6706940d2e2a18ba6c8d6e: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:28.888372 containerd[1595]: time="2025-09-11T00:33:28.888294973Z" level=info msg="CreateContainer within sandbox \"1da1f9e9b15271f5643daf9657647899b628db86b15ae629da5ae5ad1e1b9dcf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f66b81bebf7237b71a966a3fceed88517f87331acb6706940d2e2a18ba6c8d6e\"" Sep 11 00:33:28.889026 containerd[1595]: time="2025-09-11T00:33:28.888914886Z" level=info msg="StartContainer for \"f66b81bebf7237b71a966a3fceed88517f87331acb6706940d2e2a18ba6c8d6e\"" Sep 11 00:33:28.890428 containerd[1595]: time="2025-09-11T00:33:28.890396102Z" level=info msg="connecting to shim f66b81bebf7237b71a966a3fceed88517f87331acb6706940d2e2a18ba6c8d6e" address="unix:///run/containerd/s/95180410fb225c4bda043e97856aece7732cb47ea47682b6974d0f75b3dd0283" protocol=ttrpc version=3 Sep 11 00:33:28.919561 systemd[1]: Started cri-containerd-f66b81bebf7237b71a966a3fceed88517f87331acb6706940d2e2a18ba6c8d6e.scope - libcontainer container 
f66b81bebf7237b71a966a3fceed88517f87331acb6706940d2e2a18ba6c8d6e. Sep 11 00:33:28.961654 containerd[1595]: time="2025-09-11T00:33:28.961605172Z" level=info msg="StartContainer for \"f66b81bebf7237b71a966a3fceed88517f87331acb6706940d2e2a18ba6c8d6e\" returns successfully" Sep 11 00:33:28.970476 systemd[1]: cri-containerd-f66b81bebf7237b71a966a3fceed88517f87331acb6706940d2e2a18ba6c8d6e.scope: Deactivated successfully. Sep 11 00:33:28.973866 containerd[1595]: time="2025-09-11T00:33:28.973795310Z" level=info msg="received exit event container_id:\"f66b81bebf7237b71a966a3fceed88517f87331acb6706940d2e2a18ba6c8d6e\" id:\"f66b81bebf7237b71a966a3fceed88517f87331acb6706940d2e2a18ba6c8d6e\" pid:3470 exited_at:{seconds:1757550808 nanos:973247132}" Sep 11 00:33:28.973959 containerd[1595]: time="2025-09-11T00:33:28.973874971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f66b81bebf7237b71a966a3fceed88517f87331acb6706940d2e2a18ba6c8d6e\" id:\"f66b81bebf7237b71a966a3fceed88517f87331acb6706940d2e2a18ba6c8d6e\" pid:3470 exited_at:{seconds:1757550808 nanos:973247132}" Sep 11 00:33:28.997988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f66b81bebf7237b71a966a3fceed88517f87331acb6706940d2e2a18ba6c8d6e-rootfs.mount: Deactivated successfully. 
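The repeated `driver-call.go` failures above occur because kubelet's FlexVolume prober found the plugin directory `nodeagent~uds` but its `uds` executable was not yet on disk, so the `init` call produced no output and the empty string failed JSON unmarshalling. As an illustrative sketch of the contract involved (not Calico's actual driver): a FlexVolume driver is an executable that receives a verb as its first argument and prints a JSON status object on stdout.

```shell
# Illustrative sketch only: a minimal FlexVolume driver answering kubelet's
# "init" call. Real drivers (e.g. Calico's uds binary) implement more verbs
# such as mount and unmount.
flexvolume_driver() {
  case "$1" in
    init)
      # "attach": false tells kubelet this driver needs no attach/detach phase
      printf '%s\n' '{"status":"Success","capabilities":{"attach":false}}'
      ;;
    *)
      printf '%s\n' '{"status":"Not supported"}'
      ;;
  esac
}

# kubelet would exec the installed binary as: <plugin-dir>/uds init
flexvolume_driver init
```

Once a real driver binary lands in the plugin directory, which is what the `flexvol-driver` container started above typically provides, the probe succeeds and these messages stop.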
Sep 11 00:33:29.512940 kubelet[2740]: E0911 00:33:29.512886 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sqkl" podUID="5c283416-98c3-4e6d-ae4d-ab8f1e80bed3"
Sep 11 00:33:29.569121 containerd[1595]: time="2025-09-11T00:33:29.569061134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 11 00:33:29.583688 kubelet[2740]: I0911 00:33:29.583611 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b9bddfc8-wptk7" podStartSLOduration=3.727506646 podStartE2EDuration="5.583589504s" podCreationTimestamp="2025-09-11 00:33:24 +0000 UTC" firstStartedPulling="2025-09-11 00:33:25.542867169 +0000 UTC m=+18.112574207" lastFinishedPulling="2025-09-11 00:33:27.398950027 +0000 UTC m=+19.968657065" observedRunningTime="2025-09-11 00:33:27.575044981 +0000 UTC m=+20.144752019" watchObservedRunningTime="2025-09-11 00:33:29.583589504 +0000 UTC m=+22.153296562"
Sep 11 00:33:31.512679 kubelet[2740]: E0911 00:33:31.512632 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sqkl" podUID="5c283416-98c3-4e6d-ae4d-ab8f1e80bed3"
Sep 11 00:33:33.513569 kubelet[2740]: E0911 00:33:33.512516 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sqkl" podUID="5c283416-98c3-4e6d-ae4d-ab8f1e80bed3"
Sep 11 00:33:34.262034 containerd[1595]: time="2025-09-11T00:33:34.261977937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:34.263079 containerd[1595]: time="2025-09-11T00:33:34.263047947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 11 00:33:34.264447 containerd[1595]: time="2025-09-11T00:33:34.264388919Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:34.266277 containerd[1595]: time="2025-09-11T00:33:34.266240004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:34.266843 containerd[1595]: time="2025-09-11T00:33:34.266819337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.697723368s"
Sep 11 00:33:34.266883 containerd[1595]: time="2025-09-11T00:33:34.266845107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 11 00:33:34.268749 containerd[1595]: time="2025-09-11T00:33:34.268707212Z" level=info msg="CreateContainer within sandbox \"1da1f9e9b15271f5643daf9657647899b628db86b15ae629da5ae5ad1e1b9dcf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 11 00:33:34.282618 containerd[1595]: time="2025-09-11T00:33:34.282559626Z" level=info msg="Container f179837c4a2d53d14676a20c778707ff0d931da8cde6673b7b3d35be69995e64: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:33:34.292925 containerd[1595]: time="2025-09-11T00:33:34.292878167Z" level=info msg="CreateContainer within sandbox \"1da1f9e9b15271f5643daf9657647899b628db86b15ae629da5ae5ad1e1b9dcf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f179837c4a2d53d14676a20c778707ff0d931da8cde6673b7b3d35be69995e64\""
Sep 11 00:33:34.293485 containerd[1595]: time="2025-09-11T00:33:34.293440349Z" level=info msg="StartContainer for \"f179837c4a2d53d14676a20c778707ff0d931da8cde6673b7b3d35be69995e64\""
Sep 11 00:33:34.294936 containerd[1595]: time="2025-09-11T00:33:34.294911807Z" level=info msg="connecting to shim f179837c4a2d53d14676a20c778707ff0d931da8cde6673b7b3d35be69995e64" address="unix:///run/containerd/s/95180410fb225c4bda043e97856aece7732cb47ea47682b6974d0f75b3dd0283" protocol=ttrpc version=3
Sep 11 00:33:34.324511 systemd[1]: Started cri-containerd-f179837c4a2d53d14676a20c778707ff0d931da8cde6673b7b3d35be69995e64.scope - libcontainer container f179837c4a2d53d14676a20c778707ff0d931da8cde6673b7b3d35be69995e64.
Sep 11 00:33:34.554528 containerd[1595]: time="2025-09-11T00:33:34.554397891Z" level=info msg="StartContainer for \"f179837c4a2d53d14676a20c778707ff0d931da8cde6673b7b3d35be69995e64\" returns successfully"
Sep 11 00:33:35.512540 kubelet[2740]: E0911 00:33:35.512472 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sqkl" podUID="5c283416-98c3-4e6d-ae4d-ab8f1e80bed3"
Sep 11 00:33:35.948501 containerd[1595]: time="2025-09-11T00:33:35.948445898Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 11 00:33:35.952242 systemd[1]: cri-containerd-f179837c4a2d53d14676a20c778707ff0d931da8cde6673b7b3d35be69995e64.scope: Deactivated successfully.
Sep 11 00:33:35.952664 systemd[1]: cri-containerd-f179837c4a2d53d14676a20c778707ff0d931da8cde6673b7b3d35be69995e64.scope: Consumed 655ms CPU time, 179.2M memory peak, 1.6M read from disk, 171.3M written to disk.
Sep 11 00:33:35.954020 containerd[1595]: time="2025-09-11T00:33:35.953964924Z" level=info msg="received exit event container_id:\"f179837c4a2d53d14676a20c778707ff0d931da8cde6673b7b3d35be69995e64\" id:\"f179837c4a2d53d14676a20c778707ff0d931da8cde6673b7b3d35be69995e64\" pid:3529 exited_at:{seconds:1757550815 nanos:953624191}"
Sep 11 00:33:35.954573 containerd[1595]: time="2025-09-11T00:33:35.954517276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f179837c4a2d53d14676a20c778707ff0d931da8cde6673b7b3d35be69995e64\" id:\"f179837c4a2d53d14676a20c778707ff0d931da8cde6673b7b3d35be69995e64\" pid:3529 exited_at:{seconds:1757550815 nanos:953624191}"
Sep 11 00:33:35.970725 kubelet[2740]: I0911 00:33:35.970647 2740 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 11 00:33:35.984223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f179837c4a2d53d14676a20c778707ff0d931da8cde6673b7b3d35be69995e64-rootfs.mount: Deactivated successfully.
Sep 11 00:33:36.001667 kubelet[2740]: W0911 00:33:36.001532 2740 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
Sep 11 00:33:36.001667 kubelet[2740]: E0911 00:33:36.001608 2740 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 11 00:33:36.005997 kubelet[2740]: W0911 00:33:36.002073 2740 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
Sep 11 00:33:36.005997 kubelet[2740]: E0911 00:33:36.002092 2740 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 11 00:33:36.010041 systemd[1]: Created slice kubepods-besteffort-pod909a099a_bf32_407d_91d7_f0625e04aba7.slice - libcontainer container kubepods-besteffort-pod909a099a_bf32_407d_91d7_f0625e04aba7.slice.
Sep 11 00:33:36.142837 systemd[1]: Created slice kubepods-besteffort-pod32bf9bd0_e3e1_4ec6_a60e_bd5a84032d24.slice - libcontainer container kubepods-besteffort-pod32bf9bd0_e3e1_4ec6_a60e_bd5a84032d24.slice.
Sep 11 00:33:36.152028 systemd[1]: Created slice kubepods-besteffort-podab3ef2d8_99ba_4556_81b2_7b7f1b60d6c6.slice - libcontainer container kubepods-besteffort-podab3ef2d8_99ba_4556_81b2_7b7f1b60d6c6.slice.
Sep 11 00:33:36.160560 systemd[1]: Created slice kubepods-burstable-pod65546ac2_cdf2_4628_acd4_59736de8e8fe.slice - libcontainer container kubepods-burstable-pod65546ac2_cdf2_4628_acd4_59736de8e8fe.slice.
Sep 11 00:33:36.169424 systemd[1]: Created slice kubepods-besteffort-pod73ab4f26_4242_414c_84be_d112bb35aa1e.slice - libcontainer container kubepods-besteffort-pod73ab4f26_4242_414c_84be_d112bb35aa1e.slice.
Sep 11 00:33:36.177613 systemd[1]: Created slice kubepods-burstable-pod34335868_9f4e_46e3_a71b_58fb68457ce3.slice - libcontainer container kubepods-burstable-pod34335868_9f4e_46e3_a71b_58fb68457ce3.slice.
Sep 11 00:33:36.184498 systemd[1]: Created slice kubepods-besteffort-poda1e2438f_98aa_403b_a122_a27d7015cc11.slice - libcontainer container kubepods-besteffort-poda1e2438f_98aa_403b_a122_a27d7015cc11.slice.
Sep 11 00:33:36.224926 kubelet[2740]: I0911 00:33:36.224748 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24-goldmane-key-pair\") pod \"goldmane-7988f88666-4x68b\" (UID: \"32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24\") " pod="calico-system/goldmane-7988f88666-4x68b"
Sep 11 00:33:36.224926 kubelet[2740]: I0911 00:33:36.224799 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hw6m\" (UniqueName: \"kubernetes.io/projected/32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24-kube-api-access-5hw6m\") pod \"goldmane-7988f88666-4x68b\" (UID: \"32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24\") " pod="calico-system/goldmane-7988f88666-4x68b"
Sep 11 00:33:36.224926 kubelet[2740]: I0911 00:33:36.224819 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24-config\") pod \"goldmane-7988f88666-4x68b\" (UID: \"32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24\") " pod="calico-system/goldmane-7988f88666-4x68b"
Sep 11 00:33:36.224926 kubelet[2740]: I0911 00:33:36.224838 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/909a099a-bf32-407d-91d7-f0625e04aba7-whisker-ca-bundle\") pod \"whisker-7cbc78d757-qnjlg\" (UID: \"909a099a-bf32-407d-91d7-f0625e04aba7\") " pod="calico-system/whisker-7cbc78d757-qnjlg"
Sep 11 00:33:36.224926 kubelet[2740]: I0911 00:33:36.224856 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24-goldmane-ca-bundle\") pod \"goldmane-7988f88666-4x68b\" (UID: \"32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24\") " pod="calico-system/goldmane-7988f88666-4x68b"
Sep 11 00:33:36.225243 kubelet[2740]: I0911 00:33:36.224869 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ab3ef2d8-99ba-4556-81b2-7b7f1b60d6c6-calico-apiserver-certs\") pod \"calico-apiserver-56d8d46d86-tb7cb\" (UID: \"ab3ef2d8-99ba-4556-81b2-7b7f1b60d6c6\") " pod="calico-apiserver/calico-apiserver-56d8d46d86-tb7cb"
Sep 11 00:33:36.225243 kubelet[2740]: I0911 00:33:36.224885 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57qfd\" (UniqueName: \"kubernetes.io/projected/ab3ef2d8-99ba-4556-81b2-7b7f1b60d6c6-kube-api-access-57qfd\") pod \"calico-apiserver-56d8d46d86-tb7cb\" (UID: \"ab3ef2d8-99ba-4556-81b2-7b7f1b60d6c6\") " pod="calico-apiserver/calico-apiserver-56d8d46d86-tb7cb"
Sep 11 00:33:36.225243 kubelet[2740]: I0911 00:33:36.224899 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z5dg\" (UniqueName: \"kubernetes.io/projected/909a099a-bf32-407d-91d7-f0625e04aba7-kube-api-access-6z5dg\") pod \"whisker-7cbc78d757-qnjlg\" (UID: \"909a099a-bf32-407d-91d7-f0625e04aba7\") " pod="calico-system/whisker-7cbc78d757-qnjlg"
Sep 11 00:33:36.225243 kubelet[2740]: I0911 00:33:36.224961 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/909a099a-bf32-407d-91d7-f0625e04aba7-whisker-backend-key-pair\") pod \"whisker-7cbc78d757-qnjlg\" (UID: \"909a099a-bf32-407d-91d7-f0625e04aba7\") " pod="calico-system/whisker-7cbc78d757-qnjlg"
Sep 11 00:33:36.325730 kubelet[2740]: I0911 00:33:36.325632 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/73ab4f26-4242-414c-84be-d112bb35aa1e-calico-apiserver-certs\") pod \"calico-apiserver-56d8d46d86-xt8mj\" (UID: \"73ab4f26-4242-414c-84be-d112bb35aa1e\") " pod="calico-apiserver/calico-apiserver-56d8d46d86-xt8mj"
Sep 11 00:33:36.325730 kubelet[2740]: I0911 00:33:36.325714 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfndx\" (UniqueName: \"kubernetes.io/projected/73ab4f26-4242-414c-84be-d112bb35aa1e-kube-api-access-zfndx\") pod \"calico-apiserver-56d8d46d86-xt8mj\" (UID: \"73ab4f26-4242-414c-84be-d112bb35aa1e\") " pod="calico-apiserver/calico-apiserver-56d8d46d86-xt8mj"
Sep 11 00:33:36.326002 kubelet[2740]: I0911 00:33:36.325798 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65546ac2-cdf2-4628-acd4-59736de8e8fe-config-volume\") pod \"coredns-7c65d6cfc9-vqsdg\" (UID: \"65546ac2-cdf2-4628-acd4-59736de8e8fe\") " pod="kube-system/coredns-7c65d6cfc9-vqsdg"
Sep 11 00:33:36.326002 kubelet[2740]: I0911 00:33:36.325862 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhmbc\" (UniqueName: \"kubernetes.io/projected/65546ac2-cdf2-4628-acd4-59736de8e8fe-kube-api-access-xhmbc\") pod \"coredns-7c65d6cfc9-vqsdg\" (UID: \"65546ac2-cdf2-4628-acd4-59736de8e8fe\") " pod="kube-system/coredns-7c65d6cfc9-vqsdg"
Sep 11 00:33:36.326002 kubelet[2740]: I0911 00:33:36.325908 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1e2438f-98aa-403b-a122-a27d7015cc11-tigera-ca-bundle\") pod \"calico-kube-controllers-5d66f559c5-sxjj5\" (UID: \"a1e2438f-98aa-403b-a122-a27d7015cc11\") " pod="calico-system/calico-kube-controllers-5d66f559c5-sxjj5"
Sep 11 00:33:36.326002 kubelet[2740]: I0911 00:33:36.325969 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5ndf\" (UniqueName: \"kubernetes.io/projected/a1e2438f-98aa-403b-a122-a27d7015cc11-kube-api-access-j5ndf\") pod \"calico-kube-controllers-5d66f559c5-sxjj5\" (UID: \"a1e2438f-98aa-403b-a122-a27d7015cc11\") " pod="calico-system/calico-kube-controllers-5d66f559c5-sxjj5"
Sep 11 00:33:36.326100 kubelet[2740]: I0911 00:33:36.326010 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34335868-9f4e-46e3-a71b-58fb68457ce3-config-volume\") pod \"coredns-7c65d6cfc9-nvmvc\" (UID: \"34335868-9f4e-46e3-a71b-58fb68457ce3\") " pod="kube-system/coredns-7c65d6cfc9-nvmvc"
Sep 11 00:33:36.326100 kubelet[2740]: I0911 00:33:36.326035 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx9qb\" (UniqueName: \"kubernetes.io/projected/34335868-9f4e-46e3-a71b-58fb68457ce3-kube-api-access-tx9qb\") pod \"coredns-7c65d6cfc9-nvmvc\" (UID: \"34335868-9f4e-46e3-a71b-58fb68457ce3\") " pod="kube-system/coredns-7c65d6cfc9-nvmvc"
Sep 11 00:33:36.448195 containerd[1595]: time="2025-09-11T00:33:36.448149410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-4x68b,Uid:32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24,Namespace:calico-system,Attempt:0,}"
Sep 11 00:33:36.457495 containerd[1595]: time="2025-09-11T00:33:36.457431431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d8d46d86-tb7cb,Uid:ab3ef2d8-99ba-4556-81b2-7b7f1b60d6c6,Namespace:calico-apiserver,Attempt:0,}"
Sep 11 00:33:36.466805 kubelet[2740]: E0911 00:33:36.466768 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:33:36.467272 containerd[1595]: time="2025-09-11T00:33:36.467235606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vqsdg,Uid:65546ac2-cdf2-4628-acd4-59736de8e8fe,Namespace:kube-system,Attempt:0,}"
Sep 11 00:33:36.473883 containerd[1595]: time="2025-09-11T00:33:36.473746259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d8d46d86-xt8mj,Uid:73ab4f26-4242-414c-84be-d112bb35aa1e,Namespace:calico-apiserver,Attempt:0,}"
Sep 11 00:33:36.481630 kubelet[2740]: E0911 00:33:36.481496 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:33:36.485073 containerd[1595]: time="2025-09-11T00:33:36.485030034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nvmvc,Uid:34335868-9f4e-46e3-a71b-58fb68457ce3,Namespace:kube-system,Attempt:0,}"
Sep 11 00:33:36.489527 containerd[1595]: time="2025-09-11T00:33:36.489475020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d66f559c5-sxjj5,Uid:a1e2438f-98aa-403b-a122-a27d7015cc11,Namespace:calico-system,Attempt:0,}"
Sep 11 00:33:36.559658 containerd[1595]: time="2025-09-11T00:33:36.559580644Z" level=error msg="Failed to destroy network for sandbox \"3ffae30261efc79cae15044da3e92b211f854014a2ec3e304f43a3b25f6e510e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.571389 containerd[1595]: time="2025-09-11T00:33:36.571278111Z" level=error msg="Failed to destroy network for sandbox \"ce0798d92dd498189dbc719593c580cedc657aed6a179982edf3ffb9650c4f1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.572414 containerd[1595]: time="2025-09-11T00:33:36.572386892Z" level=error msg="Failed to destroy network for sandbox \"65263e20a21a548c0d399b8e87ebf766e468ce4713975e81f66f0eb55cf624cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.591080 containerd[1595]: time="2025-09-11T00:33:36.590925275Z" level=error msg="Failed to destroy network for sandbox \"51f6bd374e47429b397931a2cdfae7077feeefca68e3145a61cfbc9da5d6ec9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.591763 containerd[1595]: time="2025-09-11T00:33:36.591718000Z" level=error msg="Failed to destroy network for sandbox \"7f9982b7d2eac1866bdafbf15fa2b419890e1c21139389156daf4c6f87d32b73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.594515 containerd[1595]: time="2025-09-11T00:33:36.594429595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vqsdg,Uid:65546ac2-cdf2-4628-acd4-59736de8e8fe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffae30261efc79cae15044da3e92b211f854014a2ec3e304f43a3b25f6e510e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.595126 containerd[1595]: time="2025-09-11T00:33:36.595064343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-4x68b,Uid:32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f9982b7d2eac1866bdafbf15fa2b419890e1c21139389156daf4c6f87d32b73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.595270 containerd[1595]: time="2025-09-11T00:33:36.595242148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nvmvc,Uid:34335868-9f4e-46e3-a71b-58fb68457ce3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce0798d92dd498189dbc719593c580cedc657aed6a179982edf3ffb9650c4f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.595473 containerd[1595]: time="2025-09-11T00:33:36.595444971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d66f559c5-sxjj5,Uid:a1e2438f-98aa-403b-a122-a27d7015cc11,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"65263e20a21a548c0d399b8e87ebf766e468ce4713975e81f66f0eb55cf624cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.596466 containerd[1595]: time="2025-09-11T00:33:36.596368253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d8d46d86-tb7cb,Uid:ab3ef2d8-99ba-4556-81b2-7b7f1b60d6c6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"51f6bd374e47429b397931a2cdfae7077feeefca68e3145a61cfbc9da5d6ec9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.602909 containerd[1595]: time="2025-09-11T00:33:36.602830794Z" level=error msg="Failed to destroy network for sandbox \"c91931bdaf94f9ceeec7367a0303f3ae4ade66d29e036ce290be5f0657415764\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.604227 containerd[1595]: time="2025-09-11T00:33:36.604115237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d8d46d86-xt8mj,Uid:73ab4f26-4242-414c-84be-d112bb35aa1e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c91931bdaf94f9ceeec7367a0303f3ae4ade66d29e036ce290be5f0657415764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.608354 kubelet[2740]: E0911 00:33:36.608277 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c91931bdaf94f9ceeec7367a0303f3ae4ade66d29e036ce290be5f0657415764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.608811 kubelet[2740]: E0911 00:33:36.608361 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65263e20a21a548c0d399b8e87ebf766e468ce4713975e81f66f0eb55cf624cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.608811 kubelet[2740]: E0911 00:33:36.608376 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51f6bd374e47429b397931a2cdfae7077feeefca68e3145a61cfbc9da5d6ec9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.608811 kubelet[2740]: E0911 00:33:36.608397 2740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65263e20a21a548c0d399b8e87ebf766e468ce4713975e81f66f0eb55cf624cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d66f559c5-sxjj5"
Sep 11 00:33:36.608811 kubelet[2740]: E0911 00:33:36.608417 2740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65263e20a21a548c0d399b8e87ebf766e468ce4713975e81f66f0eb55cf624cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d66f559c5-sxjj5"
Sep 11 00:33:36.608942 kubelet[2740]: E0911 00:33:36.608427 2740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51f6bd374e47429b397931a2cdfae7077feeefca68e3145a61cfbc9da5d6ec9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56d8d46d86-tb7cb"
Sep 11 00:33:36.608942 kubelet[2740]: E0911 00:33:36.608447 2740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51f6bd374e47429b397931a2cdfae7077feeefca68e3145a61cfbc9da5d6ec9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56d8d46d86-tb7cb"
Sep 11 00:33:36.608942 kubelet[2740]: E0911 00:33:36.608381 2740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c91931bdaf94f9ceeec7367a0303f3ae4ade66d29e036ce290be5f0657415764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56d8d46d86-xt8mj"
Sep 11 00:33:36.608942 kubelet[2740]: E0911 00:33:36.608480 2740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c91931bdaf94f9ceeec7367a0303f3ae4ade66d29e036ce290be5f0657415764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56d8d46d86-xt8mj"
Sep 11 00:33:36.609070 kubelet[2740]: E0911 00:33:36.608901 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56d8d46d86-tb7cb_calico-apiserver(ab3ef2d8-99ba-4556-81b2-7b7f1b60d6c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56d8d46d86-tb7cb_calico-apiserver(ab3ef2d8-99ba-4556-81b2-7b7f1b60d6c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51f6bd374e47429b397931a2cdfae7077feeefca68e3145a61cfbc9da5d6ec9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56d8d46d86-tb7cb" podUID="ab3ef2d8-99ba-4556-81b2-7b7f1b60d6c6"
Sep 11 00:33:36.609070 kubelet[2740]: E0911 00:33:36.608922 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56d8d46d86-xt8mj_calico-apiserver(73ab4f26-4242-414c-84be-d112bb35aa1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56d8d46d86-xt8mj_calico-apiserver(73ab4f26-4242-414c-84be-d112bb35aa1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c91931bdaf94f9ceeec7367a0303f3ae4ade66d29e036ce290be5f0657415764\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56d8d46d86-xt8mj" podUID="73ab4f26-4242-414c-84be-d112bb35aa1e"
Sep 11 00:33:36.609070 kubelet[2740]: E0911 00:33:36.608291 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f9982b7d2eac1866bdafbf15fa2b419890e1c21139389156daf4c6f87d32b73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.609214 kubelet[2740]: E0911 00:33:36.608985 2740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f9982b7d2eac1866bdafbf15fa2b419890e1c21139389156daf4c6f87d32b73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-4x68b"
Sep 11 00:33:36.609214 kubelet[2740]: E0911 00:33:36.608997 2740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f9982b7d2eac1866bdafbf15fa2b419890e1c21139389156daf4c6f87d32b73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-4x68b"
Sep 11 00:33:36.609214 kubelet[2740]: E0911 00:33:36.609018 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-4x68b_calico-system(32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-4x68b_calico-system(32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f9982b7d2eac1866bdafbf15fa2b419890e1c21139389156daf4c6f87d32b73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-4x68b" podUID="32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24"
Sep 11 00:33:36.609374 kubelet[2740]: E0911 00:33:36.608293 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce0798d92dd498189dbc719593c580cedc657aed6a179982edf3ffb9650c4f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.609374 kubelet[2740]: E0911 00:33:36.609058 2740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce0798d92dd498189dbc719593c580cedc657aed6a179982edf3ffb9650c4f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nvmvc"
Sep 11 00:33:36.609374 kubelet[2740]: E0911 00:33:36.609069 2740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce0798d92dd498189dbc719593c580cedc657aed6a179982edf3ffb9650c4f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nvmvc"
Sep 11 00:33:36.610389 kubelet[2740]: E0911 00:33:36.609090 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-nvmvc_kube-system(34335868-9f4e-46e3-a71b-58fb68457ce3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-nvmvc_kube-system(34335868-9f4e-46e3-a71b-58fb68457ce3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce0798d92dd498189dbc719593c580cedc657aed6a179982edf3ffb9650c4f1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nvmvc" podUID="34335868-9f4e-46e3-a71b-58fb68457ce3"
Sep 11 00:33:36.610389 kubelet[2740]: E0911 00:33:36.609164 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d66f559c5-sxjj5_calico-system(a1e2438f-98aa-403b-a122-a27d7015cc11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d66f559c5-sxjj5_calico-system(a1e2438f-98aa-403b-a122-a27d7015cc11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65263e20a21a548c0d399b8e87ebf766e468ce4713975e81f66f0eb55cf624cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d66f559c5-sxjj5" podUID="a1e2438f-98aa-403b-a122-a27d7015cc11"
Sep 11 00:33:36.610389 kubelet[2740]: E0911 00:33:36.608287 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffae30261efc79cae15044da3e92b211f854014a2ec3e304f43a3b25f6e510e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:36.610688 kubelet[2740]: E0911 00:33:36.610191 2740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffae30261efc79cae15044da3e92b211f854014a2ec3e304f43a3b25f6e510e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vqsdg"
Sep 11 00:33:36.610688 kubelet[2740]: E0911 00:33:36.610219 2740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffae30261efc79cae15044da3e92b211f854014a2ec3e304f43a3b25f6e510e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vqsdg"
Sep 11 00:33:36.610688 kubelet[2740]:
E0911 00:33:36.610280 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-vqsdg_kube-system(65546ac2-cdf2-4628-acd4-59736de8e8fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-vqsdg_kube-system(65546ac2-cdf2-4628-acd4-59736de8e8fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ffae30261efc79cae15044da3e92b211f854014a2ec3e304f43a3b25f6e510e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vqsdg" podUID="65546ac2-cdf2-4628-acd4-59736de8e8fe" Sep 11 00:33:36.612821 containerd[1595]: time="2025-09-11T00:33:36.612706885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 11 00:33:37.326833 kubelet[2740]: E0911 00:33:37.326773 2740 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 11 00:33:37.327055 kubelet[2740]: E0911 00:33:37.326876 2740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/909a099a-bf32-407d-91d7-f0625e04aba7-whisker-ca-bundle podName:909a099a-bf32-407d-91d7-f0625e04aba7 nodeName:}" failed. No retries permitted until 2025-09-11 00:33:37.826857082 +0000 UTC m=+30.396564120 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/909a099a-bf32-407d-91d7-f0625e04aba7-whisker-ca-bundle") pod "whisker-7cbc78d757-qnjlg" (UID: "909a099a-bf32-407d-91d7-f0625e04aba7") : failed to sync configmap cache: timed out waiting for the condition Sep 11 00:33:37.518833 systemd[1]: Created slice kubepods-besteffort-pod5c283416_98c3_4e6d_ae4d_ab8f1e80bed3.slice - libcontainer container kubepods-besteffort-pod5c283416_98c3_4e6d_ae4d_ab8f1e80bed3.slice. Sep 11 00:33:37.521385 containerd[1595]: time="2025-09-11T00:33:37.521308206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sqkl,Uid:5c283416-98c3-4e6d-ae4d-ab8f1e80bed3,Namespace:calico-system,Attempt:0,}" Sep 11 00:33:37.576474 containerd[1595]: time="2025-09-11T00:33:37.576413272Z" level=error msg="Failed to destroy network for sandbox \"dd729be9af01d7e44b157673e5ad4928b16e11d7f6321ca6f290456d76274c6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:33:37.579393 containerd[1595]: time="2025-09-11T00:33:37.578017768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sqkl,Uid:5c283416-98c3-4e6d-ae4d-ab8f1e80bed3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd729be9af01d7e44b157673e5ad4928b16e11d7f6321ca6f290456d76274c6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:33:37.578851 systemd[1]: run-netns-cni\x2d80a9c0fc\x2d404a\x2d87d9\x2d95eb\x2dad04ab70bb62.mount: Deactivated successfully. 
Sep 11 00:33:37.579603 kubelet[2740]: E0911 00:33:37.578237 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd729be9af01d7e44b157673e5ad4928b16e11d7f6321ca6f290456d76274c6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:33:37.579603 kubelet[2740]: E0911 00:33:37.578295 2740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd729be9af01d7e44b157673e5ad4928b16e11d7f6321ca6f290456d76274c6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8sqkl" Sep 11 00:33:37.579603 kubelet[2740]: E0911 00:33:37.578379 2740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd729be9af01d7e44b157673e5ad4928b16e11d7f6321ca6f290456d76274c6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8sqkl" Sep 11 00:33:37.579714 kubelet[2740]: E0911 00:33:37.578423 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8sqkl_calico-system(5c283416-98c3-4e6d-ae4d-ab8f1e80bed3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8sqkl_calico-system(5c283416-98c3-4e6d-ae4d-ab8f1e80bed3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd729be9af01d7e44b157673e5ad4928b16e11d7f6321ca6f290456d76274c6c\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8sqkl" podUID="5c283416-98c3-4e6d-ae4d-ab8f1e80bed3" Sep 11 00:33:37.932557 containerd[1595]: time="2025-09-11T00:33:37.932510056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cbc78d757-qnjlg,Uid:909a099a-bf32-407d-91d7-f0625e04aba7,Namespace:calico-system,Attempt:0,}" Sep 11 00:33:37.991968 containerd[1595]: time="2025-09-11T00:33:37.991907477Z" level=error msg="Failed to destroy network for sandbox \"0fee02b2073e51d699dc032c8f08e59bf26e0a2c18d544931928816dbe2de73c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:33:37.993819 containerd[1595]: time="2025-09-11T00:33:37.993690220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cbc78d757-qnjlg,Uid:909a099a-bf32-407d-91d7-f0625e04aba7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fee02b2073e51d699dc032c8f08e59bf26e0a2c18d544931928816dbe2de73c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:33:37.994113 kubelet[2740]: E0911 00:33:37.994044 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fee02b2073e51d699dc032c8f08e59bf26e0a2c18d544931928816dbe2de73c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:33:37.994594 kubelet[2740]: E0911 00:33:37.994121 2740 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fee02b2073e51d699dc032c8f08e59bf26e0a2c18d544931928816dbe2de73c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cbc78d757-qnjlg" Sep 11 00:33:37.994594 kubelet[2740]: E0911 00:33:37.994148 2740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fee02b2073e51d699dc032c8f08e59bf26e0a2c18d544931928816dbe2de73c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cbc78d757-qnjlg" Sep 11 00:33:37.994594 kubelet[2740]: E0911 00:33:37.994197 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7cbc78d757-qnjlg_calico-system(909a099a-bf32-407d-91d7-f0625e04aba7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7cbc78d757-qnjlg_calico-system(909a099a-bf32-407d-91d7-f0625e04aba7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fee02b2073e51d699dc032c8f08e59bf26e0a2c18d544931928816dbe2de73c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7cbc78d757-qnjlg" podUID="909a099a-bf32-407d-91d7-f0625e04aba7" Sep 11 00:33:37.994559 systemd[1]: run-netns-cni\x2d420c7d43\x2dab9d\x2df912\x2d2516\x2d4cb26b5304aa.mount: Deactivated successfully. Sep 11 00:33:40.995709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2830796677.mount: Deactivated successfully. 
Sep 11 00:33:41.788698 containerd[1595]: time="2025-09-11T00:33:41.788648117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:41.816483 containerd[1595]: time="2025-09-11T00:33:41.816433179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 11 00:33:41.817843 containerd[1595]: time="2025-09-11T00:33:41.817808569Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:41.819851 containerd[1595]: time="2025-09-11T00:33:41.819816280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:41.820325 containerd[1595]: time="2025-09-11T00:33:41.820284723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 5.207533323s" Sep 11 00:33:41.820356 containerd[1595]: time="2025-09-11T00:33:41.820346980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 11 00:33:41.830138 containerd[1595]: time="2025-09-11T00:33:41.830091874Z" level=info msg="CreateContainer within sandbox \"1da1f9e9b15271f5643daf9657647899b628db86b15ae629da5ae5ad1e1b9dcf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 11 00:33:41.850333 containerd[1595]: time="2025-09-11T00:33:41.850283883Z" level=info msg="Container 
cd854075bea915909ec80035715de44d8b136105066b1750cf0330f1d706b179: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:41.860817 containerd[1595]: time="2025-09-11T00:33:41.860773951Z" level=info msg="CreateContainer within sandbox \"1da1f9e9b15271f5643daf9657647899b628db86b15ae629da5ae5ad1e1b9dcf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cd854075bea915909ec80035715de44d8b136105066b1750cf0330f1d706b179\"" Sep 11 00:33:41.861301 containerd[1595]: time="2025-09-11T00:33:41.861275205Z" level=info msg="StartContainer for \"cd854075bea915909ec80035715de44d8b136105066b1750cf0330f1d706b179\"" Sep 11 00:33:41.862603 containerd[1595]: time="2025-09-11T00:33:41.862579751Z" level=info msg="connecting to shim cd854075bea915909ec80035715de44d8b136105066b1750cf0330f1d706b179" address="unix:///run/containerd/s/95180410fb225c4bda043e97856aece7732cb47ea47682b6974d0f75b3dd0283" protocol=ttrpc version=3 Sep 11 00:33:41.890451 systemd[1]: Started cri-containerd-cd854075bea915909ec80035715de44d8b136105066b1750cf0330f1d706b179.scope - libcontainer container cd854075bea915909ec80035715de44d8b136105066b1750cf0330f1d706b179. Sep 11 00:33:41.939215 containerd[1595]: time="2025-09-11T00:33:41.939165966Z" level=info msg="StartContainer for \"cd854075bea915909ec80035715de44d8b136105066b1750cf0330f1d706b179\" returns successfully" Sep 11 00:33:42.015376 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 11 00:33:42.015497 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 11 00:33:42.162678 kubelet[2740]: I0911 00:33:42.162628 2740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z5dg\" (UniqueName: \"kubernetes.io/projected/909a099a-bf32-407d-91d7-f0625e04aba7-kube-api-access-6z5dg\") pod \"909a099a-bf32-407d-91d7-f0625e04aba7\" (UID: \"909a099a-bf32-407d-91d7-f0625e04aba7\") " Sep 11 00:33:42.163118 kubelet[2740]: I0911 00:33:42.163048 2740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/909a099a-bf32-407d-91d7-f0625e04aba7-whisker-ca-bundle\") pod \"909a099a-bf32-407d-91d7-f0625e04aba7\" (UID: \"909a099a-bf32-407d-91d7-f0625e04aba7\") " Sep 11 00:33:42.163118 kubelet[2740]: I0911 00:33:42.163070 2740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/909a099a-bf32-407d-91d7-f0625e04aba7-whisker-backend-key-pair\") pod \"909a099a-bf32-407d-91d7-f0625e04aba7\" (UID: \"909a099a-bf32-407d-91d7-f0625e04aba7\") " Sep 11 00:33:42.163706 kubelet[2740]: I0911 00:33:42.163671 2740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/909a099a-bf32-407d-91d7-f0625e04aba7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "909a099a-bf32-407d-91d7-f0625e04aba7" (UID: "909a099a-bf32-407d-91d7-f0625e04aba7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 11 00:33:42.168670 systemd[1]: var-lib-kubelet-pods-909a099a\x2dbf32\x2d407d\x2d91d7\x2df0625e04aba7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6z5dg.mount: Deactivated successfully. 
Sep 11 00:33:42.170929 kubelet[2740]: I0911 00:33:42.170883 2740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/909a099a-bf32-407d-91d7-f0625e04aba7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "909a099a-bf32-407d-91d7-f0625e04aba7" (UID: "909a099a-bf32-407d-91d7-f0625e04aba7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 11 00:33:42.171001 kubelet[2740]: I0911 00:33:42.170950 2740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/909a099a-bf32-407d-91d7-f0625e04aba7-kube-api-access-6z5dg" (OuterVolumeSpecName: "kube-api-access-6z5dg") pod "909a099a-bf32-407d-91d7-f0625e04aba7" (UID: "909a099a-bf32-407d-91d7-f0625e04aba7"). InnerVolumeSpecName "kube-api-access-6z5dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 11 00:33:42.171695 systemd[1]: var-lib-kubelet-pods-909a099a\x2dbf32\x2d407d\x2d91d7\x2df0625e04aba7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 11 00:33:42.264223 kubelet[2740]: I0911 00:33:42.264176 2740 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/909a099a-bf32-407d-91d7-f0625e04aba7-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:42.264223 kubelet[2740]: I0911 00:33:42.264209 2740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z5dg\" (UniqueName: \"kubernetes.io/projected/909a099a-bf32-407d-91d7-f0625e04aba7-kube-api-access-6z5dg\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:42.264223 kubelet[2740]: I0911 00:33:42.264218 2740 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/909a099a-bf32-407d-91d7-f0625e04aba7-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:42.638810 systemd[1]: Removed slice kubepods-besteffort-pod909a099a_bf32_407d_91d7_f0625e04aba7.slice - libcontainer container kubepods-besteffort-pod909a099a_bf32_407d_91d7_f0625e04aba7.slice. Sep 11 00:33:42.651615 kubelet[2740]: I0911 00:33:42.651517 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qj774" podStartSLOduration=1.680089803 podStartE2EDuration="17.65150073s" podCreationTimestamp="2025-09-11 00:33:25 +0000 UTC" firstStartedPulling="2025-09-11 00:33:25.849511727 +0000 UTC m=+18.419218765" lastFinishedPulling="2025-09-11 00:33:41.820922654 +0000 UTC m=+34.390629692" observedRunningTime="2025-09-11 00:33:42.64096413 +0000 UTC m=+35.210671188" watchObservedRunningTime="2025-09-11 00:33:42.65150073 +0000 UTC m=+35.221207768" Sep 11 00:33:42.687178 systemd[1]: Created slice kubepods-besteffort-pod631ba79f_1713_4aa1_b2f3_fa47da63363f.slice - libcontainer container kubepods-besteffort-pod631ba79f_1713_4aa1_b2f3_fa47da63363f.slice. 
Sep 11 00:33:42.755775 containerd[1595]: time="2025-09-11T00:33:42.755723882Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd854075bea915909ec80035715de44d8b136105066b1750cf0330f1d706b179\" id:\"9f6e9635cbf2cf2fdee4b62ab7ab4963da1358eb1c847d5b60b9d00cf9c3f520\" pid:3910 exit_status:1 exited_at:{seconds:1757550822 nanos:755393019}" Sep 11 00:33:42.767713 kubelet[2740]: I0911 00:33:42.767665 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/631ba79f-1713-4aa1-b2f3-fa47da63363f-whisker-backend-key-pair\") pod \"whisker-54d8d949f6-7w49n\" (UID: \"631ba79f-1713-4aa1-b2f3-fa47da63363f\") " pod="calico-system/whisker-54d8d949f6-7w49n" Sep 11 00:33:42.767713 kubelet[2740]: I0911 00:33:42.767705 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txp4f\" (UniqueName: \"kubernetes.io/projected/631ba79f-1713-4aa1-b2f3-fa47da63363f-kube-api-access-txp4f\") pod \"whisker-54d8d949f6-7w49n\" (UID: \"631ba79f-1713-4aa1-b2f3-fa47da63363f\") " pod="calico-system/whisker-54d8d949f6-7w49n" Sep 11 00:33:42.767817 kubelet[2740]: I0911 00:33:42.767726 2740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/631ba79f-1713-4aa1-b2f3-fa47da63363f-whisker-ca-bundle\") pod \"whisker-54d8d949f6-7w49n\" (UID: \"631ba79f-1713-4aa1-b2f3-fa47da63363f\") " pod="calico-system/whisker-54d8d949f6-7w49n" Sep 11 00:33:42.992622 containerd[1595]: time="2025-09-11T00:33:42.992482107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d8d949f6-7w49n,Uid:631ba79f-1713-4aa1-b2f3-fa47da63363f,Namespace:calico-system,Attempt:0,}" Sep 11 00:33:43.128717 systemd-networkd[1499]: calie9afb6f9e21: Link UP Sep 11 00:33:43.129267 systemd-networkd[1499]: calie9afb6f9e21: Gained carrier Sep 11 
00:33:43.143812 containerd[1595]: 2025-09-11 00:33:43.015 [INFO][3928] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:33:43.143812 containerd[1595]: 2025-09-11 00:33:43.032 [INFO][3928] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--54d8d949f6--7w49n-eth0 whisker-54d8d949f6- calico-system 631ba79f-1713-4aa1-b2f3-fa47da63363f 874 0 2025-09-11 00:33:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54d8d949f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-54d8d949f6-7w49n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie9afb6f9e21 [] [] }} ContainerID="a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" Namespace="calico-system" Pod="whisker-54d8d949f6-7w49n" WorkloadEndpoint="localhost-k8s-whisker--54d8d949f6--7w49n-" Sep 11 00:33:43.143812 containerd[1595]: 2025-09-11 00:33:43.032 [INFO][3928] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" Namespace="calico-system" Pod="whisker-54d8d949f6-7w49n" WorkloadEndpoint="localhost-k8s-whisker--54d8d949f6--7w49n-eth0" Sep 11 00:33:43.143812 containerd[1595]: 2025-09-11 00:33:43.088 [INFO][3942] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" HandleID="k8s-pod-network.a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" Workload="localhost-k8s-whisker--54d8d949f6--7w49n-eth0" Sep 11 00:33:43.144015 containerd[1595]: 2025-09-11 00:33:43.088 [INFO][3942] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" 
HandleID="k8s-pod-network.a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" Workload="localhost-k8s-whisker--54d8d949f6--7w49n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-54d8d949f6-7w49n", "timestamp":"2025-09-11 00:33:43.088304811 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:33:43.144015 containerd[1595]: 2025-09-11 00:33:43.088 [INFO][3942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:33:43.144015 containerd[1595]: 2025-09-11 00:33:43.088 [INFO][3942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:33:43.144015 containerd[1595]: 2025-09-11 00:33:43.089 [INFO][3942] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:33:43.144015 containerd[1595]: 2025-09-11 00:33:43.096 [INFO][3942] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" host="localhost" Sep 11 00:33:43.144015 containerd[1595]: 2025-09-11 00:33:43.101 [INFO][3942] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:33:43.144015 containerd[1595]: 2025-09-11 00:33:43.105 [INFO][3942] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:33:43.144015 containerd[1595]: 2025-09-11 00:33:43.106 [INFO][3942] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:43.144015 containerd[1595]: 2025-09-11 00:33:43.108 [INFO][3942] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:43.144015 containerd[1595]: 2025-09-11 00:33:43.108 
[INFO][3942] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" host="localhost" Sep 11 00:33:43.144237 containerd[1595]: 2025-09-11 00:33:43.109 [INFO][3942] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554 Sep 11 00:33:43.144237 containerd[1595]: 2025-09-11 00:33:43.113 [INFO][3942] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" host="localhost" Sep 11 00:33:43.144237 containerd[1595]: 2025-09-11 00:33:43.118 [INFO][3942] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" host="localhost" Sep 11 00:33:43.144237 containerd[1595]: 2025-09-11 00:33:43.118 [INFO][3942] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" host="localhost" Sep 11 00:33:43.144237 containerd[1595]: 2025-09-11 00:33:43.118 [INFO][3942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:33:43.144237 containerd[1595]: 2025-09-11 00:33:43.118 [INFO][3942] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" HandleID="k8s-pod-network.a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" Workload="localhost-k8s-whisker--54d8d949f6--7w49n-eth0" Sep 11 00:33:43.144393 containerd[1595]: 2025-09-11 00:33:43.121 [INFO][3928] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" Namespace="calico-system" Pod="whisker-54d8d949f6-7w49n" WorkloadEndpoint="localhost-k8s-whisker--54d8d949f6--7w49n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54d8d949f6--7w49n-eth0", GenerateName:"whisker-54d8d949f6-", Namespace:"calico-system", SelfLink:"", UID:"631ba79f-1713-4aa1-b2f3-fa47da63363f", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54d8d949f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-54d8d949f6-7w49n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie9afb6f9e21", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:43.144393 containerd[1595]: 2025-09-11 00:33:43.121 [INFO][3928] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" Namespace="calico-system" Pod="whisker-54d8d949f6-7w49n" WorkloadEndpoint="localhost-k8s-whisker--54d8d949f6--7w49n-eth0" Sep 11 00:33:43.144479 containerd[1595]: 2025-09-11 00:33:43.121 [INFO][3928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9afb6f9e21 ContainerID="a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" Namespace="calico-system" Pod="whisker-54d8d949f6-7w49n" WorkloadEndpoint="localhost-k8s-whisker--54d8d949f6--7w49n-eth0" Sep 11 00:33:43.144479 containerd[1595]: 2025-09-11 00:33:43.129 [INFO][3928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" Namespace="calico-system" Pod="whisker-54d8d949f6-7w49n" WorkloadEndpoint="localhost-k8s-whisker--54d8d949f6--7w49n-eth0" Sep 11 00:33:43.144524 containerd[1595]: 2025-09-11 00:33:43.130 [INFO][3928] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" Namespace="calico-system" Pod="whisker-54d8d949f6-7w49n" WorkloadEndpoint="localhost-k8s-whisker--54d8d949f6--7w49n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54d8d949f6--7w49n-eth0", GenerateName:"whisker-54d8d949f6-", Namespace:"calico-system", SelfLink:"", UID:"631ba79f-1713-4aa1-b2f3-fa47da63363f", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 42, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54d8d949f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554", Pod:"whisker-54d8d949f6-7w49n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie9afb6f9e21", MAC:"9a:5c:42:6e:fc:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:43.144581 containerd[1595]: 2025-09-11 00:33:43.140 [INFO][3928] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" Namespace="calico-system" Pod="whisker-54d8d949f6-7w49n" WorkloadEndpoint="localhost-k8s-whisker--54d8d949f6--7w49n-eth0" Sep 11 00:33:43.281417 containerd[1595]: time="2025-09-11T00:33:43.279724988Z" level=info msg="connecting to shim a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554" address="unix:///run/containerd/s/0aa8d416ad819402977d2d77fef62f3eb33f4c41c1573707447d9b18889ff8fc" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:43.330834 systemd[1]: Started cri-containerd-a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554.scope - libcontainer container a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554. 
Sep 11 00:33:43.357937 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:33:43.515250 kubelet[2740]: I0911 00:33:43.515202 2740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="909a099a-bf32-407d-91d7-f0625e04aba7" path="/var/lib/kubelet/pods/909a099a-bf32-407d-91d7-f0625e04aba7/volumes" Sep 11 00:33:43.562133 containerd[1595]: time="2025-09-11T00:33:43.562002187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d8d949f6-7w49n,Uid:631ba79f-1713-4aa1-b2f3-fa47da63363f,Namespace:calico-system,Attempt:0,} returns sandbox id \"a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554\"" Sep 11 00:33:43.563943 containerd[1595]: time="2025-09-11T00:33:43.563897566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 11 00:33:43.716842 containerd[1595]: time="2025-09-11T00:33:43.716758818Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd854075bea915909ec80035715de44d8b136105066b1750cf0330f1d706b179\" id:\"2b1c392dfc06240d9dc7494f13aef0a44eccebded220da52987c94b5e8485079\" pid:4115 exit_status:1 exited_at:{seconds:1757550823 nanos:716205627}" Sep 11 00:33:44.900522 systemd-networkd[1499]: calie9afb6f9e21: Gained IPv6LL Sep 11 00:33:45.030016 containerd[1595]: time="2025-09-11T00:33:45.029965083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:45.030940 containerd[1595]: time="2025-09-11T00:33:45.030909560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 11 00:33:45.032484 containerd[1595]: time="2025-09-11T00:33:45.032437235Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:45.034595 
containerd[1595]: time="2025-09-11T00:33:45.034565019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:45.035175 containerd[1595]: time="2025-09-11T00:33:45.035133469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.471194265s" Sep 11 00:33:45.035205 containerd[1595]: time="2025-09-11T00:33:45.035174476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 11 00:33:45.036943 containerd[1595]: time="2025-09-11T00:33:45.036902667Z" level=info msg="CreateContainer within sandbox \"a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 11 00:33:45.043974 containerd[1595]: time="2025-09-11T00:33:45.043928517Z" level=info msg="Container e949b9b681e45600063e5c9be166132f3ae155680cf7723af9276c309013ea2b: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:45.052097 containerd[1595]: time="2025-09-11T00:33:45.052055228Z" level=info msg="CreateContainer within sandbox \"a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e949b9b681e45600063e5c9be166132f3ae155680cf7723af9276c309013ea2b\"" Sep 11 00:33:45.052543 containerd[1595]: time="2025-09-11T00:33:45.052504173Z" level=info msg="StartContainer for \"e949b9b681e45600063e5c9be166132f3ae155680cf7723af9276c309013ea2b\"" Sep 11 00:33:45.053436 containerd[1595]: 
time="2025-09-11T00:33:45.053394739Z" level=info msg="connecting to shim e949b9b681e45600063e5c9be166132f3ae155680cf7723af9276c309013ea2b" address="unix:///run/containerd/s/0aa8d416ad819402977d2d77fef62f3eb33f4c41c1573707447d9b18889ff8fc" protocol=ttrpc version=3 Sep 11 00:33:45.077433 systemd[1]: Started cri-containerd-e949b9b681e45600063e5c9be166132f3ae155680cf7723af9276c309013ea2b.scope - libcontainer container e949b9b681e45600063e5c9be166132f3ae155680cf7723af9276c309013ea2b. Sep 11 00:33:45.121477 containerd[1595]: time="2025-09-11T00:33:45.121439801Z" level=info msg="StartContainer for \"e949b9b681e45600063e5c9be166132f3ae155680cf7723af9276c309013ea2b\" returns successfully" Sep 11 00:33:45.122633 containerd[1595]: time="2025-09-11T00:33:45.122603410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 11 00:33:46.468165 systemd[1]: Started sshd@7-10.0.0.147:22-10.0.0.1:33964.service - OpenSSH per-connection server daemon (10.0.0.1:33964). Sep 11 00:33:46.519341 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 33964 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:33:46.520968 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:46.525213 systemd-logind[1583]: New session 8 of user core. Sep 11 00:33:46.532470 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 11 00:33:46.698886 sshd[4222]: Connection closed by 10.0.0.1 port 33964 Sep 11 00:33:46.699176 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:46.703967 systemd[1]: sshd@7-10.0.0.147:22-10.0.0.1:33964.service: Deactivated successfully. Sep 11 00:33:46.706017 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 00:33:46.707021 systemd-logind[1583]: Session 8 logged out. Waiting for processes to exit. Sep 11 00:33:46.708374 systemd-logind[1583]: Removed session 8. 
Sep 11 00:33:47.344557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2409484747.mount: Deactivated successfully. Sep 11 00:33:47.645055 containerd[1595]: time="2025-09-11T00:33:47.645006264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:47.646005 containerd[1595]: time="2025-09-11T00:33:47.645947023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 11 00:33:47.647243 containerd[1595]: time="2025-09-11T00:33:47.647182056Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:47.649693 containerd[1595]: time="2025-09-11T00:33:47.649658835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:47.650348 containerd[1595]: time="2025-09-11T00:33:47.650299650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.527671072s" Sep 11 00:33:47.650348 containerd[1595]: time="2025-09-11T00:33:47.650347419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 11 00:33:47.652163 containerd[1595]: time="2025-09-11T00:33:47.652127508Z" level=info msg="CreateContainer within sandbox 
\"a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 11 00:33:47.660129 containerd[1595]: time="2025-09-11T00:33:47.660084794Z" level=info msg="Container b14165764f6203d556146005b3963899d39c20bc7eed22fdf66d362e95c476e6: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:47.674521 containerd[1595]: time="2025-09-11T00:33:47.674494947Z" level=info msg="CreateContainer within sandbox \"a74c845cffe7c82c35e11824e7ffbdb5d50f0d5ebaa39c2e6b6f60757922b554\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b14165764f6203d556146005b3963899d39c20bc7eed22fdf66d362e95c476e6\"" Sep 11 00:33:47.674970 containerd[1595]: time="2025-09-11T00:33:47.674880021Z" level=info msg="StartContainer for \"b14165764f6203d556146005b3963899d39c20bc7eed22fdf66d362e95c476e6\"" Sep 11 00:33:47.675822 containerd[1595]: time="2025-09-11T00:33:47.675799631Z" level=info msg="connecting to shim b14165764f6203d556146005b3963899d39c20bc7eed22fdf66d362e95c476e6" address="unix:///run/containerd/s/0aa8d416ad819402977d2d77fef62f3eb33f4c41c1573707447d9b18889ff8fc" protocol=ttrpc version=3 Sep 11 00:33:47.697482 systemd[1]: Started cri-containerd-b14165764f6203d556146005b3963899d39c20bc7eed22fdf66d362e95c476e6.scope - libcontainer container b14165764f6203d556146005b3963899d39c20bc7eed22fdf66d362e95c476e6. 
Sep 11 00:33:47.769186 containerd[1595]: time="2025-09-11T00:33:47.769125421Z" level=info msg="StartContainer for \"b14165764f6203d556146005b3963899d39c20bc7eed22fdf66d362e95c476e6\" returns successfully" Sep 11 00:33:48.513196 containerd[1595]: time="2025-09-11T00:33:48.513150649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d8d46d86-tb7cb,Uid:ab3ef2d8-99ba-4556-81b2-7b7f1b60d6c6,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:33:48.513407 containerd[1595]: time="2025-09-11T00:33:48.513368048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-4x68b,Uid:32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24,Namespace:calico-system,Attempt:0,}" Sep 11 00:33:48.618397 systemd-networkd[1499]: cali8c84f44ea39: Link UP Sep 11 00:33:48.619168 systemd-networkd[1499]: cali8c84f44ea39: Gained carrier Sep 11 00:33:48.632848 containerd[1595]: 2025-09-11 00:33:48.538 [INFO][4327] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:33:48.632848 containerd[1595]: 2025-09-11 00:33:48.551 [INFO][4327] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56d8d46d86--tb7cb-eth0 calico-apiserver-56d8d46d86- calico-apiserver ab3ef2d8-99ba-4556-81b2-7b7f1b60d6c6 810 0 2025-09-11 00:33:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56d8d46d86 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56d8d46d86-tb7cb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8c84f44ea39 [] [] }} ContainerID="043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-tb7cb" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--tb7cb-" Sep 
11 00:33:48.632848 containerd[1595]: 2025-09-11 00:33:48.551 [INFO][4327] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-tb7cb" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--tb7cb-eth0" Sep 11 00:33:48.632848 containerd[1595]: 2025-09-11 00:33:48.577 [INFO][4357] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" HandleID="k8s-pod-network.043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" Workload="localhost-k8s-calico--apiserver--56d8d46d86--tb7cb-eth0" Sep 11 00:33:48.633032 containerd[1595]: 2025-09-11 00:33:48.577 [INFO][4357] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" HandleID="k8s-pod-network.043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" Workload="localhost-k8s-calico--apiserver--56d8d46d86--tb7cb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56d8d46d86-tb7cb", "timestamp":"2025-09-11 00:33:48.577458329 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:33:48.633032 containerd[1595]: 2025-09-11 00:33:48.577 [INFO][4357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:33:48.633032 containerd[1595]: 2025-09-11 00:33:48.577 [INFO][4357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:33:48.633032 containerd[1595]: 2025-09-11 00:33:48.577 [INFO][4357] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:33:48.633032 containerd[1595]: 2025-09-11 00:33:48.587 [INFO][4357] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" host="localhost" Sep 11 00:33:48.633032 containerd[1595]: 2025-09-11 00:33:48.591 [INFO][4357] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:33:48.633032 containerd[1595]: 2025-09-11 00:33:48.594 [INFO][4357] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:33:48.633032 containerd[1595]: 2025-09-11 00:33:48.596 [INFO][4357] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:48.633032 containerd[1595]: 2025-09-11 00:33:48.598 [INFO][4357] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:48.633032 containerd[1595]: 2025-09-11 00:33:48.598 [INFO][4357] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" host="localhost" Sep 11 00:33:48.633235 containerd[1595]: 2025-09-11 00:33:48.599 [INFO][4357] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73 Sep 11 00:33:48.633235 containerd[1595]: 2025-09-11 00:33:48.603 [INFO][4357] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" host="localhost" Sep 11 00:33:48.633235 containerd[1595]: 2025-09-11 00:33:48.609 [INFO][4357] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" host="localhost" Sep 11 00:33:48.633235 containerd[1595]: 2025-09-11 00:33:48.610 [INFO][4357] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" host="localhost" Sep 11 00:33:48.633235 containerd[1595]: 2025-09-11 00:33:48.610 [INFO][4357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:33:48.633235 containerd[1595]: 2025-09-11 00:33:48.610 [INFO][4357] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" HandleID="k8s-pod-network.043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" Workload="localhost-k8s-calico--apiserver--56d8d46d86--tb7cb-eth0" Sep 11 00:33:48.633405 containerd[1595]: 2025-09-11 00:33:48.613 [INFO][4327] cni-plugin/k8s.go 418: Populated endpoint ContainerID="043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-tb7cb" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--tb7cb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d8d46d86--tb7cb-eth0", GenerateName:"calico-apiserver-56d8d46d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab3ef2d8-99ba-4556-81b2-7b7f1b60d6c6", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56d8d46d86", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56d8d46d86-tb7cb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c84f44ea39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:48.633467 containerd[1595]: 2025-09-11 00:33:48.613 [INFO][4327] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-tb7cb" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--tb7cb-eth0" Sep 11 00:33:48.633467 containerd[1595]: 2025-09-11 00:33:48.613 [INFO][4327] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c84f44ea39 ContainerID="043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-tb7cb" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--tb7cb-eth0" Sep 11 00:33:48.633467 containerd[1595]: 2025-09-11 00:33:48.619 [INFO][4327] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-tb7cb" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--tb7cb-eth0" Sep 11 00:33:48.633537 containerd[1595]: 2025-09-11 00:33:48.619 [INFO][4327] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-tb7cb" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--tb7cb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d8d46d86--tb7cb-eth0", GenerateName:"calico-apiserver-56d8d46d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab3ef2d8-99ba-4556-81b2-7b7f1b60d6c6", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56d8d46d86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73", Pod:"calico-apiserver-56d8d46d86-tb7cb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c84f44ea39", MAC:"0e:70:d0:ca:7a:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:48.633586 containerd[1595]: 2025-09-11 00:33:48.629 [INFO][4327] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-tb7cb" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--tb7cb-eth0" Sep 11 00:33:48.659340 containerd[1595]: time="2025-09-11T00:33:48.659053648Z" level=info msg="connecting to shim 043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73" address="unix:///run/containerd/s/830b9d4633473dcc87c08efb024095a709676f2c2211e148cb38f5fd073c0372" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:48.690491 systemd[1]: Started cri-containerd-043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73.scope - libcontainer container 043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73. Sep 11 00:33:48.705384 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:33:48.719896 systemd-networkd[1499]: cali42aa6e2b815: Link UP Sep 11 00:33:48.721098 systemd-networkd[1499]: cali42aa6e2b815: Gained carrier Sep 11 00:33:48.729074 kubelet[2740]: I0911 00:33:48.729018 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54d8d949f6-7w49n" podStartSLOduration=2.641488459 podStartE2EDuration="6.728998598s" podCreationTimestamp="2025-09-11 00:33:42 +0000 UTC" firstStartedPulling="2025-09-11 00:33:43.56346963 +0000 UTC m=+36.133176668" lastFinishedPulling="2025-09-11 00:33:47.650979769 +0000 UTC m=+40.220686807" observedRunningTime="2025-09-11 00:33:48.655382514 +0000 UTC m=+41.225089552" watchObservedRunningTime="2025-09-11 00:33:48.728998598 +0000 UTC m=+41.298705636" Sep 11 00:33:48.736536 containerd[1595]: 2025-09-11 00:33:48.547 [INFO][4338] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:33:48.736536 containerd[1595]: 2025-09-11 00:33:48.559 [INFO][4338] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-goldmane--7988f88666--4x68b-eth0 goldmane-7988f88666- calico-system 32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24 803 0 2025-09-11 00:33:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-4x68b eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali42aa6e2b815 [] [] }} ContainerID="ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" Namespace="calico-system" Pod="goldmane-7988f88666-4x68b" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4x68b-" Sep 11 00:33:48.736536 containerd[1595]: 2025-09-11 00:33:48.560 [INFO][4338] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" Namespace="calico-system" Pod="goldmane-7988f88666-4x68b" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4x68b-eth0" Sep 11 00:33:48.736536 containerd[1595]: 2025-09-11 00:33:48.588 [INFO][4364] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" HandleID="k8s-pod-network.ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" Workload="localhost-k8s-goldmane--7988f88666--4x68b-eth0" Sep 11 00:33:48.736742 containerd[1595]: 2025-09-11 00:33:48.588 [INFO][4364] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" HandleID="k8s-pod-network.ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" Workload="localhost-k8s-goldmane--7988f88666--4x68b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050ea20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-4x68b", "timestamp":"2025-09-11 
00:33:48.588231207 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:33:48.736742 containerd[1595]: 2025-09-11 00:33:48.588 [INFO][4364] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:33:48.736742 containerd[1595]: 2025-09-11 00:33:48.610 [INFO][4364] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:33:48.736742 containerd[1595]: 2025-09-11 00:33:48.610 [INFO][4364] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:33:48.736742 containerd[1595]: 2025-09-11 00:33:48.688 [INFO][4364] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" host="localhost" Sep 11 00:33:48.736742 containerd[1595]: 2025-09-11 00:33:48.695 [INFO][4364] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:33:48.736742 containerd[1595]: 2025-09-11 00:33:48.699 [INFO][4364] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:33:48.736742 containerd[1595]: 2025-09-11 00:33:48.701 [INFO][4364] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:48.736742 containerd[1595]: 2025-09-11 00:33:48.703 [INFO][4364] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:48.736742 containerd[1595]: 2025-09-11 00:33:48.703 [INFO][4364] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" host="localhost" Sep 11 00:33:48.736953 containerd[1595]: 2025-09-11 00:33:48.704 [INFO][4364] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6 Sep 11 00:33:48.736953 containerd[1595]: 2025-09-11 00:33:48.708 [INFO][4364] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" host="localhost" Sep 11 00:33:48.736953 containerd[1595]: 2025-09-11 00:33:48.714 [INFO][4364] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" host="localhost" Sep 11 00:33:48.736953 containerd[1595]: 2025-09-11 00:33:48.714 [INFO][4364] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" host="localhost" Sep 11 00:33:48.736953 containerd[1595]: 2025-09-11 00:33:48.714 [INFO][4364] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:33:48.736953 containerd[1595]: 2025-09-11 00:33:48.714 [INFO][4364] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" HandleID="k8s-pod-network.ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" Workload="localhost-k8s-goldmane--7988f88666--4x68b-eth0"
Sep 11 00:33:48.737370 containerd[1595]: 2025-09-11 00:33:48.718 [INFO][4338] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" Namespace="calico-system" Pod="goldmane-7988f88666-4x68b" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4x68b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--4x68b-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-4x68b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali42aa6e2b815", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 00:33:48.737370 containerd[1595]: 2025-09-11 00:33:48.718 [INFO][4338] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" Namespace="calico-system" Pod="goldmane-7988f88666-4x68b" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4x68b-eth0"
Sep 11 00:33:48.737521 containerd[1595]: 2025-09-11 00:33:48.718 [INFO][4338] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42aa6e2b815 ContainerID="ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" Namespace="calico-system" Pod="goldmane-7988f88666-4x68b" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4x68b-eth0"
Sep 11 00:33:48.737521 containerd[1595]: 2025-09-11 00:33:48.720 [INFO][4338] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" Namespace="calico-system" Pod="goldmane-7988f88666-4x68b" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4x68b-eth0"
Sep 11 00:33:48.737562 containerd[1595]: 2025-09-11 00:33:48.720 [INFO][4338] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" Namespace="calico-system" Pod="goldmane-7988f88666-4x68b" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4x68b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--4x68b-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6", Pod:"goldmane-7988f88666-4x68b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali42aa6e2b815", MAC:"8a:7d:90:79:31:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 00:33:48.737620 containerd[1595]: 2025-09-11 00:33:48.731 [INFO][4338] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" Namespace="calico-system" Pod="goldmane-7988f88666-4x68b" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4x68b-eth0"
Sep 11 00:33:48.741402 containerd[1595]: time="2025-09-11T00:33:48.741350336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d8d46d86-tb7cb,Uid:ab3ef2d8-99ba-4556-81b2-7b7f1b60d6c6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73\""
Sep 11 00:33:48.743352 containerd[1595]: time="2025-09-11T00:33:48.743145571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 11 00:33:48.763805 containerd[1595]: time="2025-09-11T00:33:48.763682802Z" level=info msg="connecting to shim ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6" address="unix:///run/containerd/s/029d3823772cd907f8eefd58600b8b7171441d78f3ea13621797ac943839b6e9" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:33:48.792534 systemd[1]: Started cri-containerd-ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6.scope - libcontainer container ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6.
Sep 11 00:33:48.805026 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 11 00:33:48.848147 containerd[1595]: time="2025-09-11T00:33:48.848101730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-4x68b,Uid:32bf9bd0-e3e1-4ec6-a60e-bd5a84032d24,Namespace:calico-system,Attempt:0,} returns sandbox id \"ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6\""
Sep 11 00:33:49.512484 kubelet[2740]: E0911 00:33:49.512429 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:33:49.512960 containerd[1595]: time="2025-09-11T00:33:49.512906442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nvmvc,Uid:34335868-9f4e-46e3-a71b-58fb68457ce3,Namespace:kube-system,Attempt:0,}"
Sep 11 00:33:49.676454 systemd-networkd[1499]: cali9d8c84179ff: Link UP
Sep 11 00:33:49.678863 systemd-networkd[1499]: cali9d8c84179ff: Gained carrier
Sep 11 00:33:49.705232 containerd[1595]: 2025-09-11 00:33:49.593 [INFO][4514] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 11 00:33:49.705232 containerd[1595]: 2025-09-11 00:33:49.603 [INFO][4514] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--nvmvc-eth0 coredns-7c65d6cfc9- kube-system 34335868-9f4e-46e3-a71b-58fb68457ce3 807 0 2025-09-11 00:33:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-nvmvc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9d8c84179ff [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nvmvc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nvmvc-"
Sep 11 00:33:49.705232 containerd[1595]: 2025-09-11 00:33:49.603 [INFO][4514] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nvmvc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nvmvc-eth0"
Sep 11 00:33:49.705232 containerd[1595]: 2025-09-11 00:33:49.627 [INFO][4529] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" HandleID="k8s-pod-network.743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" Workload="localhost-k8s-coredns--7c65d6cfc9--nvmvc-eth0"
Sep 11 00:33:49.705779 containerd[1595]: 2025-09-11 00:33:49.628 [INFO][4529] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" HandleID="k8s-pod-network.743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" Workload="localhost-k8s-coredns--7c65d6cfc9--nvmvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f770), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-nvmvc", "timestamp":"2025-09-11 00:33:49.627904296 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 11 00:33:49.705779 containerd[1595]: 2025-09-11 00:33:49.628 [INFO][4529] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 11 00:33:49.705779 containerd[1595]: 2025-09-11 00:33:49.628 [INFO][4529] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 11 00:33:49.705779 containerd[1595]: 2025-09-11 00:33:49.628 [INFO][4529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 11 00:33:49.705779 containerd[1595]: 2025-09-11 00:33:49.634 [INFO][4529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" host="localhost"
Sep 11 00:33:49.705779 containerd[1595]: 2025-09-11 00:33:49.638 [INFO][4529] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 11 00:33:49.705779 containerd[1595]: 2025-09-11 00:33:49.641 [INFO][4529] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 11 00:33:49.705779 containerd[1595]: 2025-09-11 00:33:49.643 [INFO][4529] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 11 00:33:49.705779 containerd[1595]: 2025-09-11 00:33:49.645 [INFO][4529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 11 00:33:49.705779 containerd[1595]: 2025-09-11 00:33:49.645 [INFO][4529] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" host="localhost"
Sep 11 00:33:49.705992 containerd[1595]: 2025-09-11 00:33:49.646 [INFO][4529] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079
Sep 11 00:33:49.705992 containerd[1595]: 2025-09-11 00:33:49.650 [INFO][4529] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" host="localhost"
Sep 11 00:33:49.705992 containerd[1595]: 2025-09-11 00:33:49.658 [INFO][4529] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" host="localhost"
Sep 11 00:33:49.705992 containerd[1595]: 2025-09-11 00:33:49.658 [INFO][4529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" host="localhost"
Sep 11 00:33:49.705992 containerd[1595]: 2025-09-11 00:33:49.658 [INFO][4529] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 11 00:33:49.705992 containerd[1595]: 2025-09-11 00:33:49.658 [INFO][4529] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" HandleID="k8s-pod-network.743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" Workload="localhost-k8s-coredns--7c65d6cfc9--nvmvc-eth0"
Sep 11 00:33:49.706111 containerd[1595]: 2025-09-11 00:33:49.669 [INFO][4514] cni-plugin/k8s.go 418: Populated endpoint ContainerID="743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nvmvc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nvmvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nvmvc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"34335868-9f4e-46e3-a71b-58fb68457ce3", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-nvmvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d8c84179ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 00:33:49.706177 containerd[1595]: 2025-09-11 00:33:49.670 [INFO][4514] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nvmvc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nvmvc-eth0"
Sep 11 00:33:49.706177 containerd[1595]: 2025-09-11 00:33:49.670 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d8c84179ff ContainerID="743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nvmvc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nvmvc-eth0"
Sep 11 00:33:49.706177 containerd[1595]: 2025-09-11 00:33:49.678 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nvmvc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nvmvc-eth0"
Sep 11 00:33:49.706245 containerd[1595]: 2025-09-11 00:33:49.679 [INFO][4514] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nvmvc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nvmvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nvmvc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"34335868-9f4e-46e3-a71b-58fb68457ce3", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079", Pod:"coredns-7c65d6cfc9-nvmvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d8c84179ff", MAC:"82:ff:34:01:a4:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 00:33:49.706245 containerd[1595]: 2025-09-11 00:33:49.697 [INFO][4514] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nvmvc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nvmvc-eth0"
Sep 11 00:33:49.729479 containerd[1595]: time="2025-09-11T00:33:49.729438017Z" level=info msg="connecting to shim 743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079" address="unix:///run/containerd/s/5d5a145fbab246e0588fd33b3fa23ce659f2e0fff48fffdd7b037ba3e4eeec18" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:33:49.759465 systemd[1]: Started cri-containerd-743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079.scope - libcontainer container 743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079.
Sep 11 00:33:49.771530 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 11 00:33:49.801044 containerd[1595]: time="2025-09-11T00:33:49.801003101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nvmvc,Uid:34335868-9f4e-46e3-a71b-58fb68457ce3,Namespace:kube-system,Attempt:0,} returns sandbox id \"743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079\""
Sep 11 00:33:49.801671 kubelet[2740]: E0911 00:33:49.801653 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:33:49.803409 containerd[1595]: time="2025-09-11T00:33:49.803372275Z" level=info msg="CreateContainer within sandbox \"743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 11 00:33:49.814443 containerd[1595]: time="2025-09-11T00:33:49.814403706Z" level=info msg="Container e41e681e017696023c5bdf2cc46eab26b6e9d093a90106b311895e96c40505c2: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:33:49.821988 containerd[1595]: time="2025-09-11T00:33:49.821955023Z" level=info msg="CreateContainer within sandbox \"743c9fdecebd9ad998e63776972790f1341af5d854030ba6e2249923b63e7079\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e41e681e017696023c5bdf2cc46eab26b6e9d093a90106b311895e96c40505c2\""
Sep 11 00:33:49.822378 containerd[1595]: time="2025-09-11T00:33:49.822348162Z" level=info msg="StartContainer for \"e41e681e017696023c5bdf2cc46eab26b6e9d093a90106b311895e96c40505c2\""
Sep 11 00:33:49.823230 containerd[1595]: time="2025-09-11T00:33:49.823199052Z" level=info msg="connecting to shim e41e681e017696023c5bdf2cc46eab26b6e9d093a90106b311895e96c40505c2" address="unix:///run/containerd/s/5d5a145fbab246e0588fd33b3fa23ce659f2e0fff48fffdd7b037ba3e4eeec18" protocol=ttrpc version=3
Sep 11 00:33:49.845462 systemd[1]: Started cri-containerd-e41e681e017696023c5bdf2cc46eab26b6e9d093a90106b311895e96c40505c2.scope - libcontainer container e41e681e017696023c5bdf2cc46eab26b6e9d093a90106b311895e96c40505c2.
Sep 11 00:33:49.878021 containerd[1595]: time="2025-09-11T00:33:49.877980901Z" level=info msg="StartContainer for \"e41e681e017696023c5bdf2cc46eab26b6e9d093a90106b311895e96c40505c2\" returns successfully"
Sep 11 00:33:50.469482 systemd-networkd[1499]: cali8c84f44ea39: Gained IPv6LL
Sep 11 00:33:50.469824 systemd-networkd[1499]: cali42aa6e2b815: Gained IPv6LL
Sep 11 00:33:50.513511 containerd[1595]: time="2025-09-11T00:33:50.513465263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d66f559c5-sxjj5,Uid:a1e2438f-98aa-403b-a122-a27d7015cc11,Namespace:calico-system,Attempt:0,}"
Sep 11 00:33:50.519428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1510622558.mount: Deactivated successfully.
Sep 11 00:33:50.631280 systemd-networkd[1499]: cali41ccccac7dd: Link UP
Sep 11 00:33:50.631819 systemd-networkd[1499]: cali41ccccac7dd: Gained carrier
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.545 [INFO][4654] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.556 [INFO][4654] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5d66f559c5--sxjj5-eth0 calico-kube-controllers-5d66f559c5- calico-system a1e2438f-98aa-403b-a122-a27d7015cc11 806 0 2025-09-11 00:33:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d66f559c5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5d66f559c5-sxjj5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali41ccccac7dd [] [] }} ContainerID="31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" Namespace="calico-system" Pod="calico-kube-controllers-5d66f559c5-sxjj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d66f559c5--sxjj5-"
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.556 [INFO][4654] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" Namespace="calico-system" Pod="calico-kube-controllers-5d66f559c5-sxjj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d66f559c5--sxjj5-eth0"
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.585 [INFO][4667] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" HandleID="k8s-pod-network.31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" Workload="localhost-k8s-calico--kube--controllers--5d66f559c5--sxjj5-eth0"
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.586 [INFO][4667] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" HandleID="k8s-pod-network.31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" Workload="localhost-k8s-calico--kube--controllers--5d66f559c5--sxjj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e0fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5d66f559c5-sxjj5", "timestamp":"2025-09-11 00:33:50.585650232 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.586 [INFO][4667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.586 [INFO][4667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.586 [INFO][4667] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.591 [INFO][4667] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" host="localhost"
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.596 [INFO][4667] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.601 [INFO][4667] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.603 [INFO][4667] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.605 [INFO][4667] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.605 [INFO][4667] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" host="localhost"
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.609 [INFO][4667] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.612 [INFO][4667] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" host="localhost"
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.624 [INFO][4667] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" host="localhost"
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.624 [INFO][4667] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" host="localhost"
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.624 [INFO][4667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 11 00:33:50.646499 containerd[1595]: 2025-09-11 00:33:50.624 [INFO][4667] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" HandleID="k8s-pod-network.31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" Workload="localhost-k8s-calico--kube--controllers--5d66f559c5--sxjj5-eth0"
Sep 11 00:33:50.647430 containerd[1595]: 2025-09-11 00:33:50.628 [INFO][4654] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" Namespace="calico-system" Pod="calico-kube-controllers-5d66f559c5-sxjj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d66f559c5--sxjj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d66f559c5--sxjj5-eth0", GenerateName:"calico-kube-controllers-5d66f559c5-", Namespace:"calico-system", SelfLink:"", UID:"a1e2438f-98aa-403b-a122-a27d7015cc11", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d66f559c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5d66f559c5-sxjj5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41ccccac7dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 00:33:50.647430 containerd[1595]: 2025-09-11 00:33:50.628 [INFO][4654] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" Namespace="calico-system" Pod="calico-kube-controllers-5d66f559c5-sxjj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d66f559c5--sxjj5-eth0"
Sep 11 00:33:50.647430 containerd[1595]: 2025-09-11 00:33:50.628 [INFO][4654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41ccccac7dd ContainerID="31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" Namespace="calico-system" Pod="calico-kube-controllers-5d66f559c5-sxjj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d66f559c5--sxjj5-eth0"
Sep 11 00:33:50.647430 containerd[1595]: 2025-09-11 00:33:50.631 [INFO][4654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" Namespace="calico-system" Pod="calico-kube-controllers-5d66f559c5-sxjj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d66f559c5--sxjj5-eth0"
Sep 11 00:33:50.647430 containerd[1595]: 2025-09-11 00:33:50.632 [INFO][4654] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" Namespace="calico-system" Pod="calico-kube-controllers-5d66f559c5-sxjj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d66f559c5--sxjj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d66f559c5--sxjj5-eth0", GenerateName:"calico-kube-controllers-5d66f559c5-", Namespace:"calico-system", SelfLink:"", UID:"a1e2438f-98aa-403b-a122-a27d7015cc11", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d66f559c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8", Pod:"calico-kube-controllers-5d66f559c5-sxjj5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41ccccac7dd", MAC:"46:8d:d1:90:f6:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 00:33:50.647430 containerd[1595]: 2025-09-11 00:33:50.642 [INFO][4654] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" Namespace="calico-system" Pod="calico-kube-controllers-5d66f559c5-sxjj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d66f559c5--sxjj5-eth0"
Sep 11 00:33:50.653535 kubelet[2740]: E0911 00:33:50.653380 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:33:50.683000 kubelet[2740]: I0911 00:33:50.682014 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nvmvc" podStartSLOduration=36.681998874 podStartE2EDuration="36.681998874s" podCreationTimestamp="2025-09-11 00:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:33:50.68168861 +0000 UTC m=+43.251395648" watchObservedRunningTime="2025-09-11 00:33:50.681998874 +0000 UTC m=+43.251705912"
Sep 11 00:33:50.729689 containerd[1595]: time="2025-09-11T00:33:50.729555163Z" level=info msg="connecting to shim 31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8" address="unix:///run/containerd/s/b85fb44253fb2b78f3c8fefaa6d4f8eadd7efb441e6f68b99f8789a2c01d80ec" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:33:50.765545 systemd[1]: Started cri-containerd-31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8.scope - libcontainer container 31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8.
Sep 11 00:33:50.778563 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 11 00:33:50.812196 containerd[1595]: time="2025-09-11T00:33:50.812121857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d66f559c5-sxjj5,Uid:a1e2438f-98aa-403b-a122-a27d7015cc11,Namespace:calico-system,Attempt:0,} returns sandbox id \"31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8\""
Sep 11 00:33:50.970776 containerd[1595]: time="2025-09-11T00:33:50.970717075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:50.971493 containerd[1595]: time="2025-09-11T00:33:50.971450784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 11 00:33:50.972588 containerd[1595]: time="2025-09-11T00:33:50.972535834Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:50.974838 containerd[1595]: time="2025-09-11T00:33:50.974799880Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:50.975484 containerd[1595]: time="2025-09-11T00:33:50.975450393Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.232261681s"
Sep 11 00:33:50.975484 containerd[1595]: time="2025-09-11T00:33:50.975482583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 11 00:33:50.976416 containerd[1595]: time="2025-09-11T00:33:50.976380000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 11 00:33:50.977259 containerd[1595]: time="2025-09-11T00:33:50.977234937Z" level=info msg="CreateContainer within sandbox \"043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 11 00:33:50.985374 containerd[1595]: time="2025-09-11T00:33:50.985276034Z" level=info msg="Container 2e12e200a25921a4a52560fdfaae0406673bb0ad9b39b5bedd0d2b96f60876dc: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:33:50.998093 containerd[1595]: time="2025-09-11T00:33:50.998041040Z" level=info msg="CreateContainer within sandbox \"043365398bef3da8ffd46b51a0f40e5ff9da48aca7462261b7162a6738762f73\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2e12e200a25921a4a52560fdfaae0406673bb0ad9b39b5bedd0d2b96f60876dc\""
Sep 11 00:33:51.000214 containerd[1595]: time="2025-09-11T00:33:50.999783927Z" level=info msg="StartContainer for \"2e12e200a25921a4a52560fdfaae0406673bb0ad9b39b5bedd0d2b96f60876dc\""
Sep 11 00:33:51.016912 containerd[1595]: time="2025-09-11T00:33:51.016870413Z" level=info msg="connecting to shim 2e12e200a25921a4a52560fdfaae0406673bb0ad9b39b5bedd0d2b96f60876dc" address="unix:///run/containerd/s/830b9d4633473dcc87c08efb024095a709676f2c2211e148cb38f5fd073c0372" protocol=ttrpc version=3
Sep 11 00:33:51.051268 systemd[1]: Started cri-containerd-2e12e200a25921a4a52560fdfaae0406673bb0ad9b39b5bedd0d2b96f60876dc.scope - libcontainer container 2e12e200a25921a4a52560fdfaae0406673bb0ad9b39b5bedd0d2b96f60876dc.
Sep 11 00:33:51.115842 containerd[1595]: time="2025-09-11T00:33:51.115804979Z" level=info msg="StartContainer for \"2e12e200a25921a4a52560fdfaae0406673bb0ad9b39b5bedd0d2b96f60876dc\" returns successfully"
Sep 11 00:33:51.512953 kubelet[2740]: E0911 00:33:51.512829 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:33:51.517944 containerd[1595]: time="2025-09-11T00:33:51.513700195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d8d46d86-xt8mj,Uid:73ab4f26-4242-414c-84be-d112bb35aa1e,Namespace:calico-apiserver,Attempt:0,}"
Sep 11 00:33:51.517944 containerd[1595]: time="2025-09-11T00:33:51.513764276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vqsdg,Uid:65546ac2-cdf2-4628-acd4-59736de8e8fe,Namespace:kube-system,Attempt:0,}"
Sep 11 00:33:51.632300 systemd-networkd[1499]: cali869266f428c: Link UP
Sep 11 00:33:51.633501 systemd-networkd[1499]: cali869266f428c: Gained carrier
Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.547 [INFO][4799] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.560 [INFO][4799] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56d8d46d86--xt8mj-eth0 calico-apiserver-56d8d46d86- calico-apiserver 73ab4f26-4242-414c-84be-d112bb35aa1e 809 0 2025-09-11 00:33:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56d8d46d86 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56d8d46d86-xt8mj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali869266f428c
[] [] }} ContainerID="5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-xt8mj" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--xt8mj-" Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.560 [INFO][4799] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-xt8mj" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--xt8mj-eth0" Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.593 [INFO][4827] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" HandleID="k8s-pod-network.5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" Workload="localhost-k8s-calico--apiserver--56d8d46d86--xt8mj-eth0" Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.594 [INFO][4827] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" HandleID="k8s-pod-network.5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" Workload="localhost-k8s-calico--apiserver--56d8d46d86--xt8mj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56d8d46d86-xt8mj", "timestamp":"2025-09-11 00:33:51.593484426 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.594 [INFO][4827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.594 [INFO][4827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.594 [INFO][4827] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.600 [INFO][4827] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" host="localhost" Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.604 [INFO][4827] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.608 [INFO][4827] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.610 [INFO][4827] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.612 [INFO][4827] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.612 [INFO][4827] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" host="localhost" Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.614 [INFO][4827] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093 Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.617 [INFO][4827] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" host="localhost" Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.623 [INFO][4827] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" host="localhost" Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.623 [INFO][4827] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" host="localhost" Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.623 [INFO][4827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:33:51.648850 containerd[1595]: 2025-09-11 00:33:51.623 [INFO][4827] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" HandleID="k8s-pod-network.5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" Workload="localhost-k8s-calico--apiserver--56d8d46d86--xt8mj-eth0" Sep 11 00:33:51.649488 containerd[1595]: 2025-09-11 00:33:51.629 [INFO][4799] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-xt8mj" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--xt8mj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d8d46d86--xt8mj-eth0", GenerateName:"calico-apiserver-56d8d46d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"73ab4f26-4242-414c-84be-d112bb35aa1e", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"56d8d46d86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56d8d46d86-xt8mj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali869266f428c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:51.649488 containerd[1595]: 2025-09-11 00:33:51.630 [INFO][4799] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-xt8mj" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--xt8mj-eth0" Sep 11 00:33:51.649488 containerd[1595]: 2025-09-11 00:33:51.630 [INFO][4799] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali869266f428c ContainerID="5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-xt8mj" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--xt8mj-eth0" Sep 11 00:33:51.649488 containerd[1595]: 2025-09-11 00:33:51.634 [INFO][4799] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-xt8mj" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--xt8mj-eth0" Sep 11 00:33:51.649488 
containerd[1595]: 2025-09-11 00:33:51.634 [INFO][4799] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-xt8mj" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--xt8mj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d8d46d86--xt8mj-eth0", GenerateName:"calico-apiserver-56d8d46d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"73ab4f26-4242-414c-84be-d112bb35aa1e", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56d8d46d86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093", Pod:"calico-apiserver-56d8d46d86-xt8mj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali869266f428c", MAC:"8e:6b:f5:5d:5e:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:51.649488 containerd[1595]: 2025-09-11 
00:33:51.645 [INFO][4799] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" Namespace="calico-apiserver" Pod="calico-apiserver-56d8d46d86-xt8mj" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8d46d86--xt8mj-eth0" Sep 11 00:33:51.659232 kubelet[2740]: E0911 00:33:51.659197 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:33:51.667559 kubelet[2740]: I0911 00:33:51.667507 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56d8d46d86-tb7cb" podStartSLOduration=27.434094284 podStartE2EDuration="29.667490017s" podCreationTimestamp="2025-09-11 00:33:22 +0000 UTC" firstStartedPulling="2025-09-11 00:33:48.742795974 +0000 UTC m=+41.312503012" lastFinishedPulling="2025-09-11 00:33:50.976191707 +0000 UTC m=+43.545898745" observedRunningTime="2025-09-11 00:33:51.667175114 +0000 UTC m=+44.236882152" watchObservedRunningTime="2025-09-11 00:33:51.667490017 +0000 UTC m=+44.237197055" Sep 11 00:33:51.682758 containerd[1595]: time="2025-09-11T00:33:51.682688303Z" level=info msg="connecting to shim 5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093" address="unix:///run/containerd/s/950b0967d6eec49e692601fa5a4215fccc9b777778757b0b3c81036f5b7e497f" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:51.684590 systemd-networkd[1499]: cali9d8c84179ff: Gained IPv6LL Sep 11 00:33:51.721583 systemd[1]: Started cri-containerd-5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093.scope - libcontainer container 5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093. Sep 11 00:33:51.723601 systemd[1]: Started sshd@8-10.0.0.147:22-10.0.0.1:47364.service - OpenSSH per-connection server daemon (10.0.0.1:47364). 
Sep 11 00:33:51.739511 systemd-networkd[1499]: cali5fac1109ed6: Link UP Sep 11 00:33:51.740112 systemd-networkd[1499]: cali5fac1109ed6: Gained carrier Sep 11 00:33:51.755805 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.552 [INFO][4811] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.564 [INFO][4811] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--vqsdg-eth0 coredns-7c65d6cfc9- kube-system 65546ac2-cdf2-4628-acd4-59736de8e8fe 808 0 2025-09-11 00:33:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-vqsdg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5fac1109ed6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqsdg" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqsdg-" Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.564 [INFO][4811] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqsdg" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqsdg-eth0" Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.596 [INFO][4834] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" HandleID="k8s-pod-network.a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" 
Workload="localhost-k8s-coredns--7c65d6cfc9--vqsdg-eth0" Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.596 [INFO][4834] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" HandleID="k8s-pod-network.a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" Workload="localhost-k8s-coredns--7c65d6cfc9--vqsdg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-vqsdg", "timestamp":"2025-09-11 00:33:51.596351836 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.596 [INFO][4834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.623 [INFO][4834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.623 [INFO][4834] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.700 [INFO][4834] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" host="localhost" Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.705 [INFO][4834] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.715 [INFO][4834] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.717 [INFO][4834] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.720 [INFO][4834] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.720 [INFO][4834] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" host="localhost" Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.721 [INFO][4834] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8 Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.724 [INFO][4834] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" host="localhost" Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.731 [INFO][4834] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" host="localhost" Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.731 [INFO][4834] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" host="localhost" Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.731 [INFO][4834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:33:51.758670 containerd[1595]: 2025-09-11 00:33:51.731 [INFO][4834] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" HandleID="k8s-pod-network.a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" Workload="localhost-k8s-coredns--7c65d6cfc9--vqsdg-eth0" Sep 11 00:33:51.760147 containerd[1595]: 2025-09-11 00:33:51.735 [INFO][4811] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqsdg" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqsdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vqsdg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"65546ac2-cdf2-4628-acd4-59736de8e8fe", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-vqsdg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fac1109ed6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:51.760147 containerd[1595]: 2025-09-11 00:33:51.735 [INFO][4811] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqsdg" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqsdg-eth0" Sep 11 00:33:51.760147 containerd[1595]: 2025-09-11 00:33:51.735 [INFO][4811] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fac1109ed6 ContainerID="a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqsdg" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqsdg-eth0" Sep 11 00:33:51.760147 containerd[1595]: 2025-09-11 00:33:51.740 [INFO][4811] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqsdg" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqsdg-eth0" Sep 11 00:33:51.760147 containerd[1595]: 2025-09-11 00:33:51.740 [INFO][4811] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqsdg" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqsdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vqsdg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"65546ac2-cdf2-4628-acd4-59736de8e8fe", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8", Pod:"coredns-7c65d6cfc9-vqsdg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fac1109ed6", MAC:"b6:5e:e6:d2:e9:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:51.760147 containerd[1595]: 2025-09-11 00:33:51.751 [INFO][4811] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqsdg" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqsdg-eth0" Sep 11 00:33:51.783521 sshd[4884]: Accepted publickey for core from 10.0.0.1 port 47364 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:33:51.791878 sshd-session[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:33:51.797388 systemd-logind[1583]: New session 9 of user core. Sep 11 00:33:51.801460 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 11 00:33:51.935405 containerd[1595]: time="2025-09-11T00:33:51.935028468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d8d46d86-xt8mj,Uid:73ab4f26-4242-414c-84be-d112bb35aa1e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093\"" Sep 11 00:33:51.948129 containerd[1595]: time="2025-09-11T00:33:51.948079698Z" level=info msg="CreateContainer within sandbox \"5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:33:51.952327 containerd[1595]: time="2025-09-11T00:33:51.951787267Z" level=info msg="connecting to shim a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8" address="unix:///run/containerd/s/d3222196c0a0e1348c9b5f7b08d434ffdbab2b4322b78387845766b2358310ff" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:51.967645 containerd[1595]: time="2025-09-11T00:33:51.966772082Z" level=info msg="Container 6e784a7caf9784f8fe717beda2515680a8556f2481efd9069be2ba0ae2e60242: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:51.986607 systemd[1]: Started cri-containerd-a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8.scope - libcontainer container a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8. 
Sep 11 00:33:52.001063 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:33:52.015665 containerd[1595]: time="2025-09-11T00:33:52.015552545Z" level=info msg="CreateContainer within sandbox \"5840a00d9155b85949dbc78cf6ab6d30e207a37c9426e521463f2e5cbdfd5093\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6e784a7caf9784f8fe717beda2515680a8556f2481efd9069be2ba0ae2e60242\"" Sep 11 00:33:52.016210 containerd[1595]: time="2025-09-11T00:33:52.016176177Z" level=info msg="StartContainer for \"6e784a7caf9784f8fe717beda2515680a8556f2481efd9069be2ba0ae2e60242\"" Sep 11 00:33:52.017142 containerd[1595]: time="2025-09-11T00:33:52.017108289Z" level=info msg="connecting to shim 6e784a7caf9784f8fe717beda2515680a8556f2481efd9069be2ba0ae2e60242" address="unix:///run/containerd/s/950b0967d6eec49e692601fa5a4215fccc9b777778757b0b3c81036f5b7e497f" protocol=ttrpc version=3 Sep 11 00:33:52.025252 sshd[4907]: Connection closed by 10.0.0.1 port 47364 Sep 11 00:33:52.025889 sshd-session[4884]: pam_unix(sshd:session): session closed for user core Sep 11 00:33:52.031600 systemd[1]: sshd@8-10.0.0.147:22-10.0.0.1:47364.service: Deactivated successfully. Sep 11 00:33:52.032378 systemd-logind[1583]: Session 9 logged out. Waiting for processes to exit. Sep 11 00:33:52.036068 systemd[1]: session-9.scope: Deactivated successfully. Sep 11 00:33:52.039646 systemd-logind[1583]: Removed session 9. 
Sep 11 00:33:52.043725 containerd[1595]: time="2025-09-11T00:33:52.043630362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vqsdg,Uid:65546ac2-cdf2-4628-acd4-59736de8e8fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8\""
Sep 11 00:33:52.044595 kubelet[2740]: E0911 00:33:52.044573 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:33:52.047121 containerd[1595]: time="2025-09-11T00:33:52.047087409Z" level=info msg="CreateContainer within sandbox \"a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 11 00:33:52.051690 systemd[1]: Started cri-containerd-6e784a7caf9784f8fe717beda2515680a8556f2481efd9069be2ba0ae2e60242.scope - libcontainer container 6e784a7caf9784f8fe717beda2515680a8556f2481efd9069be2ba0ae2e60242.
Sep 11 00:33:52.058060 containerd[1595]: time="2025-09-11T00:33:52.058030344Z" level=info msg="Container 22a1cd96926ac185061921be84efbde0e522752c06d36ec406b7f8494419efb2: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:33:52.073988 containerd[1595]: time="2025-09-11T00:33:52.073484959Z" level=info msg="CreateContainer within sandbox \"a21f577fd64ead63f67d8db7e7b167805338389a4b6e05cce84993d5735128f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22a1cd96926ac185061921be84efbde0e522752c06d36ec406b7f8494419efb2\""
Sep 11 00:33:52.075231 containerd[1595]: time="2025-09-11T00:33:52.074138767Z" level=info msg="StartContainer for \"22a1cd96926ac185061921be84efbde0e522752c06d36ec406b7f8494419efb2\""
Sep 11 00:33:52.075231 containerd[1595]: time="2025-09-11T00:33:52.074985920Z" level=info msg="connecting to shim 22a1cd96926ac185061921be84efbde0e522752c06d36ec406b7f8494419efb2" address="unix:///run/containerd/s/d3222196c0a0e1348c9b5f7b08d434ffdbab2b4322b78387845766b2358310ff" protocol=ttrpc version=3
Sep 11 00:33:52.102506 systemd[1]: Started cri-containerd-22a1cd96926ac185061921be84efbde0e522752c06d36ec406b7f8494419efb2.scope - libcontainer container 22a1cd96926ac185061921be84efbde0e522752c06d36ec406b7f8494419efb2.
Sep 11 00:33:52.201617 containerd[1595]: time="2025-09-11T00:33:52.201586853Z" level=info msg="StartContainer for \"6e784a7caf9784f8fe717beda2515680a8556f2481efd9069be2ba0ae2e60242\" returns successfully"
Sep 11 00:33:52.243670 containerd[1595]: time="2025-09-11T00:33:52.243506584Z" level=info msg="StartContainer for \"22a1cd96926ac185061921be84efbde0e522752c06d36ec406b7f8494419efb2\" returns successfully"
Sep 11 00:33:52.513071 containerd[1595]: time="2025-09-11T00:33:52.513020526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sqkl,Uid:5c283416-98c3-4e6d-ae4d-ab8f1e80bed3,Namespace:calico-system,Attempt:0,}"
Sep 11 00:33:52.580834 systemd-networkd[1499]: cali41ccccac7dd: Gained IPv6LL
Sep 11 00:33:52.664193 kubelet[2740]: E0911 00:33:52.664158 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:33:52.664657 kubelet[2740]: E0911 00:33:52.664302 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:33:52.708487 systemd-networkd[1499]: cali869266f428c: Gained IPv6LL
Sep 11 00:33:52.985120 kubelet[2740]: I0911 00:33:52.985047 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56d8d46d86-xt8mj" podStartSLOduration=30.985029762 podStartE2EDuration="30.985029762s" podCreationTimestamp="2025-09-11 00:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:33:52.984946986 +0000 UTC m=+45.554654024" watchObservedRunningTime="2025-09-11 00:33:52.985029762 +0000 UTC m=+45.554736800"
Sep 11 00:33:53.000516 kubelet[2740]: I0911 00:33:53.000456 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vqsdg" podStartSLOduration=39.00043838 podStartE2EDuration="39.00043838s" podCreationTimestamp="2025-09-11 00:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:33:53.000351405 +0000 UTC m=+45.570058443" watchObservedRunningTime="2025-09-11 00:33:53.00043838 +0000 UTC m=+45.570145418"
Sep 11 00:33:53.134021 systemd-networkd[1499]: calia55607ca930: Link UP
Sep 11 00:33:53.134793 systemd-networkd[1499]: calia55607ca930: Gained carrier
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.019 [INFO][5068] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.032 [INFO][5068] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8sqkl-eth0 csi-node-driver- calico-system 5c283416-98c3-4e6d-ae4d-ab8f1e80bed3 702 0 2025-09-11 00:33:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8sqkl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia55607ca930 [] [] }} ContainerID="c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" Namespace="calico-system" Pod="csi-node-driver-8sqkl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8sqkl-"
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.032 [INFO][5068] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" Namespace="calico-system" Pod="csi-node-driver-8sqkl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8sqkl-eth0"
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.066 [INFO][5085] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" HandleID="k8s-pod-network.c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" Workload="localhost-k8s-csi--node--driver--8sqkl-eth0"
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.066 [INFO][5085] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" HandleID="k8s-pod-network.c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" Workload="localhost-k8s-csi--node--driver--8sqkl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b7220), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8sqkl", "timestamp":"2025-09-11 00:33:53.066728404 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.066 [INFO][5085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.066 [INFO][5085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.066 [INFO][5085] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.073 [INFO][5085] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" host="localhost"
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.104 [INFO][5085] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.111 [INFO][5085] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.112 [INFO][5085] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.114 [INFO][5085] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.114 [INFO][5085] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" host="localhost"
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.116 [INFO][5085] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.119 [INFO][5085] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" host="localhost"
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.125 [INFO][5085] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" host="localhost"
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.125 [INFO][5085] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" host="localhost"
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.125 [INFO][5085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 11 00:33:53.154374 containerd[1595]: 2025-09-11 00:33:53.125 [INFO][5085] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" HandleID="k8s-pod-network.c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" Workload="localhost-k8s-csi--node--driver--8sqkl-eth0"
Sep 11 00:33:53.155510 containerd[1595]: 2025-09-11 00:33:53.131 [INFO][5068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" Namespace="calico-system" Pod="csi-node-driver-8sqkl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8sqkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8sqkl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5c283416-98c3-4e6d-ae4d-ab8f1e80bed3", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8sqkl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia55607ca930", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 00:33:53.155510 containerd[1595]: 2025-09-11 00:33:53.132 [INFO][5068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" Namespace="calico-system" Pod="csi-node-driver-8sqkl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8sqkl-eth0"
Sep 11 00:33:53.155510 containerd[1595]: 2025-09-11 00:33:53.132 [INFO][5068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia55607ca930 ContainerID="c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" Namespace="calico-system" Pod="csi-node-driver-8sqkl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8sqkl-eth0"
Sep 11 00:33:53.155510 containerd[1595]: 2025-09-11 00:33:53.134 [INFO][5068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" Namespace="calico-system" Pod="csi-node-driver-8sqkl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8sqkl-eth0"
Sep 11 00:33:53.155510 containerd[1595]: 2025-09-11 00:33:53.135 [INFO][5068] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" Namespace="calico-system" Pod="csi-node-driver-8sqkl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8sqkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8sqkl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5c283416-98c3-4e6d-ae4d-ab8f1e80bed3", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c", Pod:"csi-node-driver-8sqkl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia55607ca930", MAC:"0a:e2:d1:bd:4d:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 00:33:53.155510 containerd[1595]: 2025-09-11 00:33:53.149 [INFO][5068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" Namespace="calico-system" Pod="csi-node-driver-8sqkl" WorkloadEndpoint="localhost-k8s-csi--node--driver--8sqkl-eth0"
Sep 11 00:33:53.221459 systemd-networkd[1499]: cali5fac1109ed6: Gained IPv6LL
Sep 11 00:33:53.361360 containerd[1595]: time="2025-09-11T00:33:53.361065634Z" level=info msg="connecting to shim c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c" address="unix:///run/containerd/s/5917540047d9d5e3616c2fb5678ca7e3e08310e04b0d54ee83ab70e71cf83c1f" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:33:53.408569 systemd[1]: Started cri-containerd-c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c.scope - libcontainer container c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c.
Sep 11 00:33:53.436568 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 11 00:33:53.465066 containerd[1595]: time="2025-09-11T00:33:53.465032976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sqkl,Uid:5c283416-98c3-4e6d-ae4d-ab8f1e80bed3,Namespace:calico-system,Attempt:0,} returns sandbox id \"c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c\""
Sep 11 00:33:53.589866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2140601352.mount: Deactivated successfully.
Sep 11 00:33:53.674141 kubelet[2740]: E0911 00:33:53.673622 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:33:54.146668 containerd[1595]: time="2025-09-11T00:33:54.146612299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:54.147556 containerd[1595]: time="2025-09-11T00:33:54.147528330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 11 00:33:54.148688 containerd[1595]: time="2025-09-11T00:33:54.148650318Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:54.150950 containerd[1595]: time="2025-09-11T00:33:54.150917247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:54.151705 containerd[1595]: time="2025-09-11T00:33:54.151667376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.175255997s"
Sep 11 00:33:54.151856 containerd[1595]: time="2025-09-11T00:33:54.151693786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 11 00:33:54.153804 containerd[1595]: time="2025-09-11T00:33:54.153741293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 11 00:33:54.155070 containerd[1595]: time="2025-09-11T00:33:54.155029102Z" level=info msg="CreateContainer within sandbox \"ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 11 00:33:54.163512 containerd[1595]: time="2025-09-11T00:33:54.163468719Z" level=info msg="Container 0160f8d419040f331e395c809436801617b6013a72d495ed18e25fa3c7fd7046: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:33:54.172776 containerd[1595]: time="2025-09-11T00:33:54.172726633Z" level=info msg="CreateContainer within sandbox \"ecb44e405e79bb06ee637af3df29504c86fde9f4aae89efb6eb4ce806858eee6\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0160f8d419040f331e395c809436801617b6013a72d495ed18e25fa3c7fd7046\""
Sep 11 00:33:54.173252 containerd[1595]: time="2025-09-11T00:33:54.173217546Z" level=info msg="StartContainer for \"0160f8d419040f331e395c809436801617b6013a72d495ed18e25fa3c7fd7046\""
Sep 11 00:33:54.174256 containerd[1595]: time="2025-09-11T00:33:54.174225529Z" level=info msg="connecting to shim 0160f8d419040f331e395c809436801617b6013a72d495ed18e25fa3c7fd7046" address="unix:///run/containerd/s/029d3823772cd907f8eefd58600b8b7171441d78f3ea13621797ac943839b6e9" protocol=ttrpc version=3
Sep 11 00:33:54.202465 systemd[1]: Started cri-containerd-0160f8d419040f331e395c809436801617b6013a72d495ed18e25fa3c7fd7046.scope - libcontainer container 0160f8d419040f331e395c809436801617b6013a72d495ed18e25fa3c7fd7046.
Sep 11 00:33:54.255126 containerd[1595]: time="2025-09-11T00:33:54.255087606Z" level=info msg="StartContainer for \"0160f8d419040f331e395c809436801617b6013a72d495ed18e25fa3c7fd7046\" returns successfully"
Sep 11 00:33:54.372464 systemd-networkd[1499]: calia55607ca930: Gained IPv6LL
Sep 11 00:33:54.589425 kubelet[2740]: I0911 00:33:54.589258 2740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:33:54.589779 kubelet[2740]: E0911 00:33:54.589629 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:33:54.677373 kubelet[2740]: E0911 00:33:54.677046 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:33:54.678648 kubelet[2740]: E0911 00:33:54.678623 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:33:54.689697 kubelet[2740]: I0911 00:33:54.689619 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-4x68b" podStartSLOduration=25.388064149 podStartE2EDuration="30.689598783s" podCreationTimestamp="2025-09-11 00:33:24 +0000 UTC" firstStartedPulling="2025-09-11 00:33:48.852031059 +0000 UTC m=+41.421738087" lastFinishedPulling="2025-09-11 00:33:54.153565683 +0000 UTC m=+46.723272721" observedRunningTime="2025-09-11 00:33:54.688805744 +0000 UTC m=+47.258512782" watchObservedRunningTime="2025-09-11 00:33:54.689598783 +0000 UTC m=+47.259305821"
Sep 11 00:33:55.605295 systemd-networkd[1499]: vxlan.calico: Link UP
Sep 11 00:33:55.606299 systemd-networkd[1499]: vxlan.calico: Gained carrier
Sep 11 00:33:55.680796 kubelet[2740]: I0911 00:33:55.680750 2740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:33:55.685670 kubelet[2740]: E0911 00:33:55.685638 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:33:57.259754 systemd[1]: Started sshd@9-10.0.0.147:22-10.0.0.1:47368.service - OpenSSH per-connection server daemon (10.0.0.1:47368).
Sep 11 00:33:57.327846 sshd[5388]: Accepted publickey for core from 10.0.0.1 port 47368 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U
Sep 11 00:33:57.330219 sshd-session[5388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:33:57.337860 systemd-logind[1583]: New session 10 of user core.
Sep 11 00:33:57.343468 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 11 00:33:57.573198 systemd-networkd[1499]: vxlan.calico: Gained IPv6LL
Sep 11 00:33:57.590117 sshd[5394]: Connection closed by 10.0.0.1 port 47368
Sep 11 00:33:57.590525 sshd-session[5388]: pam_unix(sshd:session): session closed for user core
Sep 11 00:33:57.599791 systemd[1]: sshd@9-10.0.0.147:22-10.0.0.1:47368.service: Deactivated successfully.
Sep 11 00:33:57.602453 systemd[1]: session-10.scope: Deactivated successfully.
Sep 11 00:33:57.603751 systemd-logind[1583]: Session 10 logged out. Waiting for processes to exit.
Sep 11 00:33:57.610564 systemd[1]: Started sshd@10-10.0.0.147:22-10.0.0.1:47378.service - OpenSSH per-connection server daemon (10.0.0.1:47378).
Sep 11 00:33:57.612978 systemd-logind[1583]: Removed session 10.
Sep 11 00:33:57.659430 sshd[5408]: Accepted publickey for core from 10.0.0.1 port 47378 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U
Sep 11 00:33:57.660203 sshd-session[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:33:57.668646 systemd-logind[1583]: New session 11 of user core.
Sep 11 00:33:57.678439 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 11 00:33:58.067136 containerd[1595]: time="2025-09-11T00:33:58.066909133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:58.067597 sshd[5411]: Connection closed by 10.0.0.1 port 47378
Sep 11 00:33:58.069509 containerd[1595]: time="2025-09-11T00:33:58.069450786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 11 00:33:58.069760 sshd-session[5408]: pam_unix(sshd:session): session closed for user core
Sep 11 00:33:58.069965 containerd[1595]: time="2025-09-11T00:33:58.069931489Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:58.073241 containerd[1595]: time="2025-09-11T00:33:58.073140856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:58.074675 containerd[1595]: time="2025-09-11T00:33:58.074169949Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.920389443s"
Sep 11 00:33:58.074675 containerd[1595]: time="2025-09-11T00:33:58.074217568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 11 00:33:58.077948 containerd[1595]: time="2025-09-11T00:33:58.077904723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 11 00:33:58.079930 systemd[1]: sshd@10-10.0.0.147:22-10.0.0.1:47378.service: Deactivated successfully.
Sep 11 00:33:58.082096 systemd[1]: session-11.scope: Deactivated successfully.
Sep 11 00:33:58.086527 systemd-logind[1583]: Session 11 logged out. Waiting for processes to exit.
Sep 11 00:33:58.089376 systemd[1]: Started sshd@11-10.0.0.147:22-10.0.0.1:47388.service - OpenSSH per-connection server daemon (10.0.0.1:47388).
Sep 11 00:33:58.091545 systemd-logind[1583]: Removed session 11.
Sep 11 00:33:58.106150 containerd[1595]: time="2025-09-11T00:33:58.105789602Z" level=info msg="CreateContainer within sandbox \"31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 11 00:33:58.118945 containerd[1595]: time="2025-09-11T00:33:58.117878726Z" level=info msg="Container 88bc1caad5e6cfe42cce20d5630367f15be6bd6521e4ef51ea2e385d10d1f88e: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:33:58.144386 sshd[5423]: Accepted publickey for core from 10.0.0.1 port 47388 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U
Sep 11 00:33:58.146435 sshd-session[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:33:58.149853 containerd[1595]: time="2025-09-11T00:33:58.149766954Z" level=info msg="CreateContainer within sandbox \"31255323490a8675bebd0fa3514dac16084b7f71a12836cf9e73c792c82e03f8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"88bc1caad5e6cfe42cce20d5630367f15be6bd6521e4ef51ea2e385d10d1f88e\""
Sep 11 00:33:58.151892 containerd[1595]: time="2025-09-11T00:33:58.151856277Z" level=info msg="StartContainer for \"88bc1caad5e6cfe42cce20d5630367f15be6bd6521e4ef51ea2e385d10d1f88e\""
Sep 11 00:33:58.153618 containerd[1595]: time="2025-09-11T00:33:58.153289920Z" level=info msg="connecting to shim 88bc1caad5e6cfe42cce20d5630367f15be6bd6521e4ef51ea2e385d10d1f88e" address="unix:///run/containerd/s/b85fb44253fb2b78f3c8fefaa6d4f8eadd7efb441e6f68b99f8789a2c01d80ec" protocol=ttrpc version=3
Sep 11 00:33:58.155396 systemd-logind[1583]: New session 12 of user core.
Sep 11 00:33:58.169582 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 11 00:33:58.199868 systemd[1]: Started cri-containerd-88bc1caad5e6cfe42cce20d5630367f15be6bd6521e4ef51ea2e385d10d1f88e.scope - libcontainer container 88bc1caad5e6cfe42cce20d5630367f15be6bd6521e4ef51ea2e385d10d1f88e.
Sep 11 00:33:58.440341 containerd[1595]: time="2025-09-11T00:33:58.440206132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd854075bea915909ec80035715de44d8b136105066b1750cf0330f1d706b179\" id:\"d28997ddee37a1cd6618ed5345162cf3b076764734798e2116db3dec9dba46f3\" pid:5453 exited_at:{seconds:1757550838 nanos:438172263}"
Sep 11 00:33:58.451043 sshd[5439]: Connection closed by 10.0.0.1 port 47388
Sep 11 00:33:58.452662 sshd-session[5423]: pam_unix(sshd:session): session closed for user core
Sep 11 00:33:58.458340 systemd[1]: sshd@11-10.0.0.147:22-10.0.0.1:47388.service: Deactivated successfully.
Sep 11 00:33:58.461268 systemd[1]: session-12.scope: Deactivated successfully.
Sep 11 00:33:58.462581 systemd-logind[1583]: Session 12 logged out. Waiting for processes to exit.
Sep 11 00:33:58.463757 systemd-logind[1583]: Removed session 12.
Sep 11 00:33:58.530459 containerd[1595]: time="2025-09-11T00:33:58.530376156Z" level=info msg="StartContainer for \"88bc1caad5e6cfe42cce20d5630367f15be6bd6521e4ef51ea2e385d10d1f88e\" returns successfully"
Sep 11 00:33:58.715741 kubelet[2740]: I0911 00:33:58.715581 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d66f559c5-sxjj5" podStartSLOduration=26.451945898 podStartE2EDuration="33.715561762s" podCreationTimestamp="2025-09-11 00:33:25 +0000 UTC" firstStartedPulling="2025-09-11 00:33:50.813218609 +0000 UTC m=+43.382925647" lastFinishedPulling="2025-09-11 00:33:58.076834473 +0000 UTC m=+50.646541511" observedRunningTime="2025-09-11 00:33:58.715135953 +0000 UTC m=+51.284843001" watchObservedRunningTime="2025-09-11 00:33:58.715561762 +0000 UTC m=+51.285268801"
Sep 11 00:33:58.748669 containerd[1595]: time="2025-09-11T00:33:58.748620699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88bc1caad5e6cfe42cce20d5630367f15be6bd6521e4ef51ea2e385d10d1f88e\" id:\"ac40a234d92994a37b560ca521de96e1b36d37381b695c3acfcedb4dfe830601\" pid:5522 exited_at:{seconds:1757550838 nanos:748394204}"
Sep 11 00:33:59.608181 containerd[1595]: time="2025-09-11T00:33:59.608102885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:59.608739 containerd[1595]: time="2025-09-11T00:33:59.608712601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 11 00:33:59.609926 containerd[1595]: time="2025-09-11T00:33:59.609892767Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:59.611657 containerd[1595]: time="2025-09-11T00:33:59.611624388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:59.612142 containerd[1595]: time="2025-09-11T00:33:59.612101334Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.534154492s"
Sep 11 00:33:59.612173 containerd[1595]: time="2025-09-11T00:33:59.612143453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 11 00:33:59.614177 containerd[1595]: time="2025-09-11T00:33:59.614141475Z" level=info msg="CreateContainer within sandbox \"c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 11 00:33:59.627538 containerd[1595]: time="2025-09-11T00:33:59.627506082Z" level=info msg="Container 5bdbd5f1f1baa37d47fe03f8b5e448d9cae37bd60107056b1ba3eb3a1eab150f: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:33:59.643996 containerd[1595]: time="2025-09-11T00:33:59.643954421Z" level=info msg="CreateContainer within sandbox \"c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5bdbd5f1f1baa37d47fe03f8b5e448d9cae37bd60107056b1ba3eb3a1eab150f\""
Sep 11 00:33:59.644557 containerd[1595]: time="2025-09-11T00:33:59.644520984Z" level=info msg="StartContainer for \"5bdbd5f1f1baa37d47fe03f8b5e448d9cae37bd60107056b1ba3eb3a1eab150f\""
Sep 11 00:33:59.646044 containerd[1595]: time="2025-09-11T00:33:59.646015701Z" level=info msg="connecting to shim 5bdbd5f1f1baa37d47fe03f8b5e448d9cae37bd60107056b1ba3eb3a1eab150f" address="unix:///run/containerd/s/5917540047d9d5e3616c2fb5678ca7e3e08310e04b0d54ee83ab70e71cf83c1f" protocol=ttrpc version=3
Sep 11 00:33:59.678439 systemd[1]: Started cri-containerd-5bdbd5f1f1baa37d47fe03f8b5e448d9cae37bd60107056b1ba3eb3a1eab150f.scope - libcontainer container 5bdbd5f1f1baa37d47fe03f8b5e448d9cae37bd60107056b1ba3eb3a1eab150f.
Sep 11 00:33:59.827652 containerd[1595]: time="2025-09-11T00:33:59.827604674Z" level=info msg="StartContainer for \"5bdbd5f1f1baa37d47fe03f8b5e448d9cae37bd60107056b1ba3eb3a1eab150f\" returns successfully"
Sep 11 00:33:59.828710 containerd[1595]: time="2025-09-11T00:33:59.828679522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 11 00:34:01.842884 containerd[1595]: time="2025-09-11T00:34:01.842843162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:34:01.844137 containerd[1595]: time="2025-09-11T00:34:01.844100373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 11 00:34:01.845683 containerd[1595]: time="2025-09-11T00:34:01.845626298Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:34:01.847612 containerd[1595]: time="2025-09-11T00:34:01.847568886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:34:01.848231 containerd[1595]: time="2025-09-11T00:34:01.848192145Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.01948396s"
Sep 11 00:34:01.848231 containerd[1595]: time="2025-09-11T00:34:01.848225478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 11 00:34:01.851387 containerd[1595]: time="2025-09-11T00:34:01.850914557Z" level=info msg="CreateContainer within sandbox \"c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 11 00:34:01.860036 containerd[1595]: time="2025-09-11T00:34:01.859993143Z" level=info msg="Container 6de7d9063e87b2ac9468ddba96abeb8c793f6e3054501d4e6d90be5382b30652: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:34:01.878841 containerd[1595]: time="2025-09-11T00:34:01.878797870Z" level=info msg="CreateContainer within sandbox \"c491b61c46bbea33ed44609a3aff271e94d63f3a8441414d6b8e2a5c78b51a0c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6de7d9063e87b2ac9468ddba96abeb8c793f6e3054501d4e6d90be5382b30652\""
Sep 11 00:34:01.879358 containerd[1595]: time="2025-09-11T00:34:01.879266119Z" level=info msg="StartContainer for \"6de7d9063e87b2ac9468ddba96abeb8c793f6e3054501d4e6d90be5382b30652\""
Sep 11 00:34:01.880773 containerd[1595]: time="2025-09-11T00:34:01.880739505Z" level=info msg="connecting to shim 6de7d9063e87b2ac9468ddba96abeb8c793f6e3054501d4e6d90be5382b30652" address="unix:///run/containerd/s/5917540047d9d5e3616c2fb5678ca7e3e08310e04b0d54ee83ab70e71cf83c1f" protocol=ttrpc version=3
Sep 11 00:34:01.915446 systemd[1]: Started cri-containerd-6de7d9063e87b2ac9468ddba96abeb8c793f6e3054501d4e6d90be5382b30652.scope - libcontainer container 6de7d9063e87b2ac9468ddba96abeb8c793f6e3054501d4e6d90be5382b30652.
Sep 11 00:34:01.960447 containerd[1595]: time="2025-09-11T00:34:01.960378323Z" level=info msg="StartContainer for \"6de7d9063e87b2ac9468ddba96abeb8c793f6e3054501d4e6d90be5382b30652\" returns successfully"
Sep 11 00:34:02.591192 kubelet[2740]: I0911 00:34:02.591157 2740 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 11 00:34:02.591192 kubelet[2740]: I0911 00:34:02.591190 2740 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 11 00:34:02.718732 kubelet[2740]: I0911 00:34:02.718393 2740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:34:02.840275 containerd[1595]: time="2025-09-11T00:34:02.840236755Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0160f8d419040f331e395c809436801617b6013a72d495ed18e25fa3c7fd7046\" id:\"106c3a6f976b1d674d04e3764826a431af0e1080a7b9c3f72c6719267731b735\" pid:5626 exited_at:{seconds:1757550842 nanos:838782124}"
Sep 11 00:34:02.851560 kubelet[2740]: I0911 00:34:02.851348 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8sqkl" podStartSLOduration=29.470081555 podStartE2EDuration="37.851331525s" podCreationTimestamp="2025-09-11 00:33:25 +0000 UTC" firstStartedPulling="2025-09-11 00:33:53.468535047 +0000 UTC m=+46.038242085" lastFinishedPulling="2025-09-11 00:34:01.849785017 +0000 UTC m=+54.419492055" observedRunningTime="2025-09-11 00:34:02.728131245 +0000 UTC m=+55.297838283" watchObservedRunningTime="2025-09-11 00:34:02.851331525 +0000 UTC m=+55.421038563"
Sep 11 00:34:02.922036 containerd[1595]:
time="2025-09-11T00:34:02.921993850Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0160f8d419040f331e395c809436801617b6013a72d495ed18e25fa3c7fd7046\" id:\"5734106bb7a208dfae3298a4204a9373af4ecb8336c66f5642837cf7dd8c0960\" pid:5649 exited_at:{seconds:1757550842 nanos:921629626}" Sep 11 00:34:03.465647 systemd[1]: Started sshd@12-10.0.0.147:22-10.0.0.1:52044.service - OpenSSH per-connection server daemon (10.0.0.1:52044). Sep 11 00:34:03.562639 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 52044 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:34:03.564263 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:34:03.568703 systemd-logind[1583]: New session 13 of user core. Sep 11 00:34:03.578445 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 11 00:34:03.704352 sshd[5666]: Connection closed by 10.0.0.1 port 52044 Sep 11 00:34:03.704654 sshd-session[5664]: pam_unix(sshd:session): session closed for user core Sep 11 00:34:03.708938 systemd[1]: sshd@12-10.0.0.147:22-10.0.0.1:52044.service: Deactivated successfully. Sep 11 00:34:03.711049 systemd[1]: session-13.scope: Deactivated successfully. Sep 11 00:34:03.711852 systemd-logind[1583]: Session 13 logged out. Waiting for processes to exit. Sep 11 00:34:03.713048 systemd-logind[1583]: Removed session 13. Sep 11 00:34:08.720192 systemd[1]: Started sshd@13-10.0.0.147:22-10.0.0.1:52056.service - OpenSSH per-connection server daemon (10.0.0.1:52056). Sep 11 00:34:08.776023 sshd[5693]: Accepted publickey for core from 10.0.0.1 port 52056 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:34:08.777706 sshd-session[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:34:08.781755 systemd-logind[1583]: New session 14 of user core. Sep 11 00:34:08.791421 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 11 00:34:08.927484 sshd[5695]: Connection closed by 10.0.0.1 port 52056 Sep 11 00:34:08.927799 sshd-session[5693]: pam_unix(sshd:session): session closed for user core Sep 11 00:34:08.933137 systemd[1]: sshd@13-10.0.0.147:22-10.0.0.1:52056.service: Deactivated successfully. Sep 11 00:34:08.935234 systemd[1]: session-14.scope: Deactivated successfully. Sep 11 00:34:08.936175 systemd-logind[1583]: Session 14 logged out. Waiting for processes to exit. Sep 11 00:34:08.937455 systemd-logind[1583]: Removed session 14. Sep 11 00:34:13.946962 systemd[1]: Started sshd@14-10.0.0.147:22-10.0.0.1:36136.service - OpenSSH per-connection server daemon (10.0.0.1:36136). Sep 11 00:34:14.007550 sshd[5710]: Accepted publickey for core from 10.0.0.1 port 36136 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:34:14.009121 sshd-session[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:34:14.013329 systemd-logind[1583]: New session 15 of user core. Sep 11 00:34:14.020482 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 11 00:34:14.136429 sshd[5712]: Connection closed by 10.0.0.1 port 36136 Sep 11 00:34:14.136781 sshd-session[5710]: pam_unix(sshd:session): session closed for user core Sep 11 00:34:14.141436 systemd[1]: sshd@14-10.0.0.147:22-10.0.0.1:36136.service: Deactivated successfully. Sep 11 00:34:14.143548 systemd[1]: session-15.scope: Deactivated successfully. Sep 11 00:34:14.144408 systemd-logind[1583]: Session 15 logged out. Waiting for processes to exit. Sep 11 00:34:14.145587 systemd-logind[1583]: Removed session 15. 
Sep 11 00:34:14.597200 containerd[1595]: time="2025-09-11T00:34:14.597118005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88bc1caad5e6cfe42cce20d5630367f15be6bd6521e4ef51ea2e385d10d1f88e\" id:\"d77a0ac88cad278c3753f2fd78fa9dd56385f1bb494d70b16cb274851efea760\" pid:5737 exited_at:{seconds:1757550854 nanos:596796983}" Sep 11 00:34:18.645151 containerd[1595]: time="2025-09-11T00:34:18.645087211Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88bc1caad5e6cfe42cce20d5630367f15be6bd6521e4ef51ea2e385d10d1f88e\" id:\"616ded947de43eaba9e70962bbb5919636ba4c07b900a02f57023d16f329222d\" pid:5769 exited_at:{seconds:1757550858 nanos:644031236}" Sep 11 00:34:19.151369 systemd[1]: Started sshd@15-10.0.0.147:22-10.0.0.1:36140.service - OpenSSH per-connection server daemon (10.0.0.1:36140). Sep 11 00:34:19.222167 sshd[5781]: Accepted publickey for core from 10.0.0.1 port 36140 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:34:19.224178 sshd-session[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:34:19.230287 systemd-logind[1583]: New session 16 of user core. Sep 11 00:34:19.242446 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 11 00:34:19.470959 sshd[5783]: Connection closed by 10.0.0.1 port 36140 Sep 11 00:34:19.471765 sshd-session[5781]: pam_unix(sshd:session): session closed for user core Sep 11 00:34:19.481748 systemd[1]: sshd@15-10.0.0.147:22-10.0.0.1:36140.service: Deactivated successfully. Sep 11 00:34:19.483551 systemd[1]: session-16.scope: Deactivated successfully. Sep 11 00:34:19.484500 systemd-logind[1583]: Session 16 logged out. Waiting for processes to exit. Sep 11 00:34:19.487994 systemd[1]: Started sshd@16-10.0.0.147:22-10.0.0.1:36144.service - OpenSSH per-connection server daemon (10.0.0.1:36144). Sep 11 00:34:19.490026 systemd-logind[1583]: Removed session 16. 
Sep 11 00:34:19.514436 kubelet[2740]: E0911 00:34:19.514396 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:34:19.533594 sshd[5799]: Accepted publickey for core from 10.0.0.1 port 36144 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:34:19.535056 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:34:19.539985 systemd-logind[1583]: New session 17 of user core. Sep 11 00:34:19.553580 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 11 00:34:19.778140 sshd[5801]: Connection closed by 10.0.0.1 port 36144 Sep 11 00:34:19.779201 sshd-session[5799]: pam_unix(sshd:session): session closed for user core Sep 11 00:34:19.792117 systemd[1]: sshd@16-10.0.0.147:22-10.0.0.1:36144.service: Deactivated successfully. Sep 11 00:34:19.794132 systemd[1]: session-17.scope: Deactivated successfully. Sep 11 00:34:19.795157 systemd-logind[1583]: Session 17 logged out. Waiting for processes to exit. Sep 11 00:34:19.798163 systemd[1]: Started sshd@17-10.0.0.147:22-10.0.0.1:36160.service - OpenSSH per-connection server daemon (10.0.0.1:36160). Sep 11 00:34:19.798898 systemd-logind[1583]: Removed session 17. Sep 11 00:34:19.856302 sshd[5813]: Accepted publickey for core from 10.0.0.1 port 36160 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:34:19.857680 sshd-session[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:34:19.862364 systemd-logind[1583]: New session 18 of user core. Sep 11 00:34:19.872455 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 11 00:34:20.512746 kubelet[2740]: E0911 00:34:20.512393 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:34:21.663451 sshd[5815]: Connection closed by 10.0.0.1 port 36160 Sep 11 00:34:21.664695 sshd-session[5813]: pam_unix(sshd:session): session closed for user core Sep 11 00:34:21.676132 systemd[1]: sshd@17-10.0.0.147:22-10.0.0.1:36160.service: Deactivated successfully. Sep 11 00:34:21.678507 systemd[1]: session-18.scope: Deactivated successfully. Sep 11 00:34:21.678764 systemd[1]: session-18.scope: Consumed 578ms CPU time, 79.1M memory peak. Sep 11 00:34:21.679511 systemd-logind[1583]: Session 18 logged out. Waiting for processes to exit. Sep 11 00:34:21.682708 systemd[1]: Started sshd@18-10.0.0.147:22-10.0.0.1:38040.service - OpenSSH per-connection server daemon (10.0.0.1:38040). Sep 11 00:34:21.684089 systemd-logind[1583]: Removed session 18. Sep 11 00:34:21.735094 sshd[5834]: Accepted publickey for core from 10.0.0.1 port 38040 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:34:21.736845 sshd-session[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:34:21.741491 systemd-logind[1583]: New session 19 of user core. Sep 11 00:34:21.759452 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 11 00:34:22.333588 sshd[5836]: Connection closed by 10.0.0.1 port 38040 Sep 11 00:34:22.335391 sshd-session[5834]: pam_unix(sshd:session): session closed for user core Sep 11 00:34:22.344381 systemd[1]: sshd@18-10.0.0.147:22-10.0.0.1:38040.service: Deactivated successfully. Sep 11 00:34:22.346453 systemd[1]: session-19.scope: Deactivated successfully. Sep 11 00:34:22.347237 systemd-logind[1583]: Session 19 logged out. Waiting for processes to exit. 
Sep 11 00:34:22.350539 systemd[1]: Started sshd@19-10.0.0.147:22-10.0.0.1:38050.service - OpenSSH per-connection server daemon (10.0.0.1:38050). Sep 11 00:34:22.352016 systemd-logind[1583]: Removed session 19. Sep 11 00:34:22.395324 sshd[5848]: Accepted publickey for core from 10.0.0.1 port 38050 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:34:22.396669 sshd-session[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:34:22.401244 systemd-logind[1583]: New session 20 of user core. Sep 11 00:34:22.408439 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 11 00:34:22.545682 sshd[5850]: Connection closed by 10.0.0.1 port 38050 Sep 11 00:34:22.548518 sshd-session[5848]: pam_unix(sshd:session): session closed for user core Sep 11 00:34:22.552758 systemd-logind[1583]: Session 20 logged out. Waiting for processes to exit. Sep 11 00:34:22.553834 systemd[1]: sshd@19-10.0.0.147:22-10.0.0.1:38050.service: Deactivated successfully. Sep 11 00:34:22.556830 systemd[1]: session-20.scope: Deactivated successfully. Sep 11 00:34:22.561641 systemd-logind[1583]: Removed session 20. Sep 11 00:34:25.513111 kubelet[2740]: E0911 00:34:25.512776 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:34:27.561798 systemd[1]: Started sshd@20-10.0.0.147:22-10.0.0.1:38060.service - OpenSSH per-connection server daemon (10.0.0.1:38060). Sep 11 00:34:27.609722 sshd[5867]: Accepted publickey for core from 10.0.0.1 port 38060 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:34:27.611502 sshd-session[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:34:27.615767 systemd-logind[1583]: New session 21 of user core. Sep 11 00:34:27.622446 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 11 00:34:27.740027 sshd[5869]: Connection closed by 10.0.0.1 port 38060 Sep 11 00:34:27.741495 sshd-session[5867]: pam_unix(sshd:session): session closed for user core Sep 11 00:34:27.745305 systemd[1]: sshd@20-10.0.0.147:22-10.0.0.1:38060.service: Deactivated successfully. Sep 11 00:34:27.748070 systemd[1]: session-21.scope: Deactivated successfully. Sep 11 00:34:27.749801 systemd-logind[1583]: Session 21 logged out. Waiting for processes to exit. Sep 11 00:34:27.753075 systemd-logind[1583]: Removed session 21. Sep 11 00:34:28.260284 containerd[1595]: time="2025-09-11T00:34:28.260219850Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd854075bea915909ec80035715de44d8b136105066b1750cf0330f1d706b179\" id:\"5e7fb90f6b954d5744efb481d7362c1a2973afb82be0dc886147ea42ee934263\" pid:5893 exited_at:{seconds:1757550868 nanos:233368747}" Sep 11 00:34:29.246404 containerd[1595]: time="2025-09-11T00:34:29.246361458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0160f8d419040f331e395c809436801617b6013a72d495ed18e25fa3c7fd7046\" id:\"6146eda2d7d28be664aab6e7ab1e99bc277cac1ea93ee4452e7e87b1e16d56e8\" pid:5918 exited_at:{seconds:1757550869 nanos:246019812}" Sep 11 00:34:32.749587 systemd[1]: Started sshd@21-10.0.0.147:22-10.0.0.1:53688.service - OpenSSH per-connection server daemon (10.0.0.1:53688). Sep 11 00:34:32.800709 sshd[5949]: Accepted publickey for core from 10.0.0.1 port 53688 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:34:32.802410 sshd-session[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:34:32.807193 systemd-logind[1583]: New session 22 of user core. Sep 11 00:34:32.821457 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 11 00:34:32.823058 containerd[1595]: time="2025-09-11T00:34:32.823016957Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0160f8d419040f331e395c809436801617b6013a72d495ed18e25fa3c7fd7046\" id:\"b650f2819c245831b73b1e18ad026c7da9008e4ddb71ab6e24d16c104500ce97\" pid:5942 exited_at:{seconds:1757550872 nanos:822645465}" Sep 11 00:34:32.931987 sshd[5956]: Connection closed by 10.0.0.1 port 53688 Sep 11 00:34:32.932307 sshd-session[5949]: pam_unix(sshd:session): session closed for user core Sep 11 00:34:32.936782 systemd[1]: sshd@21-10.0.0.147:22-10.0.0.1:53688.service: Deactivated successfully. Sep 11 00:34:32.938873 systemd[1]: session-22.scope: Deactivated successfully. Sep 11 00:34:32.939760 systemd-logind[1583]: Session 22 logged out. Waiting for processes to exit. Sep 11 00:34:32.940966 systemd-logind[1583]: Removed session 22. Sep 11 00:34:37.943142 systemd[1]: Started sshd@22-10.0.0.147:22-10.0.0.1:53698.service - OpenSSH per-connection server daemon (10.0.0.1:53698). Sep 11 00:34:37.990303 sshd[5979]: Accepted publickey for core from 10.0.0.1 port 53698 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:34:37.991756 sshd-session[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:34:37.995974 systemd-logind[1583]: New session 23 of user core. Sep 11 00:34:38.005469 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 11 00:34:38.114990 sshd[5981]: Connection closed by 10.0.0.1 port 53698 Sep 11 00:34:38.115348 sshd-session[5979]: pam_unix(sshd:session): session closed for user core Sep 11 00:34:38.119111 systemd[1]: sshd@22-10.0.0.147:22-10.0.0.1:53698.service: Deactivated successfully. Sep 11 00:34:38.121240 systemd[1]: session-23.scope: Deactivated successfully. Sep 11 00:34:38.124054 systemd-logind[1583]: Session 23 logged out. Waiting for processes to exit. Sep 11 00:34:38.125122 systemd-logind[1583]: Removed session 23. 
Sep 11 00:34:42.514343 kubelet[2740]: E0911 00:34:42.513656 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:34:43.130227 systemd[1]: Started sshd@23-10.0.0.147:22-10.0.0.1:48550.service - OpenSSH per-connection server daemon (10.0.0.1:48550). Sep 11 00:34:43.193678 sshd[5995]: Accepted publickey for core from 10.0.0.1 port 48550 ssh2: RSA SHA256:wcLNcLfUgqd1DVBi2LBWyU/YmT9oxX+zDIoKpfJUZ0U Sep 11 00:34:43.195550 sshd-session[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:34:43.200557 systemd-logind[1583]: New session 24 of user core. Sep 11 00:34:43.208488 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 11 00:34:43.323601 sshd[5997]: Connection closed by 10.0.0.1 port 48550 Sep 11 00:34:43.323941 sshd-session[5995]: pam_unix(sshd:session): session closed for user core Sep 11 00:34:43.327713 systemd[1]: sshd@23-10.0.0.147:22-10.0.0.1:48550.service: Deactivated successfully. Sep 11 00:34:43.330595 systemd[1]: session-24.scope: Deactivated successfully. Sep 11 00:34:43.333229 systemd-logind[1583]: Session 24 logged out. Waiting for processes to exit. Sep 11 00:34:43.335216 systemd-logind[1583]: Removed session 24.