Sep 10 05:18:20.816734 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 03:32:41 -00 2025
Sep 10 05:18:20.816779 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cb34a525c000ff57e16870cd9f0af09c033a700c5f8ee35d58f46d8926fcf6e5
Sep 10 05:18:20.816791 kernel: BIOS-provided physical RAM map:
Sep 10 05:18:20.816798 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 10 05:18:20.816804 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 10 05:18:20.816810 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 10 05:18:20.816818 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 10 05:18:20.816824 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 10 05:18:20.816831 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 10 05:18:20.816840 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 10 05:18:20.816847 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 10 05:18:20.816853 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 10 05:18:20.816859 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 10 05:18:20.816866 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 10 05:18:20.816874 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 10 05:18:20.816883 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 10 05:18:20.816890 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 10 05:18:20.816897 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 10 05:18:20.816904 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 10 05:18:20.816911 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 10 05:18:20.816927 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 10 05:18:20.816943 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 10 05:18:20.816960 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 10 05:18:20.816975 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 10 05:18:20.816982 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 10 05:18:20.816997 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 10 05:18:20.817004 kernel: NX (Execute Disable) protection: active
Sep 10 05:18:20.817011 kernel: APIC: Static calls initialized
Sep 10 05:18:20.817018 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 10 05:18:20.817025 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 10 05:18:20.817041 kernel: extended physical RAM map:
Sep 10 05:18:20.817049 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 10 05:18:20.817064 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 10 05:18:20.817087 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 10 05:18:20.817095 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 10 05:18:20.817111 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 10 05:18:20.817121 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 10 05:18:20.817128 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 10 05:18:20.817135 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 10 05:18:20.817142 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 10 05:18:20.817152 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 10 05:18:20.817160 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 10 05:18:20.817169 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 10 05:18:20.817176 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 10 05:18:20.817184 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 10 05:18:20.817191 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 10 05:18:20.817198 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 10 05:18:20.817205 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 10 05:18:20.817213 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 10 05:18:20.817220 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 10 05:18:20.817227 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 10 05:18:20.817234 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 10 05:18:20.817244 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 10 05:18:20.817251 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 10 05:18:20.817258 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 10 05:18:20.817265 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 10 05:18:20.817273 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 10 05:18:20.817280 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 10 05:18:20.817287 kernel: efi: EFI v2.7 by EDK II
Sep 10 05:18:20.817295 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 10 05:18:20.817302 kernel: random: crng init done
Sep 10 05:18:20.817310 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 10 05:18:20.817317 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 10 05:18:20.817327 kernel: secureboot: Secure boot disabled
Sep 10 05:18:20.817334 kernel: SMBIOS 2.8 present.
Sep 10 05:18:20.817341 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 10 05:18:20.817348 kernel: DMI: Memory slots populated: 1/1 Sep 10 05:18:20.817355 kernel: Hypervisor detected: KVM Sep 10 05:18:20.817362 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 10 05:18:20.817370 kernel: kvm-clock: using sched offset of 3639838761 cycles Sep 10 05:18:20.817377 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 10 05:18:20.817385 kernel: tsc: Detected 2794.748 MHz processor Sep 10 05:18:20.817393 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 10 05:18:20.817400 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 10 05:18:20.817409 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Sep 10 05:18:20.817417 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 10 05:18:20.817424 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 10 05:18:20.817432 kernel: Using GB pages for direct mapping Sep 10 05:18:20.817440 kernel: ACPI: Early table checksum verification disabled Sep 10 05:18:20.817447 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 10 05:18:20.817455 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 10 05:18:20.817463 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 05:18:20.817470 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 05:18:20.817480 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 10 05:18:20.817487 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 05:18:20.817495 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 05:18:20.817502 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 05:18:20.817510 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 05:18:20.817517 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 10 05:18:20.817524 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 10 05:18:20.817532 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 10 05:18:20.817539 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 10 05:18:20.817549 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 10 05:18:20.817556 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 10 05:18:20.817564 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 10 05:18:20.817571 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 10 05:18:20.817579 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 10 05:18:20.817586 kernel: No NUMA configuration found Sep 10 05:18:20.817593 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Sep 10 05:18:20.817601 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Sep 10 05:18:20.817608 kernel: Zone ranges: Sep 10 05:18:20.817618 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 10 05:18:20.817625 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Sep 10 05:18:20.817633 kernel: Normal empty Sep 10 05:18:20.817640 kernel: Device empty Sep 10 05:18:20.817647 kernel: Movable zone start for each node Sep 10 05:18:20.817654 
kernel: Early memory node ranges Sep 10 05:18:20.817662 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 10 05:18:20.817669 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 10 05:18:20.817676 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 10 05:18:20.817686 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Sep 10 05:18:20.817693 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Sep 10 05:18:20.817700 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Sep 10 05:18:20.817708 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Sep 10 05:18:20.817715 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Sep 10 05:18:20.817722 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Sep 10 05:18:20.817730 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 10 05:18:20.817743 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 10 05:18:20.817760 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 10 05:18:20.817781 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 10 05:18:20.817788 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Sep 10 05:18:20.817796 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Sep 10 05:18:20.817807 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 10 05:18:20.817815 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 10 05:18:20.817823 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Sep 10 05:18:20.817831 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 10 05:18:20.817838 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 10 05:18:20.817848 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 10 05:18:20.817856 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 10 05:18:20.817863 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 10 05:18:20.817871 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 10 05:18:20.817879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 10 05:18:20.817886 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 10 05:18:20.817894 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 10 05:18:20.817901 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 10 05:18:20.817909 kernel: TSC deadline timer available Sep 10 05:18:20.817916 kernel: CPU topo: Max. logical packages: 1 Sep 10 05:18:20.817926 kernel: CPU topo: Max. logical dies: 1 Sep 10 05:18:20.817934 kernel: CPU topo: Max. dies per package: 1 Sep 10 05:18:20.817941 kernel: CPU topo: Max. threads per core: 1 Sep 10 05:18:20.817949 kernel: CPU topo: Num. cores per package: 4 Sep 10 05:18:20.817957 kernel: CPU topo: Num. 
threads per package: 4 Sep 10 05:18:20.817964 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Sep 10 05:18:20.817972 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 10 05:18:20.817980 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 10 05:18:20.817987 kernel: kvm-guest: setup PV sched yield Sep 10 05:18:20.817997 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 10 05:18:20.818005 kernel: Booting paravirtualized kernel on KVM Sep 10 05:18:20.818012 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 10 05:18:20.818020 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 10 05:18:20.818028 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Sep 10 05:18:20.818036 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Sep 10 05:18:20.818044 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 10 05:18:20.818051 kernel: kvm-guest: PV spinlocks enabled Sep 10 05:18:20.818063 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 10 05:18:20.818086 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cb34a525c000ff57e16870cd9f0af09c033a700c5f8ee35d58f46d8926fcf6e5 Sep 10 05:18:20.818107 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 10 05:18:20.818125 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 10 05:18:20.818133 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 10 05:18:20.818141 kernel: Fallback order for Node 0: 0 Sep 10 05:18:20.818148 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Sep 10 05:18:20.818156 kernel: Policy zone: DMA32 Sep 10 05:18:20.818164 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 10 05:18:20.818174 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 10 05:18:20.818182 kernel: ftrace: allocating 40102 entries in 157 pages Sep 10 05:18:20.818189 kernel: ftrace: allocated 157 pages with 5 groups Sep 10 05:18:20.818197 kernel: Dynamic Preempt: voluntary Sep 10 05:18:20.818204 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 10 05:18:20.818213 kernel: rcu: RCU event tracing is enabled. Sep 10 05:18:20.818220 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 10 05:18:20.818228 kernel: Trampoline variant of Tasks RCU enabled. Sep 10 05:18:20.818236 kernel: Rude variant of Tasks RCU enabled. Sep 10 05:18:20.818244 kernel: Tracing variant of Tasks RCU enabled. Sep 10 05:18:20.818254 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 10 05:18:20.818262 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 10 05:18:20.818270 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 10 05:18:20.818278 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 10 05:18:20.818286 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Sep 10 05:18:20.818294 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 10 05:18:20.818301 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 10 05:18:20.818309 kernel: Console: colour dummy device 80x25 Sep 10 05:18:20.818319 kernel: printk: legacy console [ttyS0] enabled Sep 10 05:18:20.818327 kernel: ACPI: Core revision 20240827 Sep 10 05:18:20.818335 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 10 05:18:20.818342 kernel: APIC: Switch to symmetric I/O mode setup Sep 10 05:18:20.818350 kernel: x2apic enabled Sep 10 05:18:20.818358 kernel: APIC: Switched APIC routing to: physical x2apic Sep 10 05:18:20.818365 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 10 05:18:20.818383 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 10 05:18:20.818399 kernel: kvm-guest: setup PV IPIs Sep 10 05:18:20.818407 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 10 05:18:20.818418 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 10 05:18:20.818426 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Sep 10 05:18:20.818434 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 10 05:18:20.818441 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 10 05:18:20.818449 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 10 05:18:20.818457 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 10 05:18:20.818465 kernel: Spectre V2 : Mitigation: Retpolines Sep 10 05:18:20.818473 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 10 05:18:20.818483 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 10 05:18:20.818491 kernel: active return thunk: retbleed_return_thunk Sep 10 05:18:20.818498 kernel: RETBleed: Mitigation: untrained return thunk Sep 10 05:18:20.818506 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 10 05:18:20.818514 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 10 05:18:20.818522 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 10 05:18:20.818530 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 10 05:18:20.818538 kernel: active return thunk: srso_return_thunk Sep 10 05:18:20.818546 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 10 05:18:20.818556 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 10 05:18:20.818564 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 10 05:18:20.818572 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 10 05:18:20.818580 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 10 05:18:20.818588 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 10 05:18:20.818595 kernel: Freeing SMP alternatives memory: 32K Sep 10 05:18:20.818603 kernel: pid_max: default: 32768 minimum: 301 Sep 10 05:18:20.818611 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 10 05:18:20.818619 kernel: landlock: Up and running. 
Sep 10 05:18:20.818628 kernel: SELinux: Initializing. Sep 10 05:18:20.818636 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 10 05:18:20.818643 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 10 05:18:20.818651 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 10 05:18:20.818659 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 10 05:18:20.818667 kernel: ... version: 0 Sep 10 05:18:20.818674 kernel: ... bit width: 48 Sep 10 05:18:20.818682 kernel: ... generic registers: 6 Sep 10 05:18:20.818689 kernel: ... value mask: 0000ffffffffffff Sep 10 05:18:20.818699 kernel: ... max period: 00007fffffffffff Sep 10 05:18:20.818707 kernel: ... fixed-purpose events: 0 Sep 10 05:18:20.818715 kernel: ... event mask: 000000000000003f Sep 10 05:18:20.818722 kernel: signal: max sigframe size: 1776 Sep 10 05:18:20.818730 kernel: rcu: Hierarchical SRCU implementation. Sep 10 05:18:20.818745 kernel: rcu: Max phase no-delay instances is 400. Sep 10 05:18:20.818753 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 10 05:18:20.818761 kernel: smp: Bringing up secondary CPUs ... Sep 10 05:18:20.818817 kernel: smpboot: x86: Booting SMP configuration: Sep 10 05:18:20.818827 kernel: .... node #0, CPUs: #1 #2 #3 Sep 10 05:18:20.818835 kernel: smp: Brought up 1 node, 4 CPUs Sep 10 05:18:20.818842 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 10 05:18:20.818851 kernel: Memory: 2422672K/2565800K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54068K init, 2900K bss, 137200K reserved, 0K cma-reserved) Sep 10 05:18:20.818858 kernel: devtmpfs: initialized Sep 10 05:18:20.818866 kernel: x86/mm: Memory block size: 128MB Sep 10 05:18:20.818874 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 10 05:18:20.818882 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 10 05:18:20.818890 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Sep 10 05:18:20.818900 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 10 05:18:20.818908 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Sep 10 05:18:20.818915 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 10 05:18:20.818923 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 10 05:18:20.818931 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 10 05:18:20.818939 kernel: pinctrl core: initialized pinctrl subsystem Sep 10 05:18:20.818947 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 10 05:18:20.818954 kernel: audit: initializing netlink subsys (disabled) Sep 10 05:18:20.818964 kernel: audit: type=2000 audit(1757481498.425:1): state=initialized audit_enabled=0 res=1 Sep 10 05:18:20.818972 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 10 05:18:20.818979 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 10 05:18:20.818987 kernel: cpuidle: using governor menu Sep 10 05:18:20.818995 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 10 05:18:20.819002 kernel: dca service started, version 1.12.1 Sep 10 05:18:20.819010 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 
[bus 00-ff] Sep 10 05:18:20.819018 kernel: PCI: Using configuration type 1 for base access Sep 10 05:18:20.819026 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 10 05:18:20.819035 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 10 05:18:20.819043 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 10 05:18:20.819051 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 10 05:18:20.819058 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 10 05:18:20.819066 kernel: ACPI: Added _OSI(Module Device) Sep 10 05:18:20.819074 kernel: ACPI: Added _OSI(Processor Device) Sep 10 05:18:20.819081 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 10 05:18:20.819089 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 10 05:18:20.819096 kernel: ACPI: Interpreter enabled Sep 10 05:18:20.819106 kernel: ACPI: PM: (supports S0 S3 S5) Sep 10 05:18:20.819114 kernel: ACPI: Using IOAPIC for interrupt routing Sep 10 05:18:20.819122 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 10 05:18:20.819130 kernel: PCI: Using E820 reservations for host bridge windows Sep 10 05:18:20.819137 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 10 05:18:20.819145 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 10 05:18:20.819322 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 10 05:18:20.819452 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 10 05:18:20.819607 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 10 05:18:20.819619 kernel: PCI host bridge to bus 0000:00 Sep 10 05:18:20.819865 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 10 05:18:20.819979 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 10 05:18:20.820087 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 10 05:18:20.820207 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 10 05:18:20.820315 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 10 05:18:20.820425 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 10 05:18:20.820532 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 10 05:18:20.820749 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 10 05:18:20.820923 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Sep 10 05:18:20.821045 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Sep 10 05:18:20.821162 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Sep 10 05:18:20.821283 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 10 05:18:20.821400 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 10 05:18:20.821528 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 10 05:18:20.821647 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Sep 10 05:18:20.821788 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Sep 10 05:18:20.821910 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Sep 10 05:18:20.822037 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 10 05:18:20.822160 
kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Sep 10 05:18:20.822277 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Sep 10 05:18:20.822393 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Sep 10 05:18:20.822539 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 10 05:18:20.822657 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Sep 10 05:18:20.822821 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Sep 10 05:18:20.822944 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 10 05:18:20.823066 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Sep 10 05:18:20.823191 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 10 05:18:20.823309 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 10 05:18:20.823435 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 10 05:18:20.823553 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Sep 10 05:18:20.823670 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Sep 10 05:18:20.823819 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 10 05:18:20.823944 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Sep 10 05:18:20.823955 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 10 05:18:20.823963 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 10 05:18:20.823971 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 10 05:18:20.823979 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 10 05:18:20.823987 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 10 05:18:20.823995 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 10 05:18:20.824002 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 10 05:18:20.824013 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 10 05:18:20.824020 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 10 05:18:20.824028 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 10 05:18:20.824036 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 10 05:18:20.824044 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 10 05:18:20.824051 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 10 05:18:20.824059 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 10 05:18:20.824067 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 10 05:18:20.824075 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 10 05:18:20.824085 kernel: iommu: Default domain type: Translated Sep 10 05:18:20.824092 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 10 05:18:20.824100 kernel: efivars: Registered efivars operations Sep 10 05:18:20.824109 kernel: PCI: Using ACPI for IRQ routing Sep 10 05:18:20.824118 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 10 05:18:20.824127 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 10 05:18:20.824136 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Sep 10 05:18:20.824144 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Sep 10 05:18:20.824151 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Sep 10 05:18:20.824161 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Sep 10 05:18:20.824169 kernel: e820: reserve RAM buffer [mem 
0x9c8ed000-0x9fffffff] Sep 10 05:18:20.824177 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Sep 10 05:18:20.824184 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Sep 10 05:18:20.824302 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 10 05:18:20.824499 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 10 05:18:20.824622 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 10 05:18:20.824633 kernel: vgaarb: loaded Sep 10 05:18:20.824645 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 10 05:18:20.824653 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 10 05:18:20.824661 kernel: clocksource: Switched to clocksource kvm-clock Sep 10 05:18:20.824669 kernel: VFS: Disk quotas dquot_6.6.0 Sep 10 05:18:20.824677 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 10 05:18:20.824685 kernel: pnp: PnP ACPI init Sep 10 05:18:20.824863 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 10 05:18:20.824879 kernel: pnp: PnP ACPI: found 6 devices Sep 10 05:18:20.824889 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 10 05:18:20.824897 kernel: NET: Registered PF_INET protocol family Sep 10 05:18:20.824906 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 10 05:18:20.824914 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 10 05:18:20.824922 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 10 05:18:20.824930 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 10 05:18:20.824938 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 10 05:18:20.824947 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 10 05:18:20.824957 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 05:18:20.824965 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 05:18:20.824973 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 10 05:18:20.824981 kernel: NET: Registered PF_XDP protocol family Sep 10 05:18:20.825100 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Sep 10 05:18:20.825220 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Sep 10 05:18:20.825329 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 10 05:18:20.825435 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 10 05:18:20.825551 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 10 05:18:20.825661 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Sep 10 05:18:20.825796 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 10 05:18:20.825904 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 10 05:18:20.825915 kernel: PCI: CLS 0 bytes, default 64 Sep 10 05:18:20.825923 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 10 05:18:20.825932 kernel: Initialise system trusted keyrings Sep 10 05:18:20.825943 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 10 05:18:20.825951 kernel: Key type asymmetric registered Sep 10 05:18:20.825959 kernel: Asymmetric key parser 'x509' registered Sep 10 05:18:20.825967 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Sep 10 05:18:20.825976 kernel: io scheduler mq-deadline registered Sep 10 05:18:20.825984 kernel: io scheduler kyber registered Sep 10 05:18:20.825992 kernel: io scheduler bfq registered Sep 10 05:18:20.826001 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 10 05:18:20.826011 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 10 05:18:20.826019 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 10 05:18:20.826028 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 10 05:18:20.826036 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 10 05:18:20.826044 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 10 05:18:20.826052 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 10 05:18:20.826060 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 10 05:18:20.826069 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 10 05:18:20.826077 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 10 05:18:20.826329 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 10 05:18:20.826442 kernel: rtc_cmos 00:04: registered as rtc0 Sep 10 05:18:20.826551 kernel: rtc_cmos 00:04: setting system clock to 2025-09-10T05:18:20 UTC (1757481500) Sep 10 05:18:20.826660 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 10 05:18:20.826671 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 10 05:18:20.826679 kernel: efifb: probing for efifb Sep 10 05:18:20.826691 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 10 05:18:20.826699 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 10 05:18:20.826709 kernel: efifb: scrolling: redraw Sep 10 05:18:20.826717 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 10 05:18:20.826725 kernel: Console: switching to colour frame buffer device 160x50 Sep 10 05:18:20.826733 kernel: fb0: EFI VGA frame buffer device Sep 10 05:18:20.826750 kernel: pstore: Using crash dump compression: deflate Sep 10 05:18:20.826758 kernel: pstore: Registered efi_pstore as persistent store backend Sep 10 05:18:20.826779 kernel: NET: Registered PF_INET6 protocol family Sep 10 05:18:20.826787 kernel: Segment Routing with IPv6 Sep 10 05:18:20.826795 kernel: In-situ OAM (IOAM) with IPv6 Sep 10 05:18:20.826806 kernel: NET: Registered PF_PACKET protocol family Sep 10 05:18:20.826814 kernel: Key type dns_resolver registered Sep 10 05:18:20.826822 kernel: IPI shorthand broadcast: enabled Sep 10 05:18:20.826830 kernel: sched_clock: Marking stable (3110003572, 151257484)->(3276996729, -15735673) Sep 10 05:18:20.826838 kernel: registered taskstats version 1 Sep 10 05:18:20.826846 kernel: Loading compiled-in X.509 certificates Sep 10 05:18:20.826855 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: f6c45bc801b894d4dac30a723f1f683ea8f7e3ae' Sep 10 05:18:20.826863 kernel: Demotion targets for Node 0: null Sep 10 05:18:20.826871 kernel: Key type .fscrypt registered Sep 10 05:18:20.826881 kernel: Key type fscrypt-provisioning registered Sep 10 05:18:20.826889 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 10 05:18:20.826897 kernel: ima: Allocated hash algorithm: sha1 Sep 10 05:18:20.826905 kernel: ima: No architecture policies found Sep 10 05:18:20.826913 kernel: clk: Disabling unused clocks Sep 10 05:18:20.826921 kernel: Warning: unable to open an initial console. 
Sep 10 05:18:20.826930 kernel: Freeing unused kernel image (initmem) memory: 54068K
Sep 10 05:18:20.826938 kernel: Write protecting the kernel read-only data: 24576k
Sep 10 05:18:20.826948 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 10 05:18:20.826956 kernel: Run /init as init process
Sep 10 05:18:20.826964 kernel: with arguments:
Sep 10 05:18:20.826972 kernel: /init
Sep 10 05:18:20.826980 kernel: with environment:
Sep 10 05:18:20.826988 kernel: HOME=/
Sep 10 05:18:20.826996 kernel: TERM=linux
Sep 10 05:18:20.827004 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 05:18:20.827013 systemd[1]: Successfully made /usr/ read-only.
Sep 10 05:18:20.827026 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 05:18:20.827035 systemd[1]: Detected virtualization kvm.
Sep 10 05:18:20.827043 systemd[1]: Detected architecture x86-64.
Sep 10 05:18:20.827052 systemd[1]: Running in initrd.
Sep 10 05:18:20.827060 systemd[1]: No hostname configured, using default hostname.
Sep 10 05:18:20.827069 systemd[1]: Hostname set to <localhost>.
Sep 10 05:18:20.827077 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 05:18:20.827086 systemd[1]: Queued start job for default target initrd.target.
Sep 10 05:18:20.827096 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 05:18:20.827105 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 05:18:20.827117 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 05:18:20.827126 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 05:18:20.827137 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 05:18:20.827146 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 05:18:20.827158 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 05:18:20.827166 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 05:18:20.827175 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 05:18:20.827183 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 05:18:20.827192 systemd[1]: Reached target paths.target - Path Units.
Sep 10 05:18:20.827200 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 05:18:20.827209 systemd[1]: Reached target swap.target - Swaps.
Sep 10 05:18:20.827218 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 05:18:20.827226 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 05:18:20.827236 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 05:18:20.827245 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 05:18:20.827253 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 10 05:18:20.827262 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 05:18:20.827270 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 10 05:18:20.827279 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 05:18:20.827287 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 05:18:20.827296 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 10 05:18:20.827306 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 05:18:20.827315 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 10 05:18:20.827324 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 10 05:18:20.827332 systemd[1]: Starting systemd-fsck-usr.service... Sep 10 05:18:20.827341 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 05:18:20.827349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 05:18:20.827358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 05:18:20.827366 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 10 05:18:20.827378 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 05:18:20.827386 systemd[1]: Finished systemd-fsck-usr.service. Sep 10 05:18:20.827415 systemd-journald[219]: Collecting audit messages is disabled. Sep 10 05:18:20.827437 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 10 05:18:20.827447 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 05:18:20.827456 systemd-journald[219]: Journal started Sep 10 05:18:20.827475 systemd-journald[219]: Runtime Journal (/run/log/journal/c67de5b1a76048279c632ac32270029e) is 6M, max 48.4M, 42.4M free. Sep 10 05:18:20.818470 systemd-modules-load[220]: Inserted module 'overlay' Sep 10 05:18:20.830789 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 05:18:20.833797 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 05:18:20.838875 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 05:18:20.843941 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 10 05:18:20.849421 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 10 05:18:20.848683 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 05:18:20.851047 systemd-modules-load[220]: Inserted module 'br_netfilter' Sep 10 05:18:20.854093 kernel: Bridge firewalling registered Sep 10 05:18:20.851270 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 05:18:20.852530 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 05:18:20.856519 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 05:18:20.870629 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 10 05:18:20.873641 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 05:18:20.875837 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 10 05:18:20.878036 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 05:18:20.879863 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 05:18:20.882466 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 10 05:18:20.903704 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cb34a525c000ff57e16870cd9f0af09c033a700c5f8ee35d58f46d8926fcf6e5 Sep 10 05:18:20.923190 systemd-resolved[258]: Positive Trust Anchors: Sep 10 05:18:20.923204 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 05:18:20.923234 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 05:18:20.925669 systemd-resolved[258]: Defaulting to hostname 'linux'. Sep 10 05:18:20.926969 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 05:18:20.931796 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 05:18:21.020810 kernel: SCSI subsystem initialized Sep 10 05:18:21.029795 kernel: Loading iSCSI transport class v2.0-870. Sep 10 05:18:21.039796 kernel: iscsi: registered transport (tcp) Sep 10 05:18:21.062863 kernel: iscsi: registered transport (qla4xxx) Sep 10 05:18:21.062895 kernel: QLogic iSCSI HBA Driver Sep 10 05:18:21.083364 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 05:18:21.107706 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 05:18:21.110531 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 05:18:21.165307 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 10 05:18:21.167895 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 10 05:18:21.232807 kernel: raid6: avx2x4 gen() 26642 MB/s Sep 10 05:18:21.249788 kernel: raid6: avx2x2 gen() 30402 MB/s Sep 10 05:18:21.266951 kernel: raid6: avx2x1 gen() 25804 MB/s Sep 10 05:18:21.266968 kernel: raid6: using algorithm avx2x2 gen() 30402 MB/s Sep 10 05:18:21.284980 kernel: raid6: .... xor() 19755 MB/s, rmw enabled Sep 10 05:18:21.284994 kernel: raid6: using avx2x2 recovery algorithm Sep 10 05:18:21.320800 kernel: xor: automatically using best checksumming function avx Sep 10 05:18:21.494821 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 10 05:18:21.504017 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 10 05:18:21.507842 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 05:18:21.544535 systemd-udevd[470]: Using default interface naming scheme 'v255'. 
Sep 10 05:18:21.550059 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 05:18:21.553213 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 10 05:18:21.578421 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation Sep 10 05:18:21.608327 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 05:18:21.611724 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 05:18:21.684760 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 05:18:21.688681 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 10 05:18:21.719790 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 10 05:18:21.730799 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 10 05:18:21.738834 kernel: cryptd: max_cpu_qlen set to 1000 Sep 10 05:18:21.745789 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 10 05:18:21.760013 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 10 05:18:21.760073 kernel: GPT:9289727 != 19775487 Sep 10 05:18:21.760085 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 10 05:18:21.760112 kernel: GPT:9289727 != 19775487 Sep 10 05:18:21.760123 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 10 05:18:21.760133 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 05:18:21.761169 kernel: AES CTR mode by8 optimization enabled Sep 10 05:18:21.762431 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 05:18:21.762502 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 05:18:21.768826 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 05:18:21.771855 kernel: libata version 3.00 loaded. Sep 10 05:18:21.774622 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 05:18:21.784628 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 05:18:21.784750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 05:18:21.792962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 05:18:21.796874 kernel: ahci 0000:00:1f.2: version 3.0 Sep 10 05:18:21.797066 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 10 05:18:21.808780 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 10 05:18:21.808954 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 10 05:18:21.809095 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 10 05:18:21.812547 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 10 05:18:21.815564 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 10 05:18:21.823790 kernel: scsi host0: ahci Sep 10 05:18:21.824043 kernel: scsi host1: ahci Sep 10 05:18:21.825009 kernel: scsi host2: ahci Sep 10 05:18:21.826866 kernel: scsi host3: ahci Sep 10 05:18:21.827278 kernel: scsi host4: ahci Sep 10 05:18:21.829950 kernel: scsi host5: ahci Sep 10 05:18:21.830175 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 10 05:18:21.830195 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 10 05:18:21.830211 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 10 05:18:21.831350 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 10 05:18:21.831367 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 10 05:18:21.833151 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 10 05:18:21.834892 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 10 05:18:21.847251 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 05:18:21.855935 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 10 05:18:21.857149 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 10 05:18:21.865652 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 10 05:18:21.881911 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 05:18:21.893254 disk-uuid[635]: Primary Header is updated. Sep 10 05:18:21.893254 disk-uuid[635]: Secondary Entries is updated. Sep 10 05:18:21.893254 disk-uuid[635]: Secondary Header is updated. Sep 10 05:18:21.897791 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 05:18:21.901793 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 05:18:22.142805 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 10 05:18:22.142883 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 10 05:18:22.142894 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 10 05:18:22.143794 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 10 05:18:22.144809 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 10 05:18:22.145809 kernel: ata3.00: LPM support broken, forcing max_power Sep 10 05:18:22.145827 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 10 05:18:22.146845 kernel: ata3.00: applying bridge limits Sep 10 05:18:22.147976 kernel: ata3.00: LPM support broken, forcing max_power Sep 10 05:18:22.147990 kernel: ata3.00: configured for UDMA/100 Sep 10 05:18:22.148802 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 10 05:18:22.160805 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 10 05:18:22.201811 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 10 05:18:22.202073 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 10 05:18:22.227801 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 10 05:18:22.657862 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 10 05:18:22.659866 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 05:18:22.661822 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 10 05:18:22.663175 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 05:18:22.665584 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 10 05:18:22.847203 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 10 05:18:22.903818 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 05:18:22.904026 disk-uuid[636]: The operation has completed successfully. Sep 10 05:18:22.928461 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 10 05:18:22.928585 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 10 05:18:22.965867 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 10 05:18:22.990161 sh[666]: Success Sep 10 05:18:23.010496 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 10 05:18:23.010539 kernel: device-mapper: uevent: version 1.0.3 Sep 10 05:18:23.010560 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 10 05:18:23.019811 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 10 05:18:23.047873 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 10 05:18:23.050070 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 10 05:18:23.065693 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 10 05:18:23.070802 kernel: BTRFS: device fsid d8201365-420d-4e6d-a9af-b12a81c8fc98 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (678) Sep 10 05:18:23.072956 kernel: BTRFS info (device dm-0): first mount of filesystem d8201365-420d-4e6d-a9af-b12a81c8fc98 Sep 10 05:18:23.072979 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 10 05:18:23.077796 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 10 05:18:23.077858 kernel: BTRFS info (device dm-0): enabling free space tree Sep 10 05:18:23.079126 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 10 05:18:23.080577 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 10 05:18:23.081978 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 10 05:18:23.082780 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 10 05:18:23.084436 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 10 05:18:23.109793 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710) Sep 10 05:18:23.109822 kernel: BTRFS info (device vda6): first mount of filesystem 44235b0d-89ef-44b4-a2ec-00ee2c04a5f6 Sep 10 05:18:23.111545 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 05:18:23.114801 kernel: BTRFS info (device vda6): turning on async discard Sep 10 05:18:23.114856 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 05:18:23.119796 kernel: BTRFS info (device vda6): last unmount of filesystem 44235b0d-89ef-44b4-a2ec-00ee2c04a5f6 Sep 10 05:18:23.120114 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 10 05:18:23.122504 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 10 05:18:23.279273 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 10 05:18:23.283592 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 05:18:23.332122 ignition[750]: Ignition 2.22.0 Sep 10 05:18:23.332136 ignition[750]: Stage: fetch-offline Sep 10 05:18:23.332195 ignition[750]: no configs at "/usr/lib/ignition/base.d" Sep 10 05:18:23.332204 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 05:18:23.332327 ignition[750]: parsed url from cmdline: "" Sep 10 05:18:23.332331 ignition[750]: no config URL provided Sep 10 05:18:23.332338 ignition[750]: reading system config file "/usr/lib/ignition/user.ign" Sep 10 05:18:23.332347 ignition[750]: no config at "/usr/lib/ignition/user.ign" Sep 10 05:18:23.332372 ignition[750]: op(1): [started] loading QEMU firmware config module Sep 10 05:18:23.332377 ignition[750]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 10 05:18:23.343468 ignition[750]: op(1): [finished] loading QEMU firmware config module Sep 10 05:18:23.367924 systemd-networkd[848]: lo: Link UP Sep 10 05:18:23.367934 systemd-networkd[848]: lo: Gained carrier Sep 10 05:18:23.371023 systemd-networkd[848]: Enumeration completed Sep 10 05:18:23.371723 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 05:18:23.372112 systemd-networkd[848]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 05:18:23.372117 systemd-networkd[848]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 05:18:23.373464 systemd-networkd[848]: eth0: Link UP Sep 10 05:18:23.373606 systemd[1]: Reached target network.target - Network. Sep 10 05:18:23.373636 systemd-networkd[848]: eth0: Gained carrier Sep 10 05:18:23.373645 systemd-networkd[848]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 05:18:23.392813 systemd-networkd[848]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 05:18:23.395090 ignition[750]: parsing config with SHA512: 3a2a896e31996f53603b338e02396b51c0494c7d9a7d669c0f64e7487fa630628ba5e35a8a433534a898b74f48e30035e5f398efba2000ae62bdc872a8f10065 Sep 10 05:18:23.407522 unknown[750]: fetched base config from "system" Sep 10 05:18:23.407533 unknown[750]: fetched user config from "qemu" Sep 10 05:18:23.408090 ignition[750]: fetch-offline: fetch-offline passed Sep 10 05:18:23.408179 ignition[750]: Ignition finished successfully Sep 10 05:18:23.411074 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 05:18:23.413436 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 10 05:18:23.415443 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 10 05:18:23.485122 ignition[861]: Ignition 2.22.0 Sep 10 05:18:23.485136 ignition[861]: Stage: kargs Sep 10 05:18:23.485280 ignition[861]: no configs at "/usr/lib/ignition/base.d" Sep 10 05:18:23.485291 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 05:18:23.486015 ignition[861]: kargs: kargs passed Sep 10 05:18:23.486058 ignition[861]: Ignition finished successfully Sep 10 05:18:23.493516 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 10 05:18:23.495039 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
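[Annotation] Ignition's fetch-offline stage above loads the qemu_fw_cfg module and then parses a config delivered over QEMU's firmware-config interface, logging its SHA512. A minimal sketch of how such a config can be read from inside the guest follows; the sysfs path and the "opt/com.coreos/config" entry name are assumptions based on the usual qemu_fw_cfg and Ignition conventions, not values taken from this log.

    import hashlib, json, pathlib

    # Assumed location: qemu_fw_cfg exposes entries under sysfs, and Ignition's
    # QEMU provider conventionally uses the "opt/com.coreos/config" entry.
    FW_CFG = pathlib.Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw")

    def read_ignition_config() -> dict:
        raw = FW_CFG.read_bytes()
        # The journal above logs the SHA512 of the config in a similar way.
        print("config sha512:", hashlib.sha512(raw).hexdigest())
        return json.loads(raw)

    if __name__ == "__main__":
        if FW_CFG.exists():
            cfg = read_ignition_config()
            print("ignition version:", cfg.get("ignition", {}).get("version"))
        else:
            print("no fw_cfg entry present on this machine")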
Sep 10 05:18:23.544986 ignition[869]: Ignition 2.22.0 Sep 10 05:18:23.545000 ignition[869]: Stage: disks Sep 10 05:18:23.545131 ignition[869]: no configs at "/usr/lib/ignition/base.d" Sep 10 05:18:23.545141 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 05:18:23.548473 ignition[869]: disks: disks passed Sep 10 05:18:23.548521 ignition[869]: Ignition finished successfully Sep 10 05:18:23.552605 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 10 05:18:23.553326 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 10 05:18:23.555101 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 10 05:18:23.555408 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 05:18:23.555734 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 05:18:23.556205 systemd[1]: Reached target basic.target - Basic System. Sep 10 05:18:23.557356 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 10 05:18:23.587553 systemd-fsck[879]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 10 05:18:23.595655 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 10 05:18:23.599836 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 10 05:18:23.713796 kernel: EXT4-fs (vda9): mounted filesystem 8812db3a-0650-4908-b2d8-56c2f0883ee2 r/w with ordered data mode. Quota mode: none. Sep 10 05:18:23.714841 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 10 05:18:23.716895 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 10 05:18:23.718691 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 05:18:23.720364 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 10 05:18:23.722626 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 10 05:18:23.722688 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 10 05:18:23.724337 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 05:18:23.731209 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 10 05:18:23.733968 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 10 05:18:23.738052 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887) Sep 10 05:18:23.738072 kernel: BTRFS info (device vda6): first mount of filesystem 44235b0d-89ef-44b4-a2ec-00ee2c04a5f6 Sep 10 05:18:23.738083 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 05:18:23.741297 kernel: BTRFS info (device vda6): turning on async discard Sep 10 05:18:23.741318 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 05:18:23.743457 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 10 05:18:23.772081 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory Sep 10 05:18:23.777301 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory Sep 10 05:18:23.781542 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory Sep 10 05:18:23.786325 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory Sep 10 05:18:23.874286 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
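[Annotation] systemd-fsck reports "ROOT: clean, 15/553520 files, 52789/553472 blocks". As a quick aid to reading that line, the counts translate into inode and block usage as in the trivial computation below (a worked example, not output from the system).

    files_used, files_total = 15, 553_520
    blocks_used, blocks_total = 52_789, 553_472

    print(f"inodes: {files_used}/{files_total} ({100 * files_used / files_total:.3f}% used)")
    print(f"blocks: {blocks_used}/{blocks_total} ({100 * blocks_used / blocks_total:.1f}% used)")
    # -> roughly 0.003% of inodes and 9.5% of filesystem blocks in use on ROOT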
Sep 10 05:18:23.875513 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 10 05:18:23.877952 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 10 05:18:23.911791 kernel: BTRFS info (device vda6): last unmount of filesystem 44235b0d-89ef-44b4-a2ec-00ee2c04a5f6 Sep 10 05:18:23.923947 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 10 05:18:23.940946 ignition[1001]: INFO : Ignition 2.22.0 Sep 10 05:18:23.940946 ignition[1001]: INFO : Stage: mount Sep 10 05:18:23.942604 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 05:18:23.942604 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 05:18:23.942604 ignition[1001]: INFO : mount: mount passed Sep 10 05:18:23.942604 ignition[1001]: INFO : Ignition finished successfully Sep 10 05:18:23.948750 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 10 05:18:23.950008 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 10 05:18:24.071605 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 10 05:18:24.073540 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 05:18:24.106613 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1013) Sep 10 05:18:24.106671 kernel: BTRFS info (device vda6): first mount of filesystem 44235b0d-89ef-44b4-a2ec-00ee2c04a5f6 Sep 10 05:18:24.106691 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 05:18:24.111364 kernel: BTRFS info (device vda6): turning on async discard Sep 10 05:18:24.111404 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 05:18:24.114526 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 10 05:18:24.172220 ignition[1030]: INFO : Ignition 2.22.0 Sep 10 05:18:24.172220 ignition[1030]: INFO : Stage: files Sep 10 05:18:24.173854 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 05:18:24.173854 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 05:18:24.173854 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping Sep 10 05:18:24.193335 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 10 05:18:24.193335 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 10 05:18:24.199668 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 10 05:18:24.201138 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 10 05:18:24.203205 unknown[1030]: wrote ssh authorized keys file for user: core Sep 10 05:18:24.204373 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 10 05:18:24.205734 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 10 05:18:24.205734 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 10 05:18:24.365003 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 10 05:18:24.447973 systemd-networkd[848]: eth0: Gained IPv6LL Sep 10 05:18:25.221919 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 10 05:18:25.221919 ignition[1030]: INFO : 
files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 10 05:18:25.226150 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 10 05:18:25.226150 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 10 05:18:25.226150 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 10 05:18:25.226150 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 05:18:25.226150 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 05:18:25.226150 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 05:18:25.226150 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 05:18:25.238464 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 05:18:25.238464 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 05:18:25.238464 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 05:18:25.238464 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 05:18:25.238464 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 05:18:25.238464 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 10 05:18:25.747760 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 10 05:18:26.794586 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 05:18:26.794586 ignition[1030]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 10 05:18:26.798415 ignition[1030]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 05:18:26.803125 ignition[1030]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 05:18:26.803125 ignition[1030]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 10 05:18:26.803125 ignition[1030]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 10 05:18:26.807341 ignition[1030]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 05:18:26.807341 ignition[1030]: INFO : files: op(d): op(e): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 05:18:26.807341 ignition[1030]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 10 05:18:26.807341 ignition[1030]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 10 05:18:26.829303 ignition[1030]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 05:18:26.833700 ignition[1030]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 05:18:26.835273 ignition[1030]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 10 05:18:26.835273 ignition[1030]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 10 05:18:26.835273 ignition[1030]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 10 05:18:26.835273 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 10 05:18:26.835273 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 10 05:18:26.835273 ignition[1030]: INFO : files: files passed Sep 10 05:18:26.835273 ignition[1030]: INFO : Ignition finished successfully Sep 10 05:18:26.843129 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 10 05:18:26.846226 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 10 05:18:26.848606 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 10 05:18:26.871038 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 10 05:18:26.871189 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 10 05:18:26.875630 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory Sep 10 05:18:26.878812 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 05:18:26.880423 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 10 05:18:26.881888 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 05:18:26.883876 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 05:18:26.886471 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 10 05:18:26.888064 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 10 05:18:26.954285 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 10 05:18:26.954417 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 10 05:18:26.957506 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 10 05:18:26.959409 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 10 05:18:26.959750 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 10 05:18:26.960736 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 10 05:18:26.977862 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Sep 10 05:18:26.981434 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 10 05:18:27.011128 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 10 05:18:27.013395 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 05:18:27.015680 systemd[1]: Stopped target timers.target - Timer Units. Sep 10 05:18:27.016297 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 10 05:18:27.016483 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 05:18:27.019723 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 10 05:18:27.020185 systemd[1]: Stopped target basic.target - Basic System. Sep 10 05:18:27.020495 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 10 05:18:27.020856 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 05:18:27.021310 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 10 05:18:27.021600 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 10 05:18:27.022081 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 10 05:18:27.022405 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 05:18:27.022762 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 10 05:18:27.023246 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 10 05:18:27.039292 systemd[1]: Stopped target swap.target - Swaps. Sep 10 05:18:27.039613 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 10 05:18:27.039789 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 10 05:18:27.041552 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 10 05:18:27.042041 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 05:18:27.042317 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 10 05:18:27.047322 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 05:18:27.048064 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 05:18:27.048170 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 10 05:18:27.053173 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 05:18:27.053299 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 05:18:27.053721 systemd[1]: Stopped target paths.target - Path Units. Sep 10 05:18:27.054112 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 05:18:27.060837 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 05:18:27.061337 systemd[1]: Stopped target slices.target - Slice Units. Sep 10 05:18:27.061650 systemd[1]: Stopped target sockets.target - Socket Units. Sep 10 05:18:27.062130 systemd[1]: iscsid.socket: Deactivated successfully. Sep 10 05:18:27.062220 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 05:18:27.066876 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 10 05:18:27.066962 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 05:18:27.067329 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Sep 10 05:18:27.067440 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 05:18:27.070087 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 05:18:27.070200 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 10 05:18:27.074471 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 10 05:18:27.074993 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 05:18:27.075118 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 05:18:27.078179 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 10 05:18:27.079759 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 05:18:27.079887 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 05:18:27.082174 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 05:18:27.082273 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 05:18:27.089332 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 05:18:27.098906 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 10 05:18:27.118913 ignition[1085]: INFO : Ignition 2.22.0 Sep 10 05:18:27.118913 ignition[1085]: INFO : Stage: umount Sep 10 05:18:27.118913 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 05:18:27.118913 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 05:18:27.118913 ignition[1085]: INFO : umount: umount passed Sep 10 05:18:27.118913 ignition[1085]: INFO : Ignition finished successfully Sep 10 05:18:27.120863 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 05:18:27.121481 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 05:18:27.121608 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 10 05:18:27.123099 systemd[1]: Stopped target network.target - Network. Sep 10 05:18:27.124549 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 10 05:18:27.124615 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 10 05:18:27.126211 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 05:18:27.126256 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 10 05:18:27.127944 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 05:18:27.127992 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 10 05:18:27.129189 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 10 05:18:27.129232 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 10 05:18:27.129617 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 10 05:18:27.132518 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 10 05:18:27.143252 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 10 05:18:27.143399 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 10 05:18:27.148178 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 10 05:18:27.148421 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 10 05:18:27.148535 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 10 05:18:27.152007 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
Sep 10 05:18:27.152885 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 10 05:18:27.153705 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 10 05:18:27.153748 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 10 05:18:27.156786 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 10 05:18:27.157727 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 10 05:18:27.157791 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 05:18:27.160432 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 05:18:27.160478 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 05:18:27.163307 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 10 05:18:27.163353 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 10 05:18:27.163999 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 10 05:18:27.164039 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 05:18:27.168296 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 05:18:27.169604 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 10 05:18:27.169666 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 10 05:18:27.179055 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 05:18:27.179183 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 10 05:18:27.191470 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 05:18:27.191653 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 05:18:27.192452 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 05:18:27.192496 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 10 05:18:27.195015 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 05:18:27.195052 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 05:18:27.195301 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 05:18:27.195345 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 10 05:18:27.196088 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 05:18:27.196135 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 10 05:18:27.196746 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 05:18:27.196800 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 05:18:27.206093 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 10 05:18:27.206494 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 10 05:18:27.206540 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 05:18:27.211081 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 10 05:18:27.211130 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 05:18:27.214355 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 05:18:27.214401 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 10 05:18:27.218650 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 10 05:18:27.218706 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 10 05:18:27.218754 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 10 05:18:27.231442 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 05:18:27.231569 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 10 05:18:27.252009 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 10 05:18:27.252138 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 10 05:18:27.252884 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 10 05:18:27.253138 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 05:18:27.253187 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 10 05:18:27.254181 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 10 05:18:27.273345 systemd[1]: Switching root. Sep 10 05:18:27.327322 systemd-journald[219]: Journal stopped Sep 10 05:18:28.388404 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). Sep 10 05:18:28.388467 kernel: SELinux: policy capability network_peer_controls=1 Sep 10 05:18:28.388487 kernel: SELinux: policy capability open_perms=1 Sep 10 05:18:28.388499 kernel: SELinux: policy capability extended_socket_class=1 Sep 10 05:18:28.388510 kernel: SELinux: policy capability always_check_network=0 Sep 10 05:18:28.388521 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 10 05:18:28.388545 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 10 05:18:28.388556 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 10 05:18:28.388567 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 10 05:18:28.388583 kernel: SELinux: policy capability userspace_initial_context=0 Sep 10 05:18:28.388601 kernel: audit: type=1403 audit(1757481507.631:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 10 05:18:28.388614 systemd[1]: Successfully loaded SELinux policy in 59.770ms. Sep 10 05:18:28.388636 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.219ms. Sep 10 05:18:28.388650 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 10 05:18:28.388662 systemd[1]: Detected virtualization kvm. Sep 10 05:18:28.388678 systemd[1]: Detected architecture x86-64. Sep 10 05:18:28.388690 systemd[1]: Detected first boot. Sep 10 05:18:28.388702 systemd[1]: Initializing machine ID from VM UUID. Sep 10 05:18:28.388714 zram_generator::config[1131]: No configuration found. Sep 10 05:18:28.388727 kernel: Guest personality initialized and is inactive Sep 10 05:18:28.388738 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 10 05:18:28.388749 kernel: Initialized host personality Sep 10 05:18:28.388761 kernel: NET: Registered PF_VSOCK protocol family Sep 10 05:18:28.388903 systemd[1]: Populated /etc with preset unit settings. Sep 10 05:18:28.388921 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
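[Annotation] After switch-root, systemd detects a first boot and initializes the machine ID from the VM UUID. On a KVM guest this essentially means taking the DMI product UUID, stripping its dashes and writing it as 32 lowercase hex characters; the sketch below shows that derivation under those assumptions (reading product_uuid normally requires root, and the exact source systemd uses is inferred, not shown in this log).

    import pathlib, re

    PRODUCT_UUID = pathlib.Path("/sys/class/dmi/id/product_uuid")  # root-only on most systems

    def machine_id_from_vm_uuid() -> str:
        uuid = PRODUCT_UUID.read_text().strip().lower()
        mid = uuid.replace("-", "")
        if not re.fullmatch(r"[0-9a-f]{32}", mid):
            raise ValueError(f"unexpected UUID format: {uuid!r}")
        return mid                    # same format as /etc/machine-id

    if __name__ == "__main__":
        try:
            print(machine_id_from_vm_uuid())
        except (PermissionError, FileNotFoundError) as err:
            print("cannot derive machine id here:", err)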
Sep 10 05:18:28.388943 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 10 05:18:28.388955 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 10 05:18:28.388967 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 10 05:18:28.388979 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 10 05:18:28.388997 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 10 05:18:28.389013 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 10 05:18:28.389030 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 10 05:18:28.389048 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 10 05:18:28.389064 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 10 05:18:28.389080 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 10 05:18:28.389102 systemd[1]: Created slice user.slice - User and Session Slice. Sep 10 05:18:28.389119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 05:18:28.389135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 05:18:28.389151 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 10 05:18:28.389167 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 10 05:18:28.389183 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 10 05:18:28.389203 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 05:18:28.389219 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 10 05:18:28.389234 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 05:18:28.389250 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 05:18:28.389266 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 10 05:18:28.389282 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 10 05:18:28.389299 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 10 05:18:28.389315 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 10 05:18:28.389335 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 05:18:28.389352 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 05:18:28.389370 systemd[1]: Reached target slices.target - Slice Units. Sep 10 05:18:28.389388 systemd[1]: Reached target swap.target - Swaps. Sep 10 05:18:28.389406 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 10 05:18:28.389422 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 10 05:18:28.389438 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 10 05:18:28.389454 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 05:18:28.389470 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 10 05:18:28.389490 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 10 05:18:28.389506 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 10 05:18:28.389530 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 10 05:18:28.389547 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 10 05:18:28.389564 systemd[1]: Mounting media.mount - External Media Directory... Sep 10 05:18:28.389580 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 05:18:28.389597 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 10 05:18:28.389613 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 10 05:18:28.389628 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 10 05:18:28.389649 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 05:18:28.389665 systemd[1]: Reached target machines.target - Containers. Sep 10 05:18:28.389681 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 10 05:18:28.389698 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 05:18:28.389714 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 05:18:28.389731 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 10 05:18:28.389747 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 05:18:28.389778 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 05:18:28.389800 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 05:18:28.389816 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 10 05:18:28.389833 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 05:18:28.389850 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 05:18:28.389866 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 10 05:18:28.389883 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 10 05:18:28.389899 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 10 05:18:28.389915 systemd[1]: Stopped systemd-fsck-usr.service. Sep 10 05:18:28.389939 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 05:18:28.389956 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 05:18:28.389971 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 05:18:28.389987 kernel: fuse: init (API version 7.41) Sep 10 05:18:28.390002 kernel: loop: module loaded Sep 10 05:18:28.390018 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 05:18:28.390035 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 10 05:18:28.390052 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Sep 10 05:18:28.390072 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 05:18:28.390089 systemd[1]: verity-setup.service: Deactivated successfully. Sep 10 05:18:28.390105 systemd[1]: Stopped verity-setup.service. Sep 10 05:18:28.390122 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 05:18:28.390141 kernel: ACPI: bus type drm_connector registered Sep 10 05:18:28.390157 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 10 05:18:28.390174 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 10 05:18:28.390190 systemd[1]: Mounted media.mount - External Media Directory. Sep 10 05:18:28.390207 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 10 05:18:28.390223 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 10 05:18:28.390240 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 10 05:18:28.390259 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 10 05:18:28.390300 systemd-journald[1202]: Collecting audit messages is disabled. Sep 10 05:18:28.390333 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 05:18:28.390349 systemd-journald[1202]: Journal started Sep 10 05:18:28.390378 systemd-journald[1202]: Runtime Journal (/run/log/journal/c67de5b1a76048279c632ac32270029e) is 6M, max 48.4M, 42.4M free. Sep 10 05:18:28.136660 systemd[1]: Queued start job for default target multi-user.target. Sep 10 05:18:28.159694 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 10 05:18:28.160139 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 10 05:18:28.391897 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 05:18:28.393253 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 05:18:28.393464 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 10 05:18:28.394905 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 05:18:28.395117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 05:18:28.396473 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 05:18:28.396682 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 05:18:28.398005 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 05:18:28.398218 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 05:18:28.399730 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 05:18:28.399965 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 10 05:18:28.401269 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 05:18:28.401468 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 05:18:28.402975 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 05:18:28.404348 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 05:18:28.405853 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 10 05:18:28.407344 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 10 05:18:28.420579 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Sep 10 05:18:28.423057 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 10 05:18:28.425123 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 10 05:18:28.426282 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 05:18:28.426369 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 05:18:28.428294 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 10 05:18:28.436879 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 10 05:18:28.438696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 05:18:28.439857 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 10 05:18:28.442543 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 10 05:18:28.443896 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 05:18:28.447871 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 10 05:18:28.448998 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 05:18:28.450992 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 05:18:28.454010 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 10 05:18:28.458330 systemd-journald[1202]: Time spent on flushing to /var/log/journal/c67de5b1a76048279c632ac32270029e is 18.674ms for 1070 entries. Sep 10 05:18:28.458330 systemd-journald[1202]: System Journal (/var/log/journal/c67de5b1a76048279c632ac32270029e) is 8M, max 195.6M, 187.6M free. Sep 10 05:18:28.486154 systemd-journald[1202]: Received client request to flush runtime journal. Sep 10 05:18:28.486193 kernel: loop0: detected capacity change from 0 to 110984 Sep 10 05:18:28.464623 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 10 05:18:28.467802 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 10 05:18:28.469074 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 10 05:18:28.477145 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 05:18:28.482946 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 10 05:18:28.486129 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 10 05:18:28.492939 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 10 05:18:28.495136 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 10 05:18:28.504798 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 05:18:28.504828 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 05:18:28.524294 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 10 05:18:28.532032 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Sep 10 05:18:28.533786 kernel: loop1: detected capacity change from 0 to 128016 Sep 10 05:18:28.537243 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 05:18:28.560793 kernel: loop2: detected capacity change from 0 to 221472 Sep 10 05:18:28.561717 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Sep 10 05:18:28.561734 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Sep 10 05:18:28.568287 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 05:18:28.590793 kernel: loop3: detected capacity change from 0 to 110984 Sep 10 05:18:28.598805 kernel: loop4: detected capacity change from 0 to 128016 Sep 10 05:18:28.612814 kernel: loop5: detected capacity change from 0 to 221472 Sep 10 05:18:28.620245 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 10 05:18:28.620916 (sd-merge)[1274]: Merged extensions into '/usr'. Sep 10 05:18:28.625293 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)... Sep 10 05:18:28.625311 systemd[1]: Reloading... Sep 10 05:18:28.676796 zram_generator::config[1297]: No configuration found. Sep 10 05:18:28.781614 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 05:18:28.868300 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 05:18:28.868703 systemd[1]: Reloading finished in 242 ms. Sep 10 05:18:28.903628 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 10 05:18:28.905545 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 10 05:18:28.919169 systemd[1]: Starting ensure-sysext.service... Sep 10 05:18:28.920970 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 05:18:28.936956 systemd[1]: Reload requested from client PID 1337 ('systemctl') (unit ensure-sysext.service)... Sep 10 05:18:28.936968 systemd[1]: Reloading... Sep 10 05:18:28.942082 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 10 05:18:28.942127 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 10 05:18:28.942445 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 05:18:28.942707 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 10 05:18:28.944062 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 10 05:18:28.944328 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. Sep 10 05:18:28.944399 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. Sep 10 05:18:28.948875 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 05:18:28.948885 systemd-tmpfiles[1338]: Skipping /boot Sep 10 05:18:28.958983 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 05:18:28.959053 systemd-tmpfiles[1338]: Skipping /boot Sep 10 05:18:28.984850 zram_generator::config[1368]: No configuration found. Sep 10 05:18:29.163901 systemd[1]: Reloading finished in 226 ms. Sep 10 05:18:29.189300 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
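[Annotation] The (sd-merge) lines show systemd-sysext activating the containerd-flatcar, docker-flatcar and kubernetes extension images and merging them over /usr, which is why /etc/extensions/kubernetes.raw was linked during the Ignition files stage. A hedged sketch of how one might enumerate the images systemd-sysext would consider is below; it only lists candidates from the conventional search directories and skips parsing the embedded extension-release metadata.

    import pathlib

    # Directories systemd-sysext scans for extension images.
    SYSEXT_DIRS = [
        pathlib.Path("/etc/extensions"),
        pathlib.Path("/run/extensions"),
        pathlib.Path("/var/lib/extensions"),
    ]

    def list_extension_images() -> list[str]:
        found = []
        for d in SYSEXT_DIRS:
            if not d.is_dir():
                continue
            for entry in sorted(d.iterdir()):
                # Raw disk images and plain directories are both accepted.
                if entry.suffix == ".raw" or entry.is_dir():
                    found.append(f"{entry.name}  ({d})")
        return found

    if __name__ == "__main__":
        for line in list_extension_images() or ["no extension images found"]:
            print(line)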
Sep 10 05:18:29.210573 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 05:18:29.219301 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 05:18:29.222296 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 10 05:18:29.224741 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 10 05:18:29.233762 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 05:18:29.237140 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 05:18:29.239838 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 10 05:18:29.245305 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 05:18:29.245585 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 05:18:29.252588 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 05:18:29.256939 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 05:18:29.259460 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 05:18:29.261233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 05:18:29.261338 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 05:18:29.267003 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 10 05:18:29.268213 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 05:18:29.273560 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 10 05:18:29.280805 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 05:18:29.281043 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 05:18:29.283016 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 05:18:29.283351 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 05:18:29.285205 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 05:18:29.285421 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 05:18:29.288836 systemd-udevd[1408]: Using default interface naming scheme 'v255'. Sep 10 05:18:29.295620 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 10 05:18:29.296335 augenrules[1437]: No rules Sep 10 05:18:29.297554 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 05:18:29.297952 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 05:18:29.304138 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 05:18:29.305447 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 05:18:29.307976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 10 05:18:29.309054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 05:18:29.312062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 05:18:29.325848 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 05:18:29.328610 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 05:18:29.330153 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 05:18:29.330280 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 05:18:29.331528 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 10 05:18:29.333844 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 05:18:29.335131 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 05:18:29.336895 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 10 05:18:29.346042 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 10 05:18:29.348118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 05:18:29.348730 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 05:18:29.350975 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 05:18:29.351298 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 05:18:29.353223 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 05:18:29.353423 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 05:18:29.355059 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 05:18:29.355254 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 05:18:29.363789 augenrules[1444]: /sbin/augenrules: No change Sep 10 05:18:29.370530 systemd[1]: Finished ensure-sysext.service. Sep 10 05:18:29.383560 augenrules[1504]: No rules Sep 10 05:18:29.383808 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 05:18:29.385125 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 05:18:29.385188 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 05:18:29.388940 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 10 05:18:29.390710 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 05:18:29.391531 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 05:18:29.392868 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 05:18:29.395023 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 10 05:18:29.415830 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Sep 10 05:18:29.480550 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 05:18:29.489758 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 10 05:18:29.492845 kernel: mousedev: PS/2 mouse device common for all mice Sep 10 05:18:29.496797 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 10 05:18:29.500793 kernel: ACPI: button: Power Button [PWRF] Sep 10 05:18:29.517968 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 05:18:29.536632 systemd-resolved[1407]: Positive Trust Anchors: Sep 10 05:18:29.539021 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 05:18:29.539055 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 05:18:29.542323 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 10 05:18:29.542685 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 10 05:18:29.543020 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 10 05:18:29.546366 systemd-resolved[1407]: Defaulting to hostname 'linux'. Sep 10 05:18:29.558058 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 05:18:29.559985 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 05:18:29.568708 systemd-networkd[1509]: lo: Link UP Sep 10 05:18:29.568982 systemd-networkd[1509]: lo: Gained carrier Sep 10 05:18:29.572140 systemd-networkd[1509]: Enumeration completed Sep 10 05:18:29.572261 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 05:18:29.573586 systemd[1]: Reached target network.target - Network. Sep 10 05:18:29.574488 systemd-networkd[1509]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 05:18:29.574593 systemd-networkd[1509]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 05:18:29.576866 systemd-networkd[1509]: eth0: Link UP Sep 10 05:18:29.577087 systemd-networkd[1509]: eth0: Gained carrier Sep 10 05:18:29.577401 systemd-networkd[1509]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 05:18:29.578596 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 10 05:18:29.582981 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 05:18:29.590830 systemd-networkd[1509]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 05:18:29.608140 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 10 05:18:29.609658 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
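[Annotation] systemd-resolved logs its built-in positive trust anchor for the root zone, ". IN DS 20326 8 2 e06d...", i.e. the delegation-signer record for the root key-signing key. A DS record is just four fields (key tag, algorithm, digest type, digest); the small parser below makes the line easier to read. Field meanings follow standard DNSSEC conventions, and the name tables only cover the values that appear here.

    DS_LINE = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

    ALGORITHMS = {8: "RSA/SHA-256"}
    DIGEST_TYPES = {2: "SHA-256"}

    def parse_ds(line: str) -> dict:
        owner, _cls, _type, key_tag, alg, digest_type, digest = line.split()
        return {
            "owner": owner,                            # "." is the DNS root zone
            "key_tag": int(key_tag),
            "algorithm": ALGORITHMS.get(int(alg), alg),
            "digest_type": DIGEST_TYPES.get(int(digest_type), digest_type),
            "digest": digest,
        }

    if __name__ == "__main__":
        for k, v in parse_ds(DS_LINE).items():
            print(f"{k:12} {v}")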
Sep 10 05:18:29.611069 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 05:18:29.612509 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 05:18:29.613745 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 05:18:30.680755 systemd-resolved[1407]: Clock change detected. Flushing caches. Sep 10 05:18:30.680783 systemd-timesyncd[1510]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 05:18:30.680823 systemd-timesyncd[1510]: Initial clock synchronization to Wed 2025-09-10 05:18:30.680708 UTC. Sep 10 05:18:30.684341 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 10 05:18:30.685476 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 05:18:30.686691 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 05:18:30.686723 systemd[1]: Reached target paths.target - Path Units. Sep 10 05:18:30.687630 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 05:18:30.688818 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 05:18:30.690000 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 05:18:30.691547 systemd[1]: Reached target timers.target - Timer Units. Sep 10 05:18:30.693157 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 05:18:30.698078 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 05:18:30.722526 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 05:18:30.724823 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 05:18:30.726571 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 10 05:18:30.787597 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 05:18:30.789373 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 05:18:30.791292 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 05:18:30.814792 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 05:18:30.815993 systemd[1]: Reached target basic.target - Basic System. Sep 10 05:18:30.817182 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 05:18:30.817325 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 05:18:30.821587 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 05:18:30.823703 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 05:18:30.827704 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 05:18:30.839238 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
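systemd-timesyncd's first NTP sample from 10.0.0.1 steps the system clock, which is why systemd-resolved logs "Clock change detected" and the journal timestamps jump from 05:18:29.6x to 05:18:30.68. A rough sketch of estimating that step from the two surrounding journal timestamps; it only gives an upper bound, since some wall time elapses between the two entries, and the timestamp format is assumed from the lines above:

from datetime import datetime

FMT = "%b %d %H:%M:%S.%f"

def apparent_step(before: str, after: str) -> float:
    """Seconds between two journal timestamps like 'Sep 10 05:18:29.609658'."""
    t0 = datetime.strptime(before, FMT)
    t1 = datetime.strptime(after, FMT)
    return (t1 - t0).total_seconds()

# Last entry before the step vs. first entry after it (taken from this log):
print(apparent_step("Sep 10 05:18:29.609658", "Sep 10 05:18:30.680755"))
# ~1.07 s, an upper bound on the actual clock adjustment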
Sep 10 05:18:30.841183 kernel: kvm_amd: TSC scaling supported Sep 10 05:18:30.841218 kernel: kvm_amd: Nested Virtualization enabled Sep 10 05:18:30.841231 kernel: kvm_amd: Nested Paging enabled Sep 10 05:18:30.842945 kernel: kvm_amd: LBR virtualization supported Sep 10 05:18:30.842968 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 10 05:18:30.842981 kernel: kvm_amd: Virtual GIF supported Sep 10 05:18:30.848673 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 05:18:30.849762 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 05:18:30.850991 jq[1552]: false Sep 10 05:18:30.851852 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 10 05:18:30.854818 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 05:18:30.857064 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 05:18:30.864008 oslogin_cache_refresh[1554]: Refreshing passwd entry cache Sep 10 05:18:30.861890 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 05:18:30.867781 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing passwd entry cache Sep 10 05:18:30.864264 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 05:18:30.870623 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 05:18:30.872456 extend-filesystems[1553]: Found /dev/vda6 Sep 10 05:18:30.874945 oslogin_cache_refresh[1554]: Failure getting users, quitting Sep 10 05:18:30.876594 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting users, quitting Sep 10 05:18:30.876594 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 10 05:18:30.876594 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing group entry cache Sep 10 05:18:30.873609 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 05:18:30.874968 oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 10 05:18:30.875531 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 05:18:30.875027 oslogin_cache_refresh[1554]: Refreshing group entry cache Sep 10 05:18:30.875993 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 05:18:30.877020 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 05:18:30.881773 extend-filesystems[1553]: Found /dev/vda9 Sep 10 05:18:30.883602 extend-filesystems[1553]: Checking size of /dev/vda9 Sep 10 05:18:30.885728 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting groups, quitting Sep 10 05:18:30.885728 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 10 05:18:30.884281 oslogin_cache_refresh[1554]: Failure getting groups, quitting Sep 10 05:18:30.884494 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 05:18:30.884292 oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Sep 10 05:18:30.889982 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 05:18:30.891516 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 05:18:30.896980 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 05:18:30.897475 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 10 05:18:30.897735 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 10 05:18:30.980966 jq[1572]: true Sep 10 05:18:30.916041 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 05:18:30.916344 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 05:18:30.998789 update_engine[1570]: I20250910 05:18:30.998397 1570 main.cc:92] Flatcar Update Engine starting Sep 10 05:18:31.002234 jq[1582]: true Sep 10 05:18:31.002627 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 05:18:31.002933 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 05:18:31.005875 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 05:18:31.012629 extend-filesystems[1553]: Resized partition /dev/vda9 Sep 10 05:18:31.024570 kernel: EDAC MC: Ver: 3.0.0 Sep 10 05:18:31.024603 extend-filesystems[1594]: resize2fs 1.47.3 (8-Jul-2025) Sep 10 05:18:31.025783 tar[1577]: linux-amd64/helm Sep 10 05:18:31.031520 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 05:18:31.052507 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 05:18:31.079933 extend-filesystems[1594]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 05:18:31.079933 extend-filesystems[1594]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 05:18:31.079933 extend-filesystems[1594]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 05:18:31.083371 extend-filesystems[1553]: Resized filesystem in /dev/vda9 Sep 10 05:18:31.082356 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 05:18:31.083680 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 05:18:31.090469 dbus-daemon[1550]: [system] SELinux support is enabled Sep 10 05:18:31.095052 systemd-logind[1563]: Watching system buttons on /dev/input/event2 (Power Button) Sep 10 05:18:31.095083 systemd-logind[1563]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 10 05:18:31.096053 systemd-logind[1563]: New seat seat0. Sep 10 05:18:31.096706 update_engine[1570]: I20250910 05:18:31.096654 1570 update_check_scheduler.cc:74] Next update check in 6m13s Sep 10 05:18:31.147110 bash[1618]: Updated "/home/core/.ssh/authorized_keys" Sep 10 05:18:31.195388 sshd_keygen[1584]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 05:18:31.209359 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 05:18:31.213046 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 05:18:31.214607 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 05:18:31.216138 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 05:18:31.230540 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
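The on-line resize reported by extend-filesystems/resize2fs above grows /dev/vda9 from 553472 to 1864699 blocks of 4 KiB. Converting those block counts to bytes makes the numbers easier to read; a small sketch of the arithmetic, assuming nothing beyond the 4k block size stated in the log:

BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs output above

def blocks_to_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

old, new = 553_472, 1_864_699
print(f"before: {blocks_to_gib(old):.2f} GiB")  # ~2.11 GiB
print(f"after:  {blocks_to_gib(new):.2f} GiB")  # ~7.11 GiB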
Sep 10 05:18:31.236553 dbus-daemon[1550]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 10 05:18:31.240130 systemd[1]: Started update-engine.service - Update Engine. Sep 10 05:18:31.244872 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 05:18:31.246238 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 10 05:18:31.246498 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 05:18:31.246677 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 05:18:31.248037 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 05:18:31.248142 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 05:18:31.250749 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 05:18:31.265020 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 05:18:31.265313 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 05:18:31.267986 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 05:18:31.294208 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 05:18:31.296080 locksmithd[1637]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 05:18:31.364453 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 05:18:31.370373 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 10 05:18:31.371691 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 05:18:31.460572 tar[1577]: linux-amd64/LICENSE Sep 10 05:18:31.460703 tar[1577]: linux-amd64/README.md Sep 10 05:18:31.503093 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Sep 10 05:18:31.507347 containerd[1583]: time="2025-09-10T05:18:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 10 05:18:31.508357 containerd[1583]: time="2025-09-10T05:18:31.508291572Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 10 05:18:31.523300 containerd[1583]: time="2025-09-10T05:18:31.523225779Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="68.739µs" Sep 10 05:18:31.523300 containerd[1583]: time="2025-09-10T05:18:31.523281092Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 10 05:18:31.523300 containerd[1583]: time="2025-09-10T05:18:31.523306620Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 10 05:18:31.523738 containerd[1583]: time="2025-09-10T05:18:31.523703845Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 10 05:18:31.523738 containerd[1583]: time="2025-09-10T05:18:31.523726989Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 10 05:18:31.523780 containerd[1583]: time="2025-09-10T05:18:31.523758047Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 05:18:31.523933 containerd[1583]: time="2025-09-10T05:18:31.523902167Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 05:18:31.523933 containerd[1583]: time="2025-09-10T05:18:31.523918327Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 05:18:31.524256 containerd[1583]: time="2025-09-10T05:18:31.524220254Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 05:18:31.524256 containerd[1583]: time="2025-09-10T05:18:31.524239269Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 05:18:31.524256 containerd[1583]: time="2025-09-10T05:18:31.524250921Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 05:18:31.524324 containerd[1583]: time="2025-09-10T05:18:31.524259618Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 10 05:18:31.524401 containerd[1583]: time="2025-09-10T05:18:31.524370746Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 10 05:18:31.524680 containerd[1583]: time="2025-09-10T05:18:31.524644800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 05:18:31.524714 containerd[1583]: time="2025-09-10T05:18:31.524683743Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 05:18:31.524714 containerd[1583]: time="2025-09-10T05:18:31.524694884Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 10 05:18:31.524754 containerd[1583]: time="2025-09-10T05:18:31.524732605Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 10 05:18:31.525173 containerd[1583]: time="2025-09-10T05:18:31.525140590Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 10 05:18:31.525250 containerd[1583]: time="2025-09-10T05:18:31.525227473Z" level=info msg="metadata content store policy set" policy=shared Sep 10 05:18:31.531369 containerd[1583]: time="2025-09-10T05:18:31.531311202Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 10 05:18:31.531502 containerd[1583]: time="2025-09-10T05:18:31.531387124Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 10 05:18:31.531502 containerd[1583]: time="2025-09-10T05:18:31.531408003Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 10 05:18:31.531502 containerd[1583]: time="2025-09-10T05:18:31.531421288Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 10 05:18:31.531502 containerd[1583]: time="2025-09-10T05:18:31.531436887Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 10 05:18:31.531502 containerd[1583]: time="2025-09-10T05:18:31.531453158Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 10 05:18:31.531502 containerd[1583]: time="2025-09-10T05:18:31.531470090Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 10 05:18:31.531502 containerd[1583]: time="2025-09-10T05:18:31.531498814Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 10 05:18:31.531645 containerd[1583]: time="2025-09-10T05:18:31.531511107Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 10 05:18:31.531645 containerd[1583]: time="2025-09-10T05:18:31.531534701Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 10 05:18:31.531645 containerd[1583]: time="2025-09-10T05:18:31.531543998Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 10 05:18:31.531645 containerd[1583]: time="2025-09-10T05:18:31.531557594Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 10 05:18:31.531741 containerd[1583]: time="2025-09-10T05:18:31.531724477Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 10 05:18:31.531768 containerd[1583]: time="2025-09-10T05:18:31.531750786Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 10 05:18:31.531797 containerd[1583]: time="2025-09-10T05:18:31.531767097Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 10 05:18:31.531797 containerd[1583]: time="2025-09-10T05:18:31.531785271Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 10 05:18:31.531834 containerd[1583]: time="2025-09-10T05:18:31.531812021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 10 05:18:31.531834 containerd[1583]: time="2025-09-10T05:18:31.531823883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 10 05:18:31.531878 containerd[1583]: time="2025-09-10T05:18:31.531836647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 10 05:18:31.531878 containerd[1583]: time="2025-09-10T05:18:31.531847938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 10 05:18:31.531878 containerd[1583]: time="2025-09-10T05:18:31.531859280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 10 05:18:31.531878 containerd[1583]: time="2025-09-10T05:18:31.531869869Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 10 05:18:31.531957 containerd[1583]: time="2025-09-10T05:18:31.531893263Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 10 05:18:31.532035 containerd[1583]: time="2025-09-10T05:18:31.532002508Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 10 05:18:31.532035 containerd[1583]: time="2025-09-10T05:18:31.532028497Z" level=info msg="Start snapshots syncer" Sep 10 05:18:31.532086 containerd[1583]: time="2025-09-10T05:18:31.532056199Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 10 05:18:31.532397 containerd[1583]: time="2025-09-10T05:18:31.532346233Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 10 05:18:31.533499 containerd[1583]: time="2025-09-10T05:18:31.532404502Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 10 05:18:31.533499 containerd[1583]: time="2025-09-10T05:18:31.532506944Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 10 05:18:31.533499 containerd[1583]: time="2025-09-10T05:18:31.532600410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 10 05:18:31.533499 containerd[1583]: time="2025-09-10T05:18:31.532618864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 10 05:18:31.533499 containerd[1583]: time="2025-09-10T05:18:31.532630606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 10 05:18:31.533499 containerd[1583]: time="2025-09-10T05:18:31.532642438Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 10 05:18:31.533499 containerd[1583]: time="2025-09-10T05:18:31.532653509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 10 05:18:31.533499 containerd[1583]: time="2025-09-10T05:18:31.532663698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 10 05:18:31.533499 containerd[1583]: time="2025-09-10T05:18:31.532679167Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 10 05:18:31.533499 containerd[1583]: time="2025-09-10T05:18:31.532782812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 10 05:18:31.533499 containerd[1583]: 
time="2025-09-10T05:18:31.532793792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 10 05:18:31.533499 containerd[1583]: time="2025-09-10T05:18:31.532803871Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 10 05:18:31.533499 containerd[1583]: time="2025-09-10T05:18:31.532826604Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 05:18:31.533499 containerd[1583]: time="2025-09-10T05:18:31.532848354Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 05:18:31.533759 containerd[1583]: time="2025-09-10T05:18:31.532857692Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 05:18:31.533759 containerd[1583]: time="2025-09-10T05:18:31.532868412Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 05:18:31.533759 containerd[1583]: time="2025-09-10T05:18:31.532879513Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 10 05:18:31.533759 containerd[1583]: time="2025-09-10T05:18:31.532889542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 10 05:18:31.533759 containerd[1583]: time="2025-09-10T05:18:31.532900582Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 10 05:18:31.533759 containerd[1583]: time="2025-09-10T05:18:31.532926822Z" level=info msg="runtime interface created" Sep 10 05:18:31.533759 containerd[1583]: time="2025-09-10T05:18:31.532932292Z" level=info msg="created NRI interface" Sep 10 05:18:31.533759 containerd[1583]: time="2025-09-10T05:18:31.533021379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 10 05:18:31.533759 containerd[1583]: time="2025-09-10T05:18:31.533032330Z" level=info msg="Connect containerd service" Sep 10 05:18:31.533759 containerd[1583]: time="2025-09-10T05:18:31.533055593Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 05:18:31.535814 containerd[1583]: time="2025-09-10T05:18:31.535778019Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 05:18:31.653583 containerd[1583]: time="2025-09-10T05:18:31.653450456Z" level=info msg="Start subscribing containerd event" Sep 10 05:18:31.653583 containerd[1583]: time="2025-09-10T05:18:31.653530567Z" level=info msg="Start recovering state" Sep 10 05:18:31.653726 containerd[1583]: time="2025-09-10T05:18:31.653689936Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 05:18:31.653765 containerd[1583]: time="2025-09-10T05:18:31.653701317Z" level=info msg="Start event monitor" Sep 10 05:18:31.653787 containerd[1583]: time="2025-09-10T05:18:31.653770587Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 10 05:18:31.653808 containerd[1583]: time="2025-09-10T05:18:31.653789102Z" level=info msg="Start cni network conf syncer for default" Sep 10 05:18:31.653808 containerd[1583]: time="2025-09-10T05:18:31.653802216Z" level=info msg="Start streaming server" Sep 10 05:18:31.654085 containerd[1583]: time="2025-09-10T05:18:31.653823266Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 10 05:18:31.654137 containerd[1583]: time="2025-09-10T05:18:31.654083273Z" level=info msg="runtime interface starting up..." Sep 10 05:18:31.654137 containerd[1583]: time="2025-09-10T05:18:31.654103511Z" level=info msg="starting plugins..." Sep 10 05:18:31.654137 containerd[1583]: time="2025-09-10T05:18:31.654120754Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 10 05:18:31.654318 containerd[1583]: time="2025-09-10T05:18:31.654293748Z" level=info msg="containerd successfully booted in 0.147524s" Sep 10 05:18:31.654459 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 05:18:32.298716 systemd-networkd[1509]: eth0: Gained IPv6LL Sep 10 05:18:32.301746 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 05:18:32.303624 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 05:18:32.306292 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 05:18:32.308787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 05:18:32.310928 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 05:18:32.356496 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 05:18:32.358078 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 05:18:32.358362 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 05:18:32.360879 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 05:18:33.698335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 05:18:33.700021 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 05:18:33.701368 systemd[1]: Startup finished in 3.165s (kernel) + 6.997s (initrd) + 5.061s (userspace) = 15.224s. Sep 10 05:18:33.716912 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 05:18:34.276773 kubelet[1692]: E0910 05:18:34.276695 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 05:18:34.280807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 05:18:34.281003 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 05:18:34.281399 systemd[1]: kubelet.service: Consumed 1.767s CPU time, 265.1M memory peak. Sep 10 05:18:34.896767 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 05:18:34.897955 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:35902.service - OpenSSH per-connection server daemon (10.0.0.1:35902). 
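At the start of the containerd block above, the daemon warned that /usr/share/containerd/config.toml still uses the version-2 layout with an unknown subreaper key, and suggested running `containerd config migrate`. A minimal sketch for spotting those same two issues offline, assuming a Python 3.11+ environment (for tomllib) and read access to that path; the key list checked is only what this log flagged, not an exhaustive schema:

import tomllib

CONFIG = "/usr/share/containerd/config.toml"  # path taken from the log above
STALE_KEYS = {"subreaper"}                    # key containerd said it would ignore

with open(CONFIG, "rb") as f:
    cfg = tomllib.load(f)

if cfg.get("version") == 2:
    print("version-2 config detected; consider `containerd config migrate`")

for key in STALE_KEYS & cfg.keys():
    print(f"unknown top-level key that containerd will ignore: {key}")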
Sep 10 05:18:34.966822 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 35902 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:18:34.968593 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:18:34.974902 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 05:18:34.976012 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 05:18:34.981967 systemd-logind[1563]: New session 1 of user core. Sep 10 05:18:35.004220 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 05:18:35.007108 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 05:18:35.022692 (systemd)[1710]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 05:18:35.024797 systemd-logind[1563]: New session c1 of user core. Sep 10 05:18:35.172682 systemd[1710]: Queued start job for default target default.target. Sep 10 05:18:35.191677 systemd[1710]: Created slice app.slice - User Application Slice. Sep 10 05:18:35.191701 systemd[1710]: Reached target paths.target - Paths. Sep 10 05:18:35.191739 systemd[1710]: Reached target timers.target - Timers. Sep 10 05:18:35.193170 systemd[1710]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 05:18:35.204272 systemd[1710]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 05:18:35.204393 systemd[1710]: Reached target sockets.target - Sockets. Sep 10 05:18:35.204433 systemd[1710]: Reached target basic.target - Basic System. Sep 10 05:18:35.204478 systemd[1710]: Reached target default.target - Main User Target. Sep 10 05:18:35.204529 systemd[1710]: Startup finished in 173ms. Sep 10 05:18:35.204790 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 05:18:35.206297 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 05:18:35.267554 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:35918.service - OpenSSH per-connection server daemon (10.0.0.1:35918). Sep 10 05:18:35.308425 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 35918 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:18:35.309633 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:18:35.313763 systemd-logind[1563]: New session 2 of user core. Sep 10 05:18:35.327596 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 05:18:35.379852 sshd[1724]: Connection closed by 10.0.0.1 port 35918 Sep 10 05:18:35.380176 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Sep 10 05:18:35.389006 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:35918.service: Deactivated successfully. Sep 10 05:18:35.390747 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 05:18:35.391458 systemd-logind[1563]: Session 2 logged out. Waiting for processes to exit. Sep 10 05:18:35.394013 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:35920.service - OpenSSH per-connection server daemon (10.0.0.1:35920). Sep 10 05:18:35.394586 systemd-logind[1563]: Removed session 2. Sep 10 05:18:35.450667 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 35920 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:18:35.451773 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:18:35.455936 systemd-logind[1563]: New session 3 of user core. 
Sep 10 05:18:35.466613 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 05:18:35.515090 sshd[1733]: Connection closed by 10.0.0.1 port 35920 Sep 10 05:18:35.515459 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Sep 10 05:18:35.529953 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:35920.service: Deactivated successfully. Sep 10 05:18:35.531506 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 05:18:35.532282 systemd-logind[1563]: Session 3 logged out. Waiting for processes to exit. Sep 10 05:18:35.534706 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:35926.service - OpenSSH per-connection server daemon (10.0.0.1:35926). Sep 10 05:18:35.535429 systemd-logind[1563]: Removed session 3. Sep 10 05:18:35.597776 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 35926 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:18:35.598957 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:18:35.603236 systemd-logind[1563]: New session 4 of user core. Sep 10 05:18:35.614600 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 05:18:35.667734 sshd[1743]: Connection closed by 10.0.0.1 port 35926 Sep 10 05:18:35.668134 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Sep 10 05:18:35.684793 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:35926.service: Deactivated successfully. Sep 10 05:18:35.686429 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 05:18:35.687253 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit. Sep 10 05:18:35.689570 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:35940.service - OpenSSH per-connection server daemon (10.0.0.1:35940). Sep 10 05:18:35.690146 systemd-logind[1563]: Removed session 4. Sep 10 05:18:35.743315 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 35940 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:18:35.744918 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:18:35.749331 systemd-logind[1563]: New session 5 of user core. Sep 10 05:18:35.759772 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 10 05:18:35.818126 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 05:18:35.818429 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 05:18:35.841960 sudo[1754]: pam_unix(sudo:session): session closed for user root Sep 10 05:18:35.843581 sshd[1753]: Connection closed by 10.0.0.1 port 35940 Sep 10 05:18:35.844017 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Sep 10 05:18:35.862019 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:35940.service: Deactivated successfully. Sep 10 05:18:35.863641 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 05:18:35.864466 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit. Sep 10 05:18:35.866788 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:35954.service - OpenSSH per-connection server daemon (10.0.0.1:35954). Sep 10 05:18:35.867554 systemd-logind[1563]: Removed session 5. 
Sep 10 05:18:35.919982 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 35954 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:18:35.921197 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:18:35.925767 systemd-logind[1563]: New session 6 of user core. Sep 10 05:18:35.937643 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 10 05:18:35.990478 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 05:18:35.990845 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 05:18:35.999220 sudo[1765]: pam_unix(sudo:session): session closed for user root Sep 10 05:18:36.005705 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 10 05:18:36.006009 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 05:18:36.016328 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 05:18:36.065430 augenrules[1787]: No rules Sep 10 05:18:36.066953 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 05:18:36.067224 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 05:18:36.068290 sudo[1764]: pam_unix(sudo:session): session closed for user root Sep 10 05:18:36.069654 sshd[1763]: Connection closed by 10.0.0.1 port 35954 Sep 10 05:18:36.069952 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Sep 10 05:18:36.077849 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:35954.service: Deactivated successfully. Sep 10 05:18:36.079437 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 05:18:36.080208 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit. Sep 10 05:18:36.082472 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:35962.service - OpenSSH per-connection server daemon (10.0.0.1:35962). Sep 10 05:18:36.083092 systemd-logind[1563]: Removed session 6. Sep 10 05:18:36.131206 sshd[1796]: Accepted publickey for core from 10.0.0.1 port 35962 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:18:36.132399 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:18:36.136580 systemd-logind[1563]: New session 7 of user core. Sep 10 05:18:36.151603 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 10 05:18:36.204000 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 05:18:36.204316 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 05:18:36.857506 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 10 05:18:36.876162 (dockerd)[1820]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 05:18:37.389139 dockerd[1820]: time="2025-09-10T05:18:37.389046526Z" level=info msg="Starting up" Sep 10 05:18:37.389896 dockerd[1820]: time="2025-09-10T05:18:37.389869299Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 10 05:18:37.411271 dockerd[1820]: time="2025-09-10T05:18:37.411217605Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 10 05:18:37.679296 dockerd[1820]: time="2025-09-10T05:18:37.679198415Z" level=info msg="Loading containers: start." Sep 10 05:18:37.689503 kernel: Initializing XFRM netlink socket Sep 10 05:18:37.944239 systemd-networkd[1509]: docker0: Link UP Sep 10 05:18:37.949361 dockerd[1820]: time="2025-09-10T05:18:37.949314008Z" level=info msg="Loading containers: done." Sep 10 05:18:37.965671 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1463481790-merged.mount: Deactivated successfully. Sep 10 05:18:37.967155 dockerd[1820]: time="2025-09-10T05:18:37.967095996Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 05:18:37.967237 dockerd[1820]: time="2025-09-10T05:18:37.967197145Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 10 05:18:37.967298 dockerd[1820]: time="2025-09-10T05:18:37.967283036Z" level=info msg="Initializing buildkit" Sep 10 05:18:37.998188 dockerd[1820]: time="2025-09-10T05:18:37.998110098Z" level=info msg="Completed buildkit initialization" Sep 10 05:18:38.005290 dockerd[1820]: time="2025-09-10T05:18:38.004783563Z" level=info msg="Daemon has completed initialization" Sep 10 05:18:38.005290 dockerd[1820]: time="2025-09-10T05:18:38.004852212Z" level=info msg="API listen on /run/docker.sock" Sep 10 05:18:38.005506 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 05:18:38.923340 containerd[1583]: time="2025-09-10T05:18:38.923287957Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 10 05:18:39.691191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1366405435.mount: Deactivated successfully. 
Sep 10 05:18:40.783636 containerd[1583]: time="2025-09-10T05:18:40.783558358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:40.784471 containerd[1583]: time="2025-09-10T05:18:40.784422118Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 10 05:18:40.785751 containerd[1583]: time="2025-09-10T05:18:40.785716665Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:40.791602 containerd[1583]: time="2025-09-10T05:18:40.788457135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:40.794138 containerd[1583]: time="2025-09-10T05:18:40.794081372Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.870745015s" Sep 10 05:18:40.794138 containerd[1583]: time="2025-09-10T05:18:40.794133540Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 10 05:18:40.795316 containerd[1583]: time="2025-09-10T05:18:40.795286973Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 10 05:18:42.122758 containerd[1583]: time="2025-09-10T05:18:42.122696209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:42.123611 containerd[1583]: time="2025-09-10T05:18:42.123542266Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 10 05:18:42.124874 containerd[1583]: time="2025-09-10T05:18:42.124822767Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:42.127509 containerd[1583]: time="2025-09-10T05:18:42.127451597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:42.128418 containerd[1583]: time="2025-09-10T05:18:42.128372564Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.333053531s" Sep 10 05:18:42.128418 containerd[1583]: time="2025-09-10T05:18:42.128404885Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 10 
05:18:42.128939 containerd[1583]: time="2025-09-10T05:18:42.128914240Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 10 05:18:43.546397 containerd[1583]: time="2025-09-10T05:18:43.546337825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:43.547095 containerd[1583]: time="2025-09-10T05:18:43.547036977Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 10 05:18:43.548238 containerd[1583]: time="2025-09-10T05:18:43.548204015Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:43.550756 containerd[1583]: time="2025-09-10T05:18:43.550727838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:43.551718 containerd[1583]: time="2025-09-10T05:18:43.551649126Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.422707505s" Sep 10 05:18:43.551718 containerd[1583]: time="2025-09-10T05:18:43.551695944Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 10 05:18:43.552303 containerd[1583]: time="2025-09-10T05:18:43.552247258Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 10 05:18:44.367828 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 05:18:44.369744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 05:18:44.682215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 05:18:44.693879 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 05:18:44.731454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3729579564.mount: Deactivated successfully. Sep 10 05:18:44.741534 kubelet[2118]: E0910 05:18:44.741456 2118 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 05:18:44.747914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 05:18:44.748110 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 05:18:44.748456 systemd[1]: kubelet.service: Consumed 330ms CPU time, 111.2M memory peak. 
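The image pulls above report both the bytes fetched and the wall time (for example 28113723 bytes of kube-apiserver in about 1.87 s). Dividing the two gives a rough effective pull rate; a sketch using the figures copied from these log lines, treating the reported image size as the amount actually transferred even though some layers may already have been cached:

pulls = {
    # image: (bytes reported, seconds reported), values copied from the log above
    "kube-apiserver:v1.31.13": (28_113_723, 1.870745015),
    "kube-controller-manager:v1.31.13": (26_351_311, 1.333053531),
    "kube-scheduler:v1.31.13": (20_422_395, 1.422707505),
}

for image, (size, secs) in pulls.items():
    rate = size / secs / 1_000_000  # MB/s
    print(f"{image}: {rate:.1f} MB/s")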
Sep 10 05:18:45.446147 containerd[1583]: time="2025-09-10T05:18:45.446066234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:45.446975 containerd[1583]: time="2025-09-10T05:18:45.446943139Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 10 05:18:45.448242 containerd[1583]: time="2025-09-10T05:18:45.448205045Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:45.450031 containerd[1583]: time="2025-09-10T05:18:45.449992247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:45.450596 containerd[1583]: time="2025-09-10T05:18:45.450547388Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.898264744s" Sep 10 05:18:45.450596 containerd[1583]: time="2025-09-10T05:18:45.450593304Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 10 05:18:45.451138 containerd[1583]: time="2025-09-10T05:18:45.451091989Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 10 05:18:46.062534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3193203284.mount: Deactivated successfully. 
Sep 10 05:18:46.702567 containerd[1583]: time="2025-09-10T05:18:46.702507841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:46.703303 containerd[1583]: time="2025-09-10T05:18:46.703263338Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 10 05:18:46.704480 containerd[1583]: time="2025-09-10T05:18:46.704429705Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:46.707155 containerd[1583]: time="2025-09-10T05:18:46.707101295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:46.708086 containerd[1583]: time="2025-09-10T05:18:46.708042470Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.25692374s" Sep 10 05:18:46.708086 containerd[1583]: time="2025-09-10T05:18:46.708074009Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 10 05:18:46.708668 containerd[1583]: time="2025-09-10T05:18:46.708637135Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 05:18:47.259002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190449561.mount: Deactivated successfully. 
Sep 10 05:18:47.264591 containerd[1583]: time="2025-09-10T05:18:47.264534602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 05:18:47.265290 containerd[1583]: time="2025-09-10T05:18:47.265254512Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 10 05:18:47.266433 containerd[1583]: time="2025-09-10T05:18:47.266399319Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 05:18:47.268441 containerd[1583]: time="2025-09-10T05:18:47.268413827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 05:18:47.269035 containerd[1583]: time="2025-09-10T05:18:47.268996129Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 560.324439ms" Sep 10 05:18:47.269070 containerd[1583]: time="2025-09-10T05:18:47.269032727Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 10 05:18:47.269566 containerd[1583]: time="2025-09-10T05:18:47.269525642Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 10 05:18:47.867760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2917340899.mount: Deactivated successfully. 
Sep 10 05:18:50.028261 containerd[1583]: time="2025-09-10T05:18:50.028182608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:50.029287 containerd[1583]: time="2025-09-10T05:18:50.029231916Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 10 05:18:50.030725 containerd[1583]: time="2025-09-10T05:18:50.030656177Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:50.033402 containerd[1583]: time="2025-09-10T05:18:50.033360028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:18:50.034200 containerd[1583]: time="2025-09-10T05:18:50.034167562Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.764608758s" Sep 10 05:18:50.034238 containerd[1583]: time="2025-09-10T05:18:50.034205273Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 10 05:18:52.738704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 05:18:52.738861 systemd[1]: kubelet.service: Consumed 330ms CPU time, 111.2M memory peak. Sep 10 05:18:52.740957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 05:18:52.764039 systemd[1]: Reload requested from client PID 2271 ('systemctl') (unit session-7.scope)... Sep 10 05:18:52.764053 systemd[1]: Reloading... Sep 10 05:18:52.846520 zram_generator::config[2317]: No configuration found. Sep 10 05:18:53.110896 systemd[1]: Reloading finished in 346 ms. Sep 10 05:18:53.181339 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 10 05:18:53.181451 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 10 05:18:53.181784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 05:18:53.181835 systemd[1]: kubelet.service: Consumed 152ms CPU time, 98.2M memory peak. Sep 10 05:18:53.183579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 05:18:53.363294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 05:18:53.368289 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 05:18:53.406382 kubelet[2362]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 05:18:53.406382 kubelet[2362]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
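The restarted kubelet logs a run of "Flag --... has been deprecated" warnings here and in the lines that follow, each pointing at moving the setting into the file named by --config. A small sketch that pulls those flag names out of journal text so they can be migrated in one pass; the input format is assumed to match the lines shown in this log, and the script name in the comment is only an example:

import re
import sys

DEPRECATED_RE = re.compile(r"Flag (--[\w-]+) has been deprecated")

def deprecated_flags(journal_text: str) -> set[str]:
    """Collect kubelet flags the log says should move to the config file."""
    return set(DEPRECATED_RE.findall(journal_text))

if __name__ == "__main__":
    text = sys.stdin.read()  # e.g. `journalctl -u kubelet | python3 list_deprecated.py`
    for flag in sorted(deprecated_flags(text)):
        print(flag)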
Sep 10 05:18:53.406382 kubelet[2362]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 05:18:53.406873 kubelet[2362]: I0910 05:18:53.406342 2362 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 05:18:53.727646 kubelet[2362]: I0910 05:18:53.727514 2362 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 05:18:53.727646 kubelet[2362]: I0910 05:18:53.727549 2362 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 05:18:53.727815 kubelet[2362]: I0910 05:18:53.727804 2362 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 05:18:53.747070 kubelet[2362]: E0910 05:18:53.747018 2362 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:18:53.752454 kubelet[2362]: I0910 05:18:53.752418 2362 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 05:18:53.760069 kubelet[2362]: I0910 05:18:53.760018 2362 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 05:18:53.766964 kubelet[2362]: I0910 05:18:53.766934 2362 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 05:18:53.767668 kubelet[2362]: I0910 05:18:53.767638 2362 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 05:18:53.767883 kubelet[2362]: I0910 05:18:53.767840 2362 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 05:18:53.768103 kubelet[2362]: I0910 05:18:53.767877 2362 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 05:18:53.768203 kubelet[2362]: I0910 05:18:53.768124 2362 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 05:18:53.768203 kubelet[2362]: I0910 05:18:53.768137 2362 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 05:18:53.768303 kubelet[2362]: I0910 05:18:53.768288 2362 state_mem.go:36] "Initialized new in-memory state store" Sep 10 05:18:53.770573 kubelet[2362]: I0910 05:18:53.770526 2362 kubelet.go:408] "Attempting to sync node with API server" Sep 10 05:18:53.770573 kubelet[2362]: I0910 05:18:53.770574 2362 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 05:18:53.770733 kubelet[2362]: I0910 05:18:53.770637 2362 kubelet.go:314] "Adding apiserver pod source" Sep 10 05:18:53.770733 kubelet[2362]: I0910 05:18:53.770674 2362 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 05:18:53.773661 kubelet[2362]: I0910 05:18:53.773624 2362 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 10 05:18:53.774504 kubelet[2362]: W0910 05:18:53.773993 2362 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Sep 10 05:18:53.774504 kubelet[2362]: E0910 05:18:53.774047 2362 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:18:53.774504 kubelet[2362]: I0910 05:18:53.774090 2362 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 05:18:53.774504 kubelet[2362]: W0910 05:18:53.774316 2362 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Sep 10 05:18:53.774504 kubelet[2362]: E0910 05:18:53.774358 2362 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:18:53.774670 kubelet[2362]: W0910 05:18:53.774590 2362 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 10 05:18:53.776564 kubelet[2362]: I0910 05:18:53.776529 2362 server.go:1274] "Started kubelet" Sep 10 05:18:53.777334 kubelet[2362]: I0910 05:18:53.776925 2362 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 05:18:53.780054 kubelet[2362]: I0910 05:18:53.780025 2362 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 05:18:53.780205 kubelet[2362]: I0910 05:18:53.780176 2362 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 05:18:53.782047 kubelet[2362]: I0910 05:18:53.782029 2362 server.go:449] "Adding debug handlers to kubelet server" Sep 10 05:18:53.782941 kubelet[2362]: I0910 05:18:53.782917 2362 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 05:18:53.783826 kubelet[2362]: I0910 05:18:53.783804 2362 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 05:18:53.784813 kubelet[2362]: E0910 05:18:53.783797 2362 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863d424b80e98c5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 05:18:53.776500933 +0000 UTC m=+0.404240055,LastTimestamp:2025-09-10 05:18:53.776500933 +0000 UTC m=+0.404240055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 05:18:53.785877 kubelet[2362]: I0910 05:18:53.785780 2362 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 05:18:53.786983 kubelet[2362]: I0910 05:18:53.786130 2362 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 
10 05:18:53.787122 kubelet[2362]: I0910 05:18:53.787048 2362 reconciler.go:26] "Reconciler: start to sync state" Sep 10 05:18:53.787122 kubelet[2362]: E0910 05:18:53.786286 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:18:53.787434 kubelet[2362]: E0910 05:18:53.787373 2362 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 05:18:53.788113 kubelet[2362]: E0910 05:18:53.788075 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms" Sep 10 05:18:53.788269 kubelet[2362]: W0910 05:18:53.788205 2362 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Sep 10 05:18:53.788359 kubelet[2362]: E0910 05:18:53.788277 2362 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:18:53.788830 kubelet[2362]: I0910 05:18:53.788797 2362 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 05:18:53.789911 kubelet[2362]: I0910 05:18:53.789883 2362 factory.go:221] Registration of the containerd container factory successfully Sep 10 05:18:53.789911 kubelet[2362]: I0910 05:18:53.789905 2362 factory.go:221] Registration of the systemd container factory successfully Sep 10 05:18:53.805101 kubelet[2362]: I0910 05:18:53.804895 2362 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 05:18:53.807464 kubelet[2362]: I0910 05:18:53.807142 2362 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 05:18:53.807464 kubelet[2362]: I0910 05:18:53.807158 2362 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 05:18:53.807464 kubelet[2362]: I0910 05:18:53.807175 2362 state_mem.go:36] "Initialized new in-memory state store" Sep 10 05:18:53.807464 kubelet[2362]: I0910 05:18:53.807252 2362 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 05:18:53.807464 kubelet[2362]: I0910 05:18:53.807329 2362 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 05:18:53.807464 kubelet[2362]: I0910 05:18:53.807365 2362 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 05:18:53.807464 kubelet[2362]: E0910 05:18:53.807435 2362 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 05:18:53.888092 kubelet[2362]: E0910 05:18:53.888044 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:18:53.908359 kubelet[2362]: E0910 05:18:53.908291 2362 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 05:18:53.988764 kubelet[2362]: E0910 05:18:53.988617 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:18:53.989330 kubelet[2362]: E0910 05:18:53.989249 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" Sep 10 05:18:54.089563 kubelet[2362]: E0910 05:18:54.089522 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:18:54.108765 kubelet[2362]: E0910 05:18:54.108712 2362 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 05:18:54.162628 kubelet[2362]: W0910 05:18:54.162560 2362 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Sep 10 05:18:54.162696 kubelet[2362]: E0910 05:18:54.162625 2362 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:18:54.163248 kubelet[2362]: I0910 05:18:54.163208 2362 policy_none.go:49] "None policy: Start" Sep 10 05:18:54.164057 kubelet[2362]: I0910 05:18:54.164027 2362 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 05:18:54.164057 kubelet[2362]: I0910 05:18:54.164056 2362 state_mem.go:35] "Initializing new in-memory state store" Sep 10 05:18:54.171376 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 10 05:18:54.189979 kubelet[2362]: E0910 05:18:54.189931 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:18:54.190724 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 05:18:54.193818 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 10 05:18:54.203497 kubelet[2362]: I0910 05:18:54.203446 2362 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 05:18:54.203800 kubelet[2362]: I0910 05:18:54.203758 2362 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 05:18:54.203842 kubelet[2362]: I0910 05:18:54.203776 2362 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 05:18:54.204228 kubelet[2362]: I0910 05:18:54.204112 2362 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 05:18:54.205123 kubelet[2362]: E0910 05:18:54.205102 2362 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 05:18:54.306233 kubelet[2362]: I0910 05:18:54.306185 2362 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 05:18:54.306687 kubelet[2362]: E0910 05:18:54.306653 2362 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Sep 10 05:18:54.390351 kubelet[2362]: E0910 05:18:54.390310 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" Sep 10 05:18:54.508078 kubelet[2362]: I0910 05:18:54.508024 2362 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 05:18:54.508542 kubelet[2362]: E0910 05:18:54.508387 2362 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Sep 10 05:18:54.518190 systemd[1]: Created slice kubepods-burstable-pod0fd294df9ca2e14125e52beb3920e076.slice - libcontainer container kubepods-burstable-pod0fd294df9ca2e14125e52beb3920e076.slice. Sep 10 05:18:54.532808 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 10 05:18:54.556916 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. 
Sep 10 05:18:54.592291 kubelet[2362]: I0910 05:18:54.592264 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fd294df9ca2e14125e52beb3920e076-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0fd294df9ca2e14125e52beb3920e076\") " pod="kube-system/kube-apiserver-localhost" Sep 10 05:18:54.592291 kubelet[2362]: I0910 05:18:54.592293 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fd294df9ca2e14125e52beb3920e076-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0fd294df9ca2e14125e52beb3920e076\") " pod="kube-system/kube-apiserver-localhost" Sep 10 05:18:54.592389 kubelet[2362]: I0910 05:18:54.592313 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fd294df9ca2e14125e52beb3920e076-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0fd294df9ca2e14125e52beb3920e076\") " pod="kube-system/kube-apiserver-localhost" Sep 10 05:18:54.592389 kubelet[2362]: I0910 05:18:54.592332 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:18:54.592445 kubelet[2362]: I0910 05:18:54.592408 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 10 05:18:54.592471 kubelet[2362]: I0910 05:18:54.592455 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:18:54.592514 kubelet[2362]: I0910 05:18:54.592502 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:18:54.592549 kubelet[2362]: I0910 05:18:54.592521 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:18:54.592582 kubelet[2362]: I0910 05:18:54.592551 2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 10 05:18:54.649758 kubelet[2362]: W0910 05:18:54.649702 2362 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Sep 10 05:18:54.649839 kubelet[2362]: E0910 05:18:54.649768 2362 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:18:54.832079 kubelet[2362]: E0910 05:18:54.831911 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:18:54.832885 containerd[1583]: time="2025-09-10T05:18:54.832766077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0fd294df9ca2e14125e52beb3920e076,Namespace:kube-system,Attempt:0,}" Sep 10 05:18:54.855181 kubelet[2362]: E0910 05:18:54.855118 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:18:54.855760 containerd[1583]: time="2025-09-10T05:18:54.855696951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 10 05:18:54.860131 kubelet[2362]: E0910 05:18:54.860103 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:18:54.860678 containerd[1583]: time="2025-09-10T05:18:54.860647385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 10 05:18:54.910345 kubelet[2362]: I0910 05:18:54.910312 2362 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 05:18:54.910772 kubelet[2362]: E0910 05:18:54.910730 2362 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Sep 10 05:18:55.007693 kubelet[2362]: W0910 05:18:55.007611 2362 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Sep 10 05:18:55.007693 kubelet[2362]: E0910 05:18:55.007687 2362 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:18:55.141800 containerd[1583]: time="2025-09-10T05:18:55.141671836Z" level=info msg="connecting to shim d77397207642a8227aa220c7962b5ea289a7365a4411a519022fcd28eb36a445" 
address="unix:///run/containerd/s/3575af024fd54349b5e7667885a3aa3c1a6504ba4a4dc0715628950893bb50c4" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:18:55.145754 containerd[1583]: time="2025-09-10T05:18:55.145700962Z" level=info msg="connecting to shim 87959f9cb1ce5a5ad3a7c01e7a8e7f77e631cb4aaa023df33a95be7d2a1feefb" address="unix:///run/containerd/s/dbdef7641037b019bdac401dc63bbde593fec94ec5ebbd0c90f2174a34d83088" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:18:55.145754 containerd[1583]: time="2025-09-10T05:18:55.145742830Z" level=info msg="connecting to shim 5ae3134d404cb07815caf7c7e95cecf6c43d8b477f0e6e8e35296c6f016fda2d" address="unix:///run/containerd/s/f3f09d125a5f8a8c38cfc7ab869923ddaf5c2f1f9c152645ac20cb04201bd55c" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:18:55.172622 systemd[1]: Started cri-containerd-d77397207642a8227aa220c7962b5ea289a7365a4411a519022fcd28eb36a445.scope - libcontainer container d77397207642a8227aa220c7962b5ea289a7365a4411a519022fcd28eb36a445. Sep 10 05:18:55.178435 systemd[1]: Started cri-containerd-5ae3134d404cb07815caf7c7e95cecf6c43d8b477f0e6e8e35296c6f016fda2d.scope - libcontainer container 5ae3134d404cb07815caf7c7e95cecf6c43d8b477f0e6e8e35296c6f016fda2d. Sep 10 05:18:55.180551 systemd[1]: Started cri-containerd-87959f9cb1ce5a5ad3a7c01e7a8e7f77e631cb4aaa023df33a95be7d2a1feefb.scope - libcontainer container 87959f9cb1ce5a5ad3a7c01e7a8e7f77e631cb4aaa023df33a95be7d2a1feefb. Sep 10 05:18:55.192161 kubelet[2362]: E0910 05:18:55.192111 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="1.6s" Sep 10 05:18:55.231647 containerd[1583]: time="2025-09-10T05:18:55.231609060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0fd294df9ca2e14125e52beb3920e076,Namespace:kube-system,Attempt:0,} returns sandbox id \"87959f9cb1ce5a5ad3a7c01e7a8e7f77e631cb4aaa023df33a95be7d2a1feefb\"" Sep 10 05:18:55.232969 kubelet[2362]: E0910 05:18:55.232944 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:18:55.235322 containerd[1583]: time="2025-09-10T05:18:55.235269875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d77397207642a8227aa220c7962b5ea289a7365a4411a519022fcd28eb36a445\"" Sep 10 05:18:55.235322 containerd[1583]: time="2025-09-10T05:18:55.235297988Z" level=info msg="CreateContainer within sandbox \"87959f9cb1ce5a5ad3a7c01e7a8e7f77e631cb4aaa023df33a95be7d2a1feefb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 05:18:55.235761 containerd[1583]: time="2025-09-10T05:18:55.235716513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ae3134d404cb07815caf7c7e95cecf6c43d8b477f0e6e8e35296c6f016fda2d\"" Sep 10 05:18:55.236143 kubelet[2362]: E0910 05:18:55.236118 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:18:55.236624 kubelet[2362]: E0910 05:18:55.236594 2362 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:18:55.238054 containerd[1583]: time="2025-09-10T05:18:55.238031144Z" level=info msg="CreateContainer within sandbox \"d77397207642a8227aa220c7962b5ea289a7365a4411a519022fcd28eb36a445\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 05:18:55.238907 kubelet[2362]: W0910 05:18:55.238172 2362 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Sep 10 05:18:55.238907 kubelet[2362]: E0910 05:18:55.238236 2362 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 10 05:18:55.242344 containerd[1583]: time="2025-09-10T05:18:55.242309107Z" level=info msg="CreateContainer within sandbox \"5ae3134d404cb07815caf7c7e95cecf6c43d8b477f0e6e8e35296c6f016fda2d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 05:18:55.249683 containerd[1583]: time="2025-09-10T05:18:55.249649783Z" level=info msg="Container 44a229677f67ed38aad49c7882db8a4beabc8b50b0e2890725113072dd16a47f: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:18:55.253198 containerd[1583]: time="2025-09-10T05:18:55.253164334Z" level=info msg="Container f42393818f06226b7cb32b49a8fa2d59e18f264db739a6a738b23720f6354ba7: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:18:55.255928 containerd[1583]: time="2025-09-10T05:18:55.255888593Z" level=info msg="Container 31ab03023d7cfe1f12283504964882a6d93b9f5aabdda4897a55c099bbe895d6: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:18:55.261663 containerd[1583]: time="2025-09-10T05:18:55.261625802Z" level=info msg="CreateContainer within sandbox \"87959f9cb1ce5a5ad3a7c01e7a8e7f77e631cb4aaa023df33a95be7d2a1feefb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"44a229677f67ed38aad49c7882db8a4beabc8b50b0e2890725113072dd16a47f\"" Sep 10 05:18:55.262208 containerd[1583]: time="2025-09-10T05:18:55.262182306Z" level=info msg="StartContainer for \"44a229677f67ed38aad49c7882db8a4beabc8b50b0e2890725113072dd16a47f\"" Sep 10 05:18:55.264519 containerd[1583]: time="2025-09-10T05:18:55.263655238Z" level=info msg="connecting to shim 44a229677f67ed38aad49c7882db8a4beabc8b50b0e2890725113072dd16a47f" address="unix:///run/containerd/s/dbdef7641037b019bdac401dc63bbde593fec94ec5ebbd0c90f2174a34d83088" protocol=ttrpc version=3 Sep 10 05:18:55.266401 containerd[1583]: time="2025-09-10T05:18:55.266301230Z" level=info msg="CreateContainer within sandbox \"d77397207642a8227aa220c7962b5ea289a7365a4411a519022fcd28eb36a445\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f42393818f06226b7cb32b49a8fa2d59e18f264db739a6a738b23720f6354ba7\"" Sep 10 05:18:55.267518 containerd[1583]: time="2025-09-10T05:18:55.267474641Z" level=info msg="StartContainer for \"f42393818f06226b7cb32b49a8fa2d59e18f264db739a6a738b23720f6354ba7\"" Sep 10 05:18:55.268009 containerd[1583]: time="2025-09-10T05:18:55.267973657Z" level=info msg="CreateContainer within sandbox 
\"5ae3134d404cb07815caf7c7e95cecf6c43d8b477f0e6e8e35296c6f016fda2d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"31ab03023d7cfe1f12283504964882a6d93b9f5aabdda4897a55c099bbe895d6\"" Sep 10 05:18:55.268674 containerd[1583]: time="2025-09-10T05:18:55.268637161Z" level=info msg="connecting to shim f42393818f06226b7cb32b49a8fa2d59e18f264db739a6a738b23720f6354ba7" address="unix:///run/containerd/s/3575af024fd54349b5e7667885a3aa3c1a6504ba4a4dc0715628950893bb50c4" protocol=ttrpc version=3 Sep 10 05:18:55.269061 containerd[1583]: time="2025-09-10T05:18:55.269028114Z" level=info msg="StartContainer for \"31ab03023d7cfe1f12283504964882a6d93b9f5aabdda4897a55c099bbe895d6\"" Sep 10 05:18:55.270977 containerd[1583]: time="2025-09-10T05:18:55.270944849Z" level=info msg="connecting to shim 31ab03023d7cfe1f12283504964882a6d93b9f5aabdda4897a55c099bbe895d6" address="unix:///run/containerd/s/f3f09d125a5f8a8c38cfc7ab869923ddaf5c2f1f9c152645ac20cb04201bd55c" protocol=ttrpc version=3 Sep 10 05:18:55.288636 systemd[1]: Started cri-containerd-44a229677f67ed38aad49c7882db8a4beabc8b50b0e2890725113072dd16a47f.scope - libcontainer container 44a229677f67ed38aad49c7882db8a4beabc8b50b0e2890725113072dd16a47f. Sep 10 05:18:55.292993 systemd[1]: Started cri-containerd-31ab03023d7cfe1f12283504964882a6d93b9f5aabdda4897a55c099bbe895d6.scope - libcontainer container 31ab03023d7cfe1f12283504964882a6d93b9f5aabdda4897a55c099bbe895d6. Sep 10 05:18:55.294814 systemd[1]: Started cri-containerd-f42393818f06226b7cb32b49a8fa2d59e18f264db739a6a738b23720f6354ba7.scope - libcontainer container f42393818f06226b7cb32b49a8fa2d59e18f264db739a6a738b23720f6354ba7. Sep 10 05:18:55.340855 containerd[1583]: time="2025-09-10T05:18:55.340666512Z" level=info msg="StartContainer for \"44a229677f67ed38aad49c7882db8a4beabc8b50b0e2890725113072dd16a47f\" returns successfully" Sep 10 05:18:55.360247 containerd[1583]: time="2025-09-10T05:18:55.360208019Z" level=info msg="StartContainer for \"f42393818f06226b7cb32b49a8fa2d59e18f264db739a6a738b23720f6354ba7\" returns successfully" Sep 10 05:18:55.365231 containerd[1583]: time="2025-09-10T05:18:55.365182167Z" level=info msg="StartContainer for \"31ab03023d7cfe1f12283504964882a6d93b9f5aabdda4897a55c099bbe895d6\" returns successfully" Sep 10 05:18:55.713533 kubelet[2362]: I0910 05:18:55.713258 2362 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 05:18:55.816626 kubelet[2362]: E0910 05:18:55.816471 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:18:55.819121 kubelet[2362]: E0910 05:18:55.819095 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:18:55.821324 kubelet[2362]: E0910 05:18:55.821298 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:18:56.342364 kubelet[2362]: I0910 05:18:56.342306 2362 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 05:18:56.342364 kubelet[2362]: E0910 05:18:56.342344 2362 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 10 05:18:56.350869 kubelet[2362]: E0910 05:18:56.350836 2362 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:18:56.450997 kubelet[2362]: E0910 05:18:56.450943 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:18:56.551778 kubelet[2362]: E0910 05:18:56.551740 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:18:56.652502 kubelet[2362]: E0910 05:18:56.652379 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:18:56.753209 kubelet[2362]: E0910 05:18:56.753167 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:18:56.823058 kubelet[2362]: E0910 05:18:56.823021 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:18:56.854221 kubelet[2362]: E0910 05:18:56.854200 2362 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 05:18:57.773599 kubelet[2362]: I0910 05:18:57.773565 2362 apiserver.go:52] "Watching apiserver" Sep 10 05:18:57.787610 kubelet[2362]: I0910 05:18:57.787575 2362 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 05:18:58.346821 systemd[1]: Reload requested from client PID 2638 ('systemctl') (unit session-7.scope)... Sep 10 05:18:58.346837 systemd[1]: Reloading... Sep 10 05:18:58.427543 zram_generator::config[2684]: No configuration found. Sep 10 05:18:58.656663 systemd[1]: Reloading finished in 309 ms. Sep 10 05:18:58.683119 kubelet[2362]: I0910 05:18:58.683075 2362 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 05:18:58.683265 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 05:18:58.699845 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 05:18:58.700162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 05:18:58.700211 systemd[1]: kubelet.service: Consumed 858ms CPU time, 132.1M memory peak. Sep 10 05:18:58.702167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 05:18:58.915348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 05:18:58.924888 (kubelet)[2726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 05:18:58.970646 kubelet[2726]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 05:18:58.970646 kubelet[2726]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 10 05:18:58.970646 kubelet[2726]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 05:18:58.971372 kubelet[2726]: I0910 05:18:58.970741 2726 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 05:18:58.979442 kubelet[2726]: I0910 05:18:58.979384 2726 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 05:18:58.979442 kubelet[2726]: I0910 05:18:58.979424 2726 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 05:18:58.979724 kubelet[2726]: I0910 05:18:58.979703 2726 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 05:18:58.982148 kubelet[2726]: I0910 05:18:58.982118 2726 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 10 05:18:58.983925 kubelet[2726]: I0910 05:18:58.983900 2726 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 05:18:58.989325 kubelet[2726]: I0910 05:18:58.989296 2726 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 05:18:58.998467 kubelet[2726]: I0910 05:18:58.998424 2726 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 05:18:58.998624 kubelet[2726]: I0910 05:18:58.998574 2726 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 05:18:58.998738 kubelet[2726]: I0910 05:18:58.998694 2726 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 05:18:58.998917 kubelet[2726]: I0910 05:18:58.998728 2726 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 05:18:58.999015 kubelet[2726]: I0910 05:18:58.998928 2726 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 05:18:58.999015 kubelet[2726]: I0910 05:18:58.998948 2726 container_manager_linux.go:300] "Creating device plugin manager" 
Sep 10 05:18:58.999015 kubelet[2726]: I0910 05:18:58.998982 2726 state_mem.go:36] "Initialized new in-memory state store" Sep 10 05:18:58.999115 kubelet[2726]: I0910 05:18:58.999102 2726 kubelet.go:408] "Attempting to sync node with API server" Sep 10 05:18:58.999142 kubelet[2726]: I0910 05:18:58.999115 2726 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 05:18:58.999169 kubelet[2726]: I0910 05:18:58.999155 2726 kubelet.go:314] "Adding apiserver pod source" Sep 10 05:18:58.999191 kubelet[2726]: I0910 05:18:58.999171 2726 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 05:18:59.000676 kubelet[2726]: I0910 05:18:59.000644 2726 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 10 05:18:59.001139 kubelet[2726]: I0910 05:18:59.001114 2726 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 05:18:59.001673 kubelet[2726]: I0910 05:18:59.001659 2726 server.go:1274] "Started kubelet" Sep 10 05:18:59.002512 kubelet[2726]: I0910 05:18:59.002107 2726 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 05:18:59.004030 kubelet[2726]: I0910 05:18:59.004011 2726 server.go:449] "Adding debug handlers to kubelet server" Sep 10 05:18:59.004721 kubelet[2726]: I0910 05:18:59.004564 2726 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 05:18:59.005543 kubelet[2726]: I0910 05:18:59.004992 2726 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 05:18:59.010461 kubelet[2726]: E0910 05:18:59.010310 2726 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 05:18:59.018516 kubelet[2726]: I0910 05:18:59.018255 2726 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 05:18:59.020505 kubelet[2726]: I0910 05:18:59.019787 2726 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 05:18:59.021980 kubelet[2726]: I0910 05:18:59.021948 2726 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 05:18:59.022427 kubelet[2726]: I0910 05:18:59.022398 2726 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 05:18:59.023123 kubelet[2726]: I0910 05:18:59.023105 2726 reconciler.go:26] "Reconciler: start to sync state" Sep 10 05:18:59.028643 kubelet[2726]: I0910 05:18:59.028558 2726 factory.go:221] Registration of the systemd container factory successfully Sep 10 05:18:59.030506 kubelet[2726]: I0910 05:18:59.028798 2726 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 05:18:59.040512 kubelet[2726]: I0910 05:18:59.038936 2726 factory.go:221] Registration of the containerd container factory successfully Sep 10 05:18:59.047540 kubelet[2726]: I0910 05:18:59.047502 2726 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 05:18:59.048729 kubelet[2726]: I0910 05:18:59.048691 2726 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 05:18:59.048729 kubelet[2726]: I0910 05:18:59.048726 2726 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 05:18:59.048794 kubelet[2726]: I0910 05:18:59.048752 2726 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 05:18:59.048843 kubelet[2726]: E0910 05:18:59.048801 2726 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 05:18:59.078353 kubelet[2726]: I0910 05:18:59.078321 2726 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 05:18:59.078353 kubelet[2726]: I0910 05:18:59.078338 2726 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 05:18:59.078353 kubelet[2726]: I0910 05:18:59.078355 2726 state_mem.go:36] "Initialized new in-memory state store" Sep 10 05:18:59.078578 kubelet[2726]: I0910 05:18:59.078517 2726 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 05:18:59.078578 kubelet[2726]: I0910 05:18:59.078528 2726 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 05:18:59.078578 kubelet[2726]: I0910 05:18:59.078545 2726 policy_none.go:49] "None policy: Start" Sep 10 05:18:59.079539 kubelet[2726]: I0910 05:18:59.079132 2726 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 05:18:59.079539 kubelet[2726]: I0910 05:18:59.079161 2726 state_mem.go:35] "Initializing new in-memory state store" Sep 10 05:18:59.079539 kubelet[2726]: I0910 05:18:59.079313 2726 state_mem.go:75] "Updated machine memory state" Sep 10 05:18:59.083519 kubelet[2726]: I0910 05:18:59.083476 2726 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 05:18:59.083710 kubelet[2726]: I0910 05:18:59.083674 2726 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 05:18:59.083759 kubelet[2726]: I0910 05:18:59.083696 2726 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 05:18:59.084163 kubelet[2726]: I0910 05:18:59.083899 2726 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 05:18:59.197821 kubelet[2726]: I0910 05:18:59.197679 2726 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 05:18:59.204005 kubelet[2726]: I0910 05:18:59.203965 2726 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 10 05:18:59.204181 kubelet[2726]: I0910 05:18:59.204051 2726 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 05:18:59.224304 kubelet[2726]: I0910 05:18:59.224245 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fd294df9ca2e14125e52beb3920e076-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0fd294df9ca2e14125e52beb3920e076\") " pod="kube-system/kube-apiserver-localhost" Sep 10 05:18:59.224304 kubelet[2726]: I0910 05:18:59.224280 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:18:59.224304 kubelet[2726]: I0910 05:18:59.224306 2726 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:18:59.224571 kubelet[2726]: I0910 05:18:59.224321 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 10 05:18:59.224571 kubelet[2726]: I0910 05:18:59.224336 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fd294df9ca2e14125e52beb3920e076-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0fd294df9ca2e14125e52beb3920e076\") " pod="kube-system/kube-apiserver-localhost" Sep 10 05:18:59.224571 kubelet[2726]: I0910 05:18:59.224351 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:18:59.224571 kubelet[2726]: I0910 05:18:59.224365 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:18:59.224571 kubelet[2726]: I0910 05:18:59.224380 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 05:18:59.224697 kubelet[2726]: I0910 05:18:59.224394 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fd294df9ca2e14125e52beb3920e076-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0fd294df9ca2e14125e52beb3920e076\") " pod="kube-system/kube-apiserver-localhost" Sep 10 05:18:59.463066 kubelet[2726]: E0910 05:18:59.462949 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:18:59.464080 kubelet[2726]: E0910 05:18:59.464022 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:18:59.464240 kubelet[2726]: E0910 05:18:59.464168 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:00.000061 kubelet[2726]: I0910 05:19:00.000012 2726 apiserver.go:52] "Watching apiserver" Sep 10 05:19:00.024522 kubelet[2726]: I0910 
05:19:00.022629 2726 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 05:19:00.065113 kubelet[2726]: E0910 05:19:00.065058 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:00.065113 kubelet[2726]: E0910 05:19:00.065106 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:00.074703 kubelet[2726]: E0910 05:19:00.074648 2726 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 05:19:00.074937 kubelet[2726]: E0910 05:19:00.074800 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:00.088463 kubelet[2726]: I0910 05:19:00.086891 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.086871019 podStartE2EDuration="1.086871019s" podCreationTimestamp="2025-09-10 05:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 05:19:00.086621271 +0000 UTC m=+1.155429303" watchObservedRunningTime="2025-09-10 05:19:00.086871019 +0000 UTC m=+1.155679061" Sep 10 05:19:00.106516 kubelet[2726]: I0910 05:19:00.106314 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.106292502 podStartE2EDuration="1.106292502s" podCreationTimestamp="2025-09-10 05:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 05:19:00.099468514 +0000 UTC m=+1.168276556" watchObservedRunningTime="2025-09-10 05:19:00.106292502 +0000 UTC m=+1.175100544" Sep 10 05:19:00.106516 kubelet[2726]: I0910 05:19:00.106455 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.106452552 podStartE2EDuration="1.106452552s" podCreationTimestamp="2025-09-10 05:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 05:19:00.106161526 +0000 UTC m=+1.174969568" watchObservedRunningTime="2025-09-10 05:19:00.106452552 +0000 UTC m=+1.175260594" Sep 10 05:19:01.066795 kubelet[2726]: E0910 05:19:01.066754 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:04.652827 kubelet[2726]: I0910 05:19:04.652773 2726 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 05:19:04.653335 containerd[1583]: time="2025-09-10T05:19:04.653204525Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 10 05:19:04.653789 kubelet[2726]: I0910 05:19:04.653382 2726 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 05:19:05.218630 systemd[1]: Created slice kubepods-besteffort-podb9c7a3be_d3de_44ee_b4f2_a3741705a5b1.slice - libcontainer container kubepods-besteffort-podb9c7a3be_d3de_44ee_b4f2_a3741705a5b1.slice. Sep 10 05:19:05.259007 kubelet[2726]: I0910 05:19:05.258944 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9c7a3be-d3de-44ee-b4f2-a3741705a5b1-lib-modules\") pod \"kube-proxy-vp9tj\" (UID: \"b9c7a3be-d3de-44ee-b4f2-a3741705a5b1\") " pod="kube-system/kube-proxy-vp9tj" Sep 10 05:19:05.259007 kubelet[2726]: I0910 05:19:05.258985 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhb5s\" (UniqueName: \"kubernetes.io/projected/b9c7a3be-d3de-44ee-b4f2-a3741705a5b1-kube-api-access-nhb5s\") pod \"kube-proxy-vp9tj\" (UID: \"b9c7a3be-d3de-44ee-b4f2-a3741705a5b1\") " pod="kube-system/kube-proxy-vp9tj" Sep 10 05:19:05.259007 kubelet[2726]: I0910 05:19:05.259005 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9c7a3be-d3de-44ee-b4f2-a3741705a5b1-xtables-lock\") pod \"kube-proxy-vp9tj\" (UID: \"b9c7a3be-d3de-44ee-b4f2-a3741705a5b1\") " pod="kube-system/kube-proxy-vp9tj" Sep 10 05:19:05.259146 kubelet[2726]: I0910 05:19:05.259019 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b9c7a3be-d3de-44ee-b4f2-a3741705a5b1-kube-proxy\") pod \"kube-proxy-vp9tj\" (UID: \"b9c7a3be-d3de-44ee-b4f2-a3741705a5b1\") " pod="kube-system/kube-proxy-vp9tj" Sep 10 05:19:05.476522 kubelet[2726]: E0910 05:19:05.476326 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:05.531217 kubelet[2726]: E0910 05:19:05.531174 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:05.531954 containerd[1583]: time="2025-09-10T05:19:05.531853592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vp9tj,Uid:b9c7a3be-d3de-44ee-b4f2-a3741705a5b1,Namespace:kube-system,Attempt:0,}" Sep 10 05:19:05.562007 containerd[1583]: time="2025-09-10T05:19:05.561963476Z" level=info msg="connecting to shim e8ce773d5f93c4bef11376d4251b45ebd686c6e781dfe06600427a24a9d7b323" address="unix:///run/containerd/s/8c1027e0fabe1439989ad59f3c4553c937225a265044875f498021e6f81d711f" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:19:05.599678 systemd[1]: Started cri-containerd-e8ce773d5f93c4bef11376d4251b45ebd686c6e781dfe06600427a24a9d7b323.scope - libcontainer container e8ce773d5f93c4bef11376d4251b45ebd686c6e781dfe06600427a24a9d7b323. 
Sep 10 05:19:05.635194 containerd[1583]: time="2025-09-10T05:19:05.635139561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vp9tj,Uid:b9c7a3be-d3de-44ee-b4f2-a3741705a5b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8ce773d5f93c4bef11376d4251b45ebd686c6e781dfe06600427a24a9d7b323\"" Sep 10 05:19:05.635895 kubelet[2726]: E0910 05:19:05.635860 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:05.640575 containerd[1583]: time="2025-09-10T05:19:05.640534389Z" level=info msg="CreateContainer within sandbox \"e8ce773d5f93c4bef11376d4251b45ebd686c6e781dfe06600427a24a9d7b323\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 05:19:05.652511 containerd[1583]: time="2025-09-10T05:19:05.652455636Z" level=info msg="Container 00df9df8c8c5f6d07b42fe5fa8ab469eb37f2ac10aabc633ee56594a5229f891: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:19:05.660837 containerd[1583]: time="2025-09-10T05:19:05.660800626Z" level=info msg="CreateContainer within sandbox \"e8ce773d5f93c4bef11376d4251b45ebd686c6e781dfe06600427a24a9d7b323\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00df9df8c8c5f6d07b42fe5fa8ab469eb37f2ac10aabc633ee56594a5229f891\"" Sep 10 05:19:05.661522 containerd[1583]: time="2025-09-10T05:19:05.661461397Z" level=info msg="StartContainer for \"00df9df8c8c5f6d07b42fe5fa8ab469eb37f2ac10aabc633ee56594a5229f891\"" Sep 10 05:19:05.662997 containerd[1583]: time="2025-09-10T05:19:05.662968117Z" level=info msg="connecting to shim 00df9df8c8c5f6d07b42fe5fa8ab469eb37f2ac10aabc633ee56594a5229f891" address="unix:///run/containerd/s/8c1027e0fabe1439989ad59f3c4553c937225a265044875f498021e6f81d711f" protocol=ttrpc version=3 Sep 10 05:19:05.686623 systemd[1]: Started cri-containerd-00df9df8c8c5f6d07b42fe5fa8ab469eb37f2ac10aabc633ee56594a5229f891.scope - libcontainer container 00df9df8c8c5f6d07b42fe5fa8ab469eb37f2ac10aabc633ee56594a5229f891. Sep 10 05:19:05.818120 containerd[1583]: time="2025-09-10T05:19:05.818077557Z" level=info msg="StartContainer for \"00df9df8c8c5f6d07b42fe5fa8ab469eb37f2ac10aabc633ee56594a5229f891\" returns successfully" Sep 10 05:19:05.818101 systemd[1]: Created slice kubepods-besteffort-pod928bef3a_ff24_450d_8c63_683d40ecb759.slice - libcontainer container kubepods-besteffort-pod928bef3a_ff24_450d_8c63_683d40ecb759.slice. 
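The slice names systemd reports here follow the kubelet's systemd cgroup-driver convention for a pod's QoS class and UID: "kubepods-<qos>-pod<uid>.slice", with the dashes in the UID escaped to underscores because "-" is systemd's slice hierarchy separator. A small sketch reproducing the leaf name for the best-effort tigera-operator pod created above (the parent kubepods.slice hierarchy and the guaranteed-class variant are not shown):

package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the leaf slice name used for a best-effort or
// burstable pod: "kubepods-<qos>-pod<uid>.slice", with dashes in the UID
// replaced by underscores so they are not read as hierarchy separators.
func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// UID of the tigera-operator pod from the log entry above.
	fmt.Println(podSliceName("besteffort", "928bef3a-ff24-450d-8c63-683d40ecb759"))
	// -> kubepods-besteffort-pod928bef3a_ff24_450d_8c63_683d40ecb759.slice
}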
Sep 10 05:19:05.962401 kubelet[2726]: I0910 05:19:05.962314 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/928bef3a-ff24-450d-8c63-683d40ecb759-var-lib-calico\") pod \"tigera-operator-58fc44c59b-hchmd\" (UID: \"928bef3a-ff24-450d-8c63-683d40ecb759\") " pod="tigera-operator/tigera-operator-58fc44c59b-hchmd" Sep 10 05:19:05.962401 kubelet[2726]: I0910 05:19:05.962353 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5brzf\" (UniqueName: \"kubernetes.io/projected/928bef3a-ff24-450d-8c63-683d40ecb759-kube-api-access-5brzf\") pod \"tigera-operator-58fc44c59b-hchmd\" (UID: \"928bef3a-ff24-450d-8c63-683d40ecb759\") " pod="tigera-operator/tigera-operator-58fc44c59b-hchmd" Sep 10 05:19:06.075658 kubelet[2726]: E0910 05:19:06.075360 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:06.075658 kubelet[2726]: E0910 05:19:06.075376 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:06.089862 kubelet[2726]: I0910 05:19:06.089811 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vp9tj" podStartSLOduration=1.089789028 podStartE2EDuration="1.089789028s" podCreationTimestamp="2025-09-10 05:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 05:19:06.089716981 +0000 UTC m=+7.158525023" watchObservedRunningTime="2025-09-10 05:19:06.089789028 +0000 UTC m=+7.158597070" Sep 10 05:19:06.123592 containerd[1583]: time="2025-09-10T05:19:06.123542581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-hchmd,Uid:928bef3a-ff24-450d-8c63-683d40ecb759,Namespace:tigera-operator,Attempt:0,}" Sep 10 05:19:06.144456 containerd[1583]: time="2025-09-10T05:19:06.144033002Z" level=info msg="connecting to shim 1cefb415f28bb5f17dbb060004f19b2fe098eac1fbf60de166838e1c55aa2798" address="unix:///run/containerd/s/5ac6b9a886dd4a8acc31cda7165851bb29d81e9c77dc4ac76cc67003226633d2" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:19:06.183625 systemd[1]: Started cri-containerd-1cefb415f28bb5f17dbb060004f19b2fe098eac1fbf60de166838e1c55aa2798.scope - libcontainer container 1cefb415f28bb5f17dbb060004f19b2fe098eac1fbf60de166838e1c55aa2798. 
Sep 10 05:19:06.229222 containerd[1583]: time="2025-09-10T05:19:06.229176852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-hchmd,Uid:928bef3a-ff24-450d-8c63-683d40ecb759,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1cefb415f28bb5f17dbb060004f19b2fe098eac1fbf60de166838e1c55aa2798\"" Sep 10 05:19:06.230933 containerd[1583]: time="2025-09-10T05:19:06.230907815Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 10 05:19:06.372476 kubelet[2726]: E0910 05:19:06.372377 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:07.077513 kubelet[2726]: E0910 05:19:07.077390 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:07.843677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1759239389.mount: Deactivated successfully. Sep 10 05:19:08.951325 containerd[1583]: time="2025-09-10T05:19:08.951263181Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:08.952048 containerd[1583]: time="2025-09-10T05:19:08.952017165Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 10 05:19:08.953250 containerd[1583]: time="2025-09-10T05:19:08.953198437Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:08.955192 containerd[1583]: time="2025-09-10T05:19:08.955162297Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:08.955925 containerd[1583]: time="2025-09-10T05:19:08.955893388Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.724899781s" Sep 10 05:19:08.955925 containerd[1583]: time="2025-09-10T05:19:08.955921943Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 10 05:19:08.957751 containerd[1583]: time="2025-09-10T05:19:08.957703809Z" level=info msg="CreateContainer within sandbox \"1cefb415f28bb5f17dbb060004f19b2fe098eac1fbf60de166838e1c55aa2798\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 10 05:19:08.984964 kubelet[2726]: E0910 05:19:08.984928 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:08.992455 containerd[1583]: time="2025-09-10T05:19:08.991983852Z" level=info msg="Container ddcf0c37cded6df6a91c363e11e6f25e7095b699e1542a32264e5f95b5a357d5: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:19:09.004520 containerd[1583]: time="2025-09-10T05:19:09.004459329Z" level=info msg="CreateContainer within sandbox 
\"1cefb415f28bb5f17dbb060004f19b2fe098eac1fbf60de166838e1c55aa2798\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ddcf0c37cded6df6a91c363e11e6f25e7095b699e1542a32264e5f95b5a357d5\"" Sep 10 05:19:09.004975 containerd[1583]: time="2025-09-10T05:19:09.004943673Z" level=info msg="StartContainer for \"ddcf0c37cded6df6a91c363e11e6f25e7095b699e1542a32264e5f95b5a357d5\"" Sep 10 05:19:09.005901 containerd[1583]: time="2025-09-10T05:19:09.005879070Z" level=info msg="connecting to shim ddcf0c37cded6df6a91c363e11e6f25e7095b699e1542a32264e5f95b5a357d5" address="unix:///run/containerd/s/5ac6b9a886dd4a8acc31cda7165851bb29d81e9c77dc4ac76cc67003226633d2" protocol=ttrpc version=3 Sep 10 05:19:09.061623 systemd[1]: Started cri-containerd-ddcf0c37cded6df6a91c363e11e6f25e7095b699e1542a32264e5f95b5a357d5.scope - libcontainer container ddcf0c37cded6df6a91c363e11e6f25e7095b699e1542a32264e5f95b5a357d5. Sep 10 05:19:09.082465 kubelet[2726]: E0910 05:19:09.082426 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:09.097094 containerd[1583]: time="2025-09-10T05:19:09.097041880Z" level=info msg="StartContainer for \"ddcf0c37cded6df6a91c363e11e6f25e7095b699e1542a32264e5f95b5a357d5\" returns successfully" Sep 10 05:19:10.092728 kubelet[2726]: I0910 05:19:10.092664 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-hchmd" podStartSLOduration=2.366375485 podStartE2EDuration="5.092644634s" podCreationTimestamp="2025-09-10 05:19:05 +0000 UTC" firstStartedPulling="2025-09-10 05:19:06.230451523 +0000 UTC m=+7.299259565" lastFinishedPulling="2025-09-10 05:19:08.956720672 +0000 UTC m=+10.025528714" observedRunningTime="2025-09-10 05:19:10.092454806 +0000 UTC m=+11.161262838" watchObservedRunningTime="2025-09-10 05:19:10.092644634 +0000 UTC m=+11.161452666" Sep 10 05:19:14.265473 sudo[1800]: pam_unix(sudo:session): session closed for user root Sep 10 05:19:14.267332 sshd[1799]: Connection closed by 10.0.0.1 port 35962 Sep 10 05:19:14.269728 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Sep 10 05:19:14.274754 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:35962.service: Deactivated successfully. Sep 10 05:19:14.280615 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 05:19:14.280939 systemd[1]: session-7.scope: Consumed 5.418s CPU time, 226.2M memory peak. Sep 10 05:19:14.284679 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit. Sep 10 05:19:14.286008 systemd-logind[1563]: Removed session 7. Sep 10 05:19:16.478645 update_engine[1570]: I20250910 05:19:16.478544 1570 update_attempter.cc:509] Updating boot flags... Sep 10 05:19:16.812914 systemd[1]: Created slice kubepods-besteffort-pod00f6510a_cf94_48e4_b43b_7a4329bc44ea.slice - libcontainer container kubepods-besteffort-pod00f6510a_cf94_48e4_b43b_7a4329bc44ea.slice. 
Sep 10 05:19:16.927726 kubelet[2726]: I0910 05:19:16.927670 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00f6510a-cf94-48e4-b43b-7a4329bc44ea-tigera-ca-bundle\") pod \"calico-typha-774b4dff5-vqdvz\" (UID: \"00f6510a-cf94-48e4-b43b-7a4329bc44ea\") " pod="calico-system/calico-typha-774b4dff5-vqdvz" Sep 10 05:19:16.927726 kubelet[2726]: I0910 05:19:16.927714 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbt28\" (UniqueName: \"kubernetes.io/projected/00f6510a-cf94-48e4-b43b-7a4329bc44ea-kube-api-access-qbt28\") pod \"calico-typha-774b4dff5-vqdvz\" (UID: \"00f6510a-cf94-48e4-b43b-7a4329bc44ea\") " pod="calico-system/calico-typha-774b4dff5-vqdvz" Sep 10 05:19:16.927726 kubelet[2726]: I0910 05:19:16.927732 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/00f6510a-cf94-48e4-b43b-7a4329bc44ea-typha-certs\") pod \"calico-typha-774b4dff5-vqdvz\" (UID: \"00f6510a-cf94-48e4-b43b-7a4329bc44ea\") " pod="calico-system/calico-typha-774b4dff5-vqdvz" Sep 10 05:19:17.117977 kubelet[2726]: E0910 05:19:17.117763 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:17.118701 containerd[1583]: time="2025-09-10T05:19:17.118261477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-774b4dff5-vqdvz,Uid:00f6510a-cf94-48e4-b43b-7a4329bc44ea,Namespace:calico-system,Attempt:0,}" Sep 10 05:19:17.159805 containerd[1583]: time="2025-09-10T05:19:17.159750246Z" level=info msg="connecting to shim 38b57eb29c4e76c030ed62931867ce9a9d375035085310c2134d852d494d5388" address="unix:///run/containerd/s/7b5f76f4d3683c462483e76166e8c86d30c3e2c8b2662291049ddb1a524639d5" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:19:17.190727 systemd[1]: Started cri-containerd-38b57eb29c4e76c030ed62931867ce9a9d375035085310c2134d852d494d5388.scope - libcontainer container 38b57eb29c4e76c030ed62931867ce9a9d375035085310c2134d852d494d5388. Sep 10 05:19:17.216158 systemd[1]: Created slice kubepods-besteffort-pod55b9e5e9_cd06_4d96_b1e2_cab1a372996f.slice - libcontainer container kubepods-besteffort-pod55b9e5e9_cd06_4d96_b1e2_cab1a372996f.slice. 
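The calico-node pod defined next mounts flexvol-driver-host, and the repeated FlexVolume driver-call failures that follow its volume list come from the kubelet's plugin prober: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init" and tries to JSON-decode the driver's stdout, so a missing binary produces both "executable file not found in $PATH" and, because the captured output is empty, "unexpected end of JSON input". A rough Go sketch of that call-and-decode pattern (the status struct and error wording are illustrative, not the kubelet's exact types):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is an illustrative stand-in for the JSON a FlexVolume driver
// is expected to print in response to the "init" call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// callDriver runs the driver binary and decodes its output; with a missing
// executable both the exec error and the empty-output unmarshal error occur,
// matching the paired messages seen in the log.
func callDriver(executable string, args ...string) (*driverStatus, error) {
	out, execErr := exec.Command(executable, args...).CombinedOutput()

	var st driverStatus
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		return nil, fmt.Errorf("driver call failed: %v, unmarshal failed: %v", execErr, jsonErr)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	if err != nil {
		fmt.Println(err)
	}
}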
Sep 10 05:19:17.330473 kubelet[2726]: I0910 05:19:17.330398 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/55b9e5e9-cd06-4d96-b1e2-cab1a372996f-flexvol-driver-host\") pod \"calico-node-96fmq\" (UID: \"55b9e5e9-cd06-4d96-b1e2-cab1a372996f\") " pod="calico-system/calico-node-96fmq" Sep 10 05:19:17.330473 kubelet[2726]: I0910 05:19:17.330438 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/55b9e5e9-cd06-4d96-b1e2-cab1a372996f-var-run-calico\") pod \"calico-node-96fmq\" (UID: \"55b9e5e9-cd06-4d96-b1e2-cab1a372996f\") " pod="calico-system/calico-node-96fmq" Sep 10 05:19:17.330473 kubelet[2726]: I0910 05:19:17.330456 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55b9e5e9-cd06-4d96-b1e2-cab1a372996f-xtables-lock\") pod \"calico-node-96fmq\" (UID: \"55b9e5e9-cd06-4d96-b1e2-cab1a372996f\") " pod="calico-system/calico-node-96fmq" Sep 10 05:19:17.330473 kubelet[2726]: I0910 05:19:17.330469 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n265d\" (UniqueName: \"kubernetes.io/projected/55b9e5e9-cd06-4d96-b1e2-cab1a372996f-kube-api-access-n265d\") pod \"calico-node-96fmq\" (UID: \"55b9e5e9-cd06-4d96-b1e2-cab1a372996f\") " pod="calico-system/calico-node-96fmq" Sep 10 05:19:17.331665 kubelet[2726]: I0910 05:19:17.331595 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/55b9e5e9-cd06-4d96-b1e2-cab1a372996f-cni-log-dir\") pod \"calico-node-96fmq\" (UID: \"55b9e5e9-cd06-4d96-b1e2-cab1a372996f\") " pod="calico-system/calico-node-96fmq" Sep 10 05:19:17.331842 kubelet[2726]: I0910 05:19:17.331679 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/55b9e5e9-cd06-4d96-b1e2-cab1a372996f-cni-net-dir\") pod \"calico-node-96fmq\" (UID: \"55b9e5e9-cd06-4d96-b1e2-cab1a372996f\") " pod="calico-system/calico-node-96fmq" Sep 10 05:19:17.331842 kubelet[2726]: I0910 05:19:17.331710 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/55b9e5e9-cd06-4d96-b1e2-cab1a372996f-var-lib-calico\") pod \"calico-node-96fmq\" (UID: \"55b9e5e9-cd06-4d96-b1e2-cab1a372996f\") " pod="calico-system/calico-node-96fmq" Sep 10 05:19:17.331842 kubelet[2726]: I0910 05:19:17.331734 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/55b9e5e9-cd06-4d96-b1e2-cab1a372996f-cni-bin-dir\") pod \"calico-node-96fmq\" (UID: \"55b9e5e9-cd06-4d96-b1e2-cab1a372996f\") " pod="calico-system/calico-node-96fmq" Sep 10 05:19:17.331842 kubelet[2726]: I0910 05:19:17.331755 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55b9e5e9-cd06-4d96-b1e2-cab1a372996f-tigera-ca-bundle\") pod \"calico-node-96fmq\" (UID: \"55b9e5e9-cd06-4d96-b1e2-cab1a372996f\") " pod="calico-system/calico-node-96fmq" Sep 10 05:19:17.331842 kubelet[2726]: I0910 05:19:17.331783 2726 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/55b9e5e9-cd06-4d96-b1e2-cab1a372996f-policysync\") pod \"calico-node-96fmq\" (UID: \"55b9e5e9-cd06-4d96-b1e2-cab1a372996f\") " pod="calico-system/calico-node-96fmq" Sep 10 05:19:17.331961 kubelet[2726]: I0910 05:19:17.331806 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/55b9e5e9-cd06-4d96-b1e2-cab1a372996f-node-certs\") pod \"calico-node-96fmq\" (UID: \"55b9e5e9-cd06-4d96-b1e2-cab1a372996f\") " pod="calico-system/calico-node-96fmq" Sep 10 05:19:17.331961 kubelet[2726]: I0910 05:19:17.331834 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55b9e5e9-cd06-4d96-b1e2-cab1a372996f-lib-modules\") pod \"calico-node-96fmq\" (UID: \"55b9e5e9-cd06-4d96-b1e2-cab1a372996f\") " pod="calico-system/calico-node-96fmq" Sep 10 05:19:17.434703 kubelet[2726]: E0910 05:19:17.434575 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.434703 kubelet[2726]: W0910 05:19:17.434601 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.434703 kubelet[2726]: E0910 05:19:17.434653 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.436591 kubelet[2726]: E0910 05:19:17.436563 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.436591 kubelet[2726]: W0910 05:19:17.436587 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.436692 kubelet[2726]: E0910 05:19:17.436610 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.533707 kubelet[2726]: E0910 05:19:17.533660 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.533707 kubelet[2726]: W0910 05:19:17.533690 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.533707 kubelet[2726]: E0910 05:19:17.533718 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:17.574309 containerd[1583]: time="2025-09-10T05:19:17.574149251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-774b4dff5-vqdvz,Uid:00f6510a-cf94-48e4-b43b-7a4329bc44ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"38b57eb29c4e76c030ed62931867ce9a9d375035085310c2134d852d494d5388\"" Sep 10 05:19:17.575589 kubelet[2726]: E0910 05:19:17.575422 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:17.577783 containerd[1583]: time="2025-09-10T05:19:17.577361039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 10 05:19:17.583922 kubelet[2726]: E0910 05:19:17.583865 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.583922 kubelet[2726]: W0910 05:19:17.583922 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.584101 kubelet[2726]: E0910 05:19:17.583946 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.698380 kubelet[2726]: E0910 05:19:17.697905 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vhs9" podUID="4e653afd-d957-40d1-839d-ca0bc8c42646" Sep 10 05:19:17.733576 kubelet[2726]: E0910 05:19:17.733524 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.733576 kubelet[2726]: W0910 05:19:17.733550 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.733576 kubelet[2726]: E0910 05:19:17.733573 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.733813 kubelet[2726]: E0910 05:19:17.733800 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.733813 kubelet[2726]: W0910 05:19:17.733809 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.733903 kubelet[2726]: E0910 05:19:17.733818 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:17.733967 kubelet[2726]: E0910 05:19:17.733953 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.733997 kubelet[2726]: W0910 05:19:17.733967 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.733997 kubelet[2726]: E0910 05:19:17.733975 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.734125 kubelet[2726]: E0910 05:19:17.734116 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.734125 kubelet[2726]: W0910 05:19:17.734122 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.734165 kubelet[2726]: E0910 05:19:17.734130 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.734280 kubelet[2726]: E0910 05:19:17.734268 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.734280 kubelet[2726]: W0910 05:19:17.734278 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.734349 kubelet[2726]: E0910 05:19:17.734285 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.734464 kubelet[2726]: E0910 05:19:17.734432 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.734464 kubelet[2726]: W0910 05:19:17.734450 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.734464 kubelet[2726]: E0910 05:19:17.734459 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.734995 kubelet[2726]: E0910 05:19:17.734647 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.734995 kubelet[2726]: W0910 05:19:17.734655 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.734995 kubelet[2726]: E0910 05:19:17.734664 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:17.734995 kubelet[2726]: E0910 05:19:17.734844 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.734995 kubelet[2726]: W0910 05:19:17.734851 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.734995 kubelet[2726]: E0910 05:19:17.734859 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.735158 kubelet[2726]: E0910 05:19:17.735026 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.735158 kubelet[2726]: W0910 05:19:17.735043 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.735158 kubelet[2726]: E0910 05:19:17.735050 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.735254 kubelet[2726]: E0910 05:19:17.735239 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.735254 kubelet[2726]: W0910 05:19:17.735251 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.735297 kubelet[2726]: E0910 05:19:17.735262 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.735551 kubelet[2726]: E0910 05:19:17.735538 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.735551 kubelet[2726]: W0910 05:19:17.735548 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.735615 kubelet[2726]: E0910 05:19:17.735557 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.735775 kubelet[2726]: E0910 05:19:17.735751 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.735775 kubelet[2726]: W0910 05:19:17.735763 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.735775 kubelet[2726]: E0910 05:19:17.735772 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:17.735988 kubelet[2726]: E0910 05:19:17.735964 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.736018 kubelet[2726]: W0910 05:19:17.735988 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.736018 kubelet[2726]: E0910 05:19:17.735996 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.736260 kubelet[2726]: E0910 05:19:17.736231 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.736300 kubelet[2726]: W0910 05:19:17.736261 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.736326 kubelet[2726]: E0910 05:19:17.736298 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.736618 kubelet[2726]: E0910 05:19:17.736597 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.736618 kubelet[2726]: W0910 05:19:17.736613 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.736684 kubelet[2726]: E0910 05:19:17.736625 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.736809 kubelet[2726]: E0910 05:19:17.736796 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.736809 kubelet[2726]: W0910 05:19:17.736805 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.736889 kubelet[2726]: E0910 05:19:17.736813 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.737001 kubelet[2726]: E0910 05:19:17.736979 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.737001 kubelet[2726]: W0910 05:19:17.736988 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.737001 kubelet[2726]: E0910 05:19:17.736995 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:17.737193 kubelet[2726]: E0910 05:19:17.737182 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.737193 kubelet[2726]: W0910 05:19:17.737191 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.737254 kubelet[2726]: E0910 05:19:17.737201 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.737401 kubelet[2726]: E0910 05:19:17.737383 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.737401 kubelet[2726]: W0910 05:19:17.737398 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.737470 kubelet[2726]: E0910 05:19:17.737410 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.737637 kubelet[2726]: E0910 05:19:17.737624 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.737637 kubelet[2726]: W0910 05:19:17.737635 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.737697 kubelet[2726]: E0910 05:19:17.737646 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.737886 kubelet[2726]: E0910 05:19:17.737874 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.737886 kubelet[2726]: W0910 05:19:17.737882 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.737943 kubelet[2726]: E0910 05:19:17.737889 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:17.737943 kubelet[2726]: I0910 05:19:17.737914 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4e653afd-d957-40d1-839d-ca0bc8c42646-socket-dir\") pod \"csi-node-driver-6vhs9\" (UID: \"4e653afd-d957-40d1-839d-ca0bc8c42646\") " pod="calico-system/csi-node-driver-6vhs9" Sep 10 05:19:17.738165 kubelet[2726]: E0910 05:19:17.738134 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.738165 kubelet[2726]: W0910 05:19:17.738153 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.738220 kubelet[2726]: E0910 05:19:17.738182 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.738220 kubelet[2726]: I0910 05:19:17.738213 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e653afd-d957-40d1-839d-ca0bc8c42646-kubelet-dir\") pod \"csi-node-driver-6vhs9\" (UID: \"4e653afd-d957-40d1-839d-ca0bc8c42646\") " pod="calico-system/csi-node-driver-6vhs9" Sep 10 05:19:17.738449 kubelet[2726]: E0910 05:19:17.738430 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.738449 kubelet[2726]: W0910 05:19:17.738447 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.738544 kubelet[2726]: E0910 05:19:17.738463 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.738544 kubelet[2726]: I0910 05:19:17.738498 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4e653afd-d957-40d1-839d-ca0bc8c42646-registration-dir\") pod \"csi-node-driver-6vhs9\" (UID: \"4e653afd-d957-40d1-839d-ca0bc8c42646\") " pod="calico-system/csi-node-driver-6vhs9" Sep 10 05:19:17.738743 kubelet[2726]: E0910 05:19:17.738724 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.738743 kubelet[2726]: W0910 05:19:17.738738 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.738808 kubelet[2726]: E0910 05:19:17.738765 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:17.738998 kubelet[2726]: E0910 05:19:17.738983 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.738998 kubelet[2726]: W0910 05:19:17.738995 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.739067 kubelet[2726]: E0910 05:19:17.739012 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.739191 kubelet[2726]: E0910 05:19:17.739179 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.739191 kubelet[2726]: W0910 05:19:17.739188 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.739338 kubelet[2726]: E0910 05:19:17.739200 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.739374 kubelet[2726]: E0910 05:19:17.739345 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.739374 kubelet[2726]: W0910 05:19:17.739352 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.739374 kubelet[2726]: E0910 05:19:17.739365 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.739586 kubelet[2726]: E0910 05:19:17.739561 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.739586 kubelet[2726]: W0910 05:19:17.739581 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.739660 kubelet[2726]: E0910 05:19:17.739597 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:17.739660 kubelet[2726]: I0910 05:19:17.739614 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95xcz\" (UniqueName: \"kubernetes.io/projected/4e653afd-d957-40d1-839d-ca0bc8c42646-kube-api-access-95xcz\") pod \"csi-node-driver-6vhs9\" (UID: \"4e653afd-d957-40d1-839d-ca0bc8c42646\") " pod="calico-system/csi-node-driver-6vhs9" Sep 10 05:19:17.739855 kubelet[2726]: E0910 05:19:17.739824 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.739855 kubelet[2726]: W0910 05:19:17.739837 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.740030 kubelet[2726]: E0910 05:19:17.739924 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.740030 kubelet[2726]: I0910 05:19:17.739944 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4e653afd-d957-40d1-839d-ca0bc8c42646-varrun\") pod \"csi-node-driver-6vhs9\" (UID: \"4e653afd-d957-40d1-839d-ca0bc8c42646\") " pod="calico-system/csi-node-driver-6vhs9" Sep 10 05:19:17.740030 kubelet[2726]: E0910 05:19:17.740022 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.740030 kubelet[2726]: W0910 05:19:17.740030 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.740169 kubelet[2726]: E0910 05:19:17.740079 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.740236 kubelet[2726]: E0910 05:19:17.740219 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.740236 kubelet[2726]: W0910 05:19:17.740229 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.740291 kubelet[2726]: E0910 05:19:17.740241 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.740449 kubelet[2726]: E0910 05:19:17.740432 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.740472 kubelet[2726]: W0910 05:19:17.740455 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.740472 kubelet[2726]: E0910 05:19:17.740469 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:17.740649 kubelet[2726]: E0910 05:19:17.740632 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.740673 kubelet[2726]: W0910 05:19:17.740644 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.740673 kubelet[2726]: E0910 05:19:17.740663 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.740843 kubelet[2726]: E0910 05:19:17.740803 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.740843 kubelet[2726]: W0910 05:19:17.740822 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.740843 kubelet[2726]: E0910 05:19:17.740830 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.741009 kubelet[2726]: E0910 05:19:17.740992 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.741009 kubelet[2726]: W0910 05:19:17.741003 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.741077 kubelet[2726]: E0910 05:19:17.741012 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.819513 containerd[1583]: time="2025-09-10T05:19:17.819419134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-96fmq,Uid:55b9e5e9-cd06-4d96-b1e2-cab1a372996f,Namespace:calico-system,Attempt:0,}" Sep 10 05:19:17.840681 kubelet[2726]: E0910 05:19:17.840622 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.840681 kubelet[2726]: W0910 05:19:17.840649 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.840681 kubelet[2726]: E0910 05:19:17.840675 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:17.841595 kubelet[2726]: E0910 05:19:17.841558 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.841595 kubelet[2726]: W0910 05:19:17.841574 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.842513 kubelet[2726]: E0910 05:19:17.841617 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.843743 kubelet[2726]: E0910 05:19:17.842968 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.843743 kubelet[2726]: W0910 05:19:17.842986 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.843743 kubelet[2726]: E0910 05:19:17.843027 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.843743 kubelet[2726]: E0910 05:19:17.843287 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.843743 kubelet[2726]: W0910 05:19:17.843297 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.843743 kubelet[2726]: E0910 05:19:17.843397 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.844202 kubelet[2726]: E0910 05:19:17.844090 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.844202 kubelet[2726]: W0910 05:19:17.844107 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.844915 kubelet[2726]: E0910 05:19:17.844619 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.846184 kubelet[2726]: E0910 05:19:17.846167 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.846184 kubelet[2726]: W0910 05:19:17.846183 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.846294 kubelet[2726]: E0910 05:19:17.846263 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:17.846514 kubelet[2726]: E0910 05:19:17.846401 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.846514 kubelet[2726]: W0910 05:19:17.846425 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.846514 kubelet[2726]: E0910 05:19:17.846455 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.846837 kubelet[2726]: E0910 05:19:17.846781 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.846837 kubelet[2726]: W0910 05:19:17.846792 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.846998 kubelet[2726]: E0910 05:19:17.846860 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.847212 kubelet[2726]: E0910 05:19:17.847163 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.847212 kubelet[2726]: W0910 05:19:17.847173 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.847212 kubelet[2726]: E0910 05:19:17.847188 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.848354 kubelet[2726]: E0910 05:19:17.848336 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.848476 kubelet[2726]: W0910 05:19:17.848420 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.848476 kubelet[2726]: E0910 05:19:17.848438 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.848883 kubelet[2726]: E0910 05:19:17.848826 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.848883 kubelet[2726]: W0910 05:19:17.848838 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.848963 kubelet[2726]: E0910 05:19:17.848884 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:17.849205 kubelet[2726]: E0910 05:19:17.849191 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.849380 kubelet[2726]: W0910 05:19:17.849275 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.849380 kubelet[2726]: E0910 05:19:17.849373 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.849459 containerd[1583]: time="2025-09-10T05:19:17.849343986Z" level=info msg="connecting to shim 55798f02d7aefc31aa66e648f409000da6eef5baa315978401619106972ac777" address="unix:///run/containerd/s/7cbbae102601ca532cd597be346c529d7bfa7aabc8c54193d318ad2d9d981517" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:19:17.849972 kubelet[2726]: E0910 05:19:17.849830 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.849972 kubelet[2726]: W0910 05:19:17.849873 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.849972 kubelet[2726]: E0910 05:19:17.849949 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.851010 kubelet[2726]: E0910 05:19:17.850818 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.851010 kubelet[2726]: W0910 05:19:17.850833 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.851178 kubelet[2726]: E0910 05:19:17.851161 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.851380 kubelet[2726]: E0910 05:19:17.851367 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.851557 kubelet[2726]: W0910 05:19:17.851540 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.851704 kubelet[2726]: E0910 05:19:17.851689 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:17.851927 kubelet[2726]: E0910 05:19:17.851911 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.851927 kubelet[2726]: W0910 05:19:17.851926 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.852109 kubelet[2726]: E0910 05:19:17.852053 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.852177 kubelet[2726]: E0910 05:19:17.852163 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.852224 kubelet[2726]: W0910 05:19:17.852176 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.852357 kubelet[2726]: E0910 05:19:17.852237 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.852410 kubelet[2726]: E0910 05:19:17.852396 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.852441 kubelet[2726]: W0910 05:19:17.852409 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.852441 kubelet[2726]: E0910 05:19:17.852434 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.852711 kubelet[2726]: E0910 05:19:17.852696 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.852711 kubelet[2726]: W0910 05:19:17.852709 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.852863 kubelet[2726]: E0910 05:19:17.852749 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.853703 kubelet[2726]: E0910 05:19:17.853677 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.853703 kubelet[2726]: W0910 05:19:17.853697 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.853803 kubelet[2726]: E0910 05:19:17.853734 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:17.854021 kubelet[2726]: E0910 05:19:17.854003 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.854021 kubelet[2726]: W0910 05:19:17.854014 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.854113 kubelet[2726]: E0910 05:19:17.854069 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.854303 kubelet[2726]: E0910 05:19:17.854266 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.854303 kubelet[2726]: W0910 05:19:17.854282 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.854415 kubelet[2726]: E0910 05:19:17.854390 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.854621 kubelet[2726]: E0910 05:19:17.854603 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.854621 kubelet[2726]: W0910 05:19:17.854614 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.854710 kubelet[2726]: E0910 05:19:17.854629 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.854888 kubelet[2726]: E0910 05:19:17.854870 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.854888 kubelet[2726]: W0910 05:19:17.854881 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.854958 kubelet[2726]: E0910 05:19:17.854891 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.855516 kubelet[2726]: E0910 05:19:17.855334 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.855516 kubelet[2726]: W0910 05:19:17.855348 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.855516 kubelet[2726]: E0910 05:19:17.855384 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:17.866161 kubelet[2726]: E0910 05:19:17.866132 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:17.866313 kubelet[2726]: W0910 05:19:17.866298 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:17.866375 kubelet[2726]: E0910 05:19:17.866363 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:17.881617 systemd[1]: Started cri-containerd-55798f02d7aefc31aa66e648f409000da6eef5baa315978401619106972ac777.scope - libcontainer container 55798f02d7aefc31aa66e648f409000da6eef5baa315978401619106972ac777. Sep 10 05:19:17.912452 containerd[1583]: time="2025-09-10T05:19:17.912413865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-96fmq,Uid:55b9e5e9-cd06-4d96-b1e2-cab1a372996f,Namespace:calico-system,Attempt:0,} returns sandbox id \"55798f02d7aefc31aa66e648f409000da6eef5baa315978401619106972ac777\"" Sep 10 05:19:20.045348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921444043.mount: Deactivated successfully. Sep 10 05:19:20.049867 kubelet[2726]: E0910 05:19:20.049776 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vhs9" podUID="4e653afd-d957-40d1-839d-ca0bc8c42646" Sep 10 05:19:21.739874 containerd[1583]: time="2025-09-10T05:19:21.739809798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:21.740564 containerd[1583]: time="2025-09-10T05:19:21.740534051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 10 05:19:21.741734 containerd[1583]: time="2025-09-10T05:19:21.741701998Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:21.743737 containerd[1583]: time="2025-09-10T05:19:21.743673818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:21.744185 containerd[1583]: time="2025-09-10T05:19:21.744137931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 4.166747617s" Sep 10 05:19:21.744185 containerd[1583]: time="2025-09-10T05:19:21.744180220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 10 05:19:21.745277 containerd[1583]: time="2025-09-10T05:19:21.745254442Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 10 05:19:21.753515 containerd[1583]: time="2025-09-10T05:19:21.753162630Z" level=info msg="CreateContainer within sandbox \"38b57eb29c4e76c030ed62931867ce9a9d375035085310c2134d852d494d5388\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 10 05:19:21.763619 containerd[1583]: time="2025-09-10T05:19:21.763570471Z" level=info msg="Container b0e5c053bdd345c23c01edcf5eb1427bf4206904a0610a101d3e863bc75d3920: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:19:21.773812 containerd[1583]: time="2025-09-10T05:19:21.773720737Z" level=info msg="CreateContainer within sandbox \"38b57eb29c4e76c030ed62931867ce9a9d375035085310c2134d852d494d5388\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b0e5c053bdd345c23c01edcf5eb1427bf4206904a0610a101d3e863bc75d3920\"" Sep 10 05:19:21.775511 containerd[1583]: time="2025-09-10T05:19:21.774594791Z" level=info msg="StartContainer for \"b0e5c053bdd345c23c01edcf5eb1427bf4206904a0610a101d3e863bc75d3920\"" Sep 10 05:19:21.775684 containerd[1583]: time="2025-09-10T05:19:21.775638865Z" level=info msg="connecting to shim b0e5c053bdd345c23c01edcf5eb1427bf4206904a0610a101d3e863bc75d3920" address="unix:///run/containerd/s/7b5f76f4d3683c462483e76166e8c86d30c3e2c8b2662291049ddb1a524639d5" protocol=ttrpc version=3 Sep 10 05:19:21.804624 systemd[1]: Started cri-containerd-b0e5c053bdd345c23c01edcf5eb1427bf4206904a0610a101d3e863bc75d3920.scope - libcontainer container b0e5c053bdd345c23c01edcf5eb1427bf4206904a0610a101d3e863bc75d3920. Sep 10 05:19:21.898479 containerd[1583]: time="2025-09-10T05:19:21.898297020Z" level=info msg="StartContainer for \"b0e5c053bdd345c23c01edcf5eb1427bf4206904a0610a101d3e863bc75d3920\" returns successfully" Sep 10 05:19:22.049616 kubelet[2726]: E0910 05:19:22.049555 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vhs9" podUID="4e653afd-d957-40d1-839d-ca0bc8c42646" Sep 10 05:19:22.113095 kubelet[2726]: E0910 05:19:22.113046 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:22.171080 kubelet[2726]: E0910 05:19:22.171040 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.171080 kubelet[2726]: W0910 05:19:22.171066 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.171080 kubelet[2726]: E0910 05:19:22.171090 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:22.171333 kubelet[2726]: E0910 05:19:22.171318 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.171333 kubelet[2726]: W0910 05:19:22.171329 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.171382 kubelet[2726]: E0910 05:19:22.171339 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.171715 kubelet[2726]: E0910 05:19:22.171660 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.171715 kubelet[2726]: W0910 05:19:22.171688 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.171715 kubelet[2726]: E0910 05:19:22.171726 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.172041 kubelet[2726]: E0910 05:19:22.172019 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.172041 kubelet[2726]: W0910 05:19:22.172031 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.172041 kubelet[2726]: E0910 05:19:22.172042 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.172296 kubelet[2726]: E0910 05:19:22.172270 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.172296 kubelet[2726]: W0910 05:19:22.172286 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.172296 kubelet[2726]: E0910 05:19:22.172295 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.172600 kubelet[2726]: E0910 05:19:22.172546 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.172600 kubelet[2726]: W0910 05:19:22.172556 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.172600 kubelet[2726]: E0910 05:19:22.172572 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:22.172811 kubelet[2726]: E0910 05:19:22.172795 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.172811 kubelet[2726]: W0910 05:19:22.172806 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.172906 kubelet[2726]: E0910 05:19:22.172816 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.173017 kubelet[2726]: E0910 05:19:22.173001 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.173017 kubelet[2726]: W0910 05:19:22.173015 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.173081 kubelet[2726]: E0910 05:19:22.173029 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.173201 kubelet[2726]: E0910 05:19:22.173192 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.173234 kubelet[2726]: W0910 05:19:22.173201 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.173234 kubelet[2726]: E0910 05:19:22.173209 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.173370 kubelet[2726]: E0910 05:19:22.173352 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.173370 kubelet[2726]: W0910 05:19:22.173361 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.173370 kubelet[2726]: E0910 05:19:22.173369 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.173595 kubelet[2726]: E0910 05:19:22.173565 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.173595 kubelet[2726]: W0910 05:19:22.173578 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.173595 kubelet[2726]: E0910 05:19:22.173592 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:22.173817 kubelet[2726]: E0910 05:19:22.173787 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.173817 kubelet[2726]: W0910 05:19:22.173814 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.173868 kubelet[2726]: E0910 05:19:22.173824 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.174010 kubelet[2726]: E0910 05:19:22.173997 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.174010 kubelet[2726]: W0910 05:19:22.174006 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.174074 kubelet[2726]: E0910 05:19:22.174014 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.174197 kubelet[2726]: E0910 05:19:22.174182 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.174197 kubelet[2726]: W0910 05:19:22.174192 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.174197 kubelet[2726]: E0910 05:19:22.174199 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.174372 kubelet[2726]: E0910 05:19:22.174358 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.174372 kubelet[2726]: W0910 05:19:22.174367 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.174426 kubelet[2726]: E0910 05:19:22.174375 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.182898 kubelet[2726]: E0910 05:19:22.182863 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.182898 kubelet[2726]: W0910 05:19:22.182889 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.183036 kubelet[2726]: E0910 05:19:22.182915 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:22.183245 kubelet[2726]: E0910 05:19:22.183231 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.183245 kubelet[2726]: W0910 05:19:22.183241 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.183541 kubelet[2726]: E0910 05:19:22.183251 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.183541 kubelet[2726]: E0910 05:19:22.183449 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.183541 kubelet[2726]: W0910 05:19:22.183456 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.183541 kubelet[2726]: E0910 05:19:22.183464 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.183730 kubelet[2726]: E0910 05:19:22.183691 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.183730 kubelet[2726]: W0910 05:19:22.183699 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.183730 kubelet[2726]: E0910 05:19:22.183716 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.183915 kubelet[2726]: E0910 05:19:22.183900 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.183949 kubelet[2726]: W0910 05:19:22.183914 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.183949 kubelet[2726]: E0910 05:19:22.183935 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.184134 kubelet[2726]: E0910 05:19:22.184112 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.184134 kubelet[2726]: W0910 05:19:22.184126 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.184134 kubelet[2726]: E0910 05:19:22.184136 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:22.184348 kubelet[2726]: E0910 05:19:22.184332 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.184348 kubelet[2726]: W0910 05:19:22.184345 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.184418 kubelet[2726]: E0910 05:19:22.184380 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.184589 kubelet[2726]: E0910 05:19:22.184572 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.184643 kubelet[2726]: W0910 05:19:22.184586 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.184667 kubelet[2726]: E0910 05:19:22.184638 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.184846 kubelet[2726]: E0910 05:19:22.184825 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.184846 kubelet[2726]: W0910 05:19:22.184835 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.184895 kubelet[2726]: E0910 05:19:22.184851 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.185045 kubelet[2726]: E0910 05:19:22.185028 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.185045 kubelet[2726]: W0910 05:19:22.185040 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.185099 kubelet[2726]: E0910 05:19:22.185055 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.185235 kubelet[2726]: E0910 05:19:22.185221 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.185235 kubelet[2726]: W0910 05:19:22.185230 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.185282 kubelet[2726]: E0910 05:19:22.185243 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:22.185433 kubelet[2726]: E0910 05:19:22.185419 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.185433 kubelet[2726]: W0910 05:19:22.185429 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.185516 kubelet[2726]: E0910 05:19:22.185442 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.185646 kubelet[2726]: E0910 05:19:22.185633 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.185646 kubelet[2726]: W0910 05:19:22.185643 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.185706 kubelet[2726]: E0910 05:19:22.185655 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.185837 kubelet[2726]: E0910 05:19:22.185826 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.185837 kubelet[2726]: W0910 05:19:22.185834 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.185885 kubelet[2726]: E0910 05:19:22.185847 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.186033 kubelet[2726]: E0910 05:19:22.186021 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.186033 kubelet[2726]: W0910 05:19:22.186029 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.186075 kubelet[2726]: E0910 05:19:22.186041 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.186240 kubelet[2726]: E0910 05:19:22.186229 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.186240 kubelet[2726]: W0910 05:19:22.186236 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.186291 kubelet[2726]: E0910 05:19:22.186249 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:22.186506 kubelet[2726]: E0910 05:19:22.186474 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.186506 kubelet[2726]: W0910 05:19:22.186505 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.186556 kubelet[2726]: E0910 05:19:22.186514 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:22.186682 kubelet[2726]: E0910 05:19:22.186670 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:22.186682 kubelet[2726]: W0910 05:19:22.186679 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:22.186739 kubelet[2726]: E0910 05:19:22.186690 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.114694 kubelet[2726]: I0910 05:19:23.114662 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 05:19:23.115622 kubelet[2726]: E0910 05:19:23.115575 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:23.149519 containerd[1583]: time="2025-09-10T05:19:23.149450884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:23.150367 containerd[1583]: time="2025-09-10T05:19:23.150309829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 10 05:19:23.151455 containerd[1583]: time="2025-09-10T05:19:23.151424115Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:23.153699 containerd[1583]: time="2025-09-10T05:19:23.153651443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:23.154265 containerd[1583]: time="2025-09-10T05:19:23.154210064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.408928221s" Sep 10 05:19:23.154299 containerd[1583]: time="2025-09-10T05:19:23.154260068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 10 05:19:23.156641 containerd[1583]: 
time="2025-09-10T05:19:23.156615758Z" level=info msg="CreateContainer within sandbox \"55798f02d7aefc31aa66e648f409000da6eef5baa315978401619106972ac777\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 05:19:23.166167 containerd[1583]: time="2025-09-10T05:19:23.166117237Z" level=info msg="Container b13594ebfccc537377d384d6fb0675e8543116ce6bb8a6789ca0bad3d8074bd2: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:19:23.175864 containerd[1583]: time="2025-09-10T05:19:23.175813442Z" level=info msg="CreateContainer within sandbox \"55798f02d7aefc31aa66e648f409000da6eef5baa315978401619106972ac777\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b13594ebfccc537377d384d6fb0675e8543116ce6bb8a6789ca0bad3d8074bd2\"" Sep 10 05:19:23.176337 containerd[1583]: time="2025-09-10T05:19:23.176307681Z" level=info msg="StartContainer for \"b13594ebfccc537377d384d6fb0675e8543116ce6bb8a6789ca0bad3d8074bd2\"" Sep 10 05:19:23.177658 containerd[1583]: time="2025-09-10T05:19:23.177635679Z" level=info msg="connecting to shim b13594ebfccc537377d384d6fb0675e8543116ce6bb8a6789ca0bad3d8074bd2" address="unix:///run/containerd/s/7cbbae102601ca532cd597be346c529d7bfa7aabc8c54193d318ad2d9d981517" protocol=ttrpc version=3 Sep 10 05:19:23.181516 kubelet[2726]: E0910 05:19:23.180835 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.181516 kubelet[2726]: W0910 05:19:23.180882 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.181516 kubelet[2726]: E0910 05:19:23.180907 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.181516 kubelet[2726]: E0910 05:19:23.181097 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.181516 kubelet[2726]: W0910 05:19:23.181130 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.181516 kubelet[2726]: E0910 05:19:23.181141 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.181516 kubelet[2726]: E0910 05:19:23.181318 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.181516 kubelet[2726]: W0910 05:19:23.181328 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.181516 kubelet[2726]: E0910 05:19:23.181338 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:23.181836 kubelet[2726]: E0910 05:19:23.181557 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.181836 kubelet[2726]: W0910 05:19:23.181591 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.181836 kubelet[2726]: E0910 05:19:23.181602 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.181836 kubelet[2726]: E0910 05:19:23.181784 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.181836 kubelet[2726]: W0910 05:19:23.181790 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.181836 kubelet[2726]: E0910 05:19:23.181798 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.182031 kubelet[2726]: E0910 05:19:23.182010 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.182031 kubelet[2726]: W0910 05:19:23.182023 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.182031 kubelet[2726]: E0910 05:19:23.182031 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.182207 kubelet[2726]: E0910 05:19:23.182186 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.182207 kubelet[2726]: W0910 05:19:23.182199 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.182283 kubelet[2726]: E0910 05:19:23.182206 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.182414 kubelet[2726]: E0910 05:19:23.182396 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.182414 kubelet[2726]: W0910 05:19:23.182405 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.182414 kubelet[2726]: E0910 05:19:23.182413 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:23.182620 kubelet[2726]: E0910 05:19:23.182605 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.182620 kubelet[2726]: W0910 05:19:23.182615 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.182703 kubelet[2726]: E0910 05:19:23.182623 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.182861 kubelet[2726]: E0910 05:19:23.182846 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.182861 kubelet[2726]: W0910 05:19:23.182856 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.182937 kubelet[2726]: E0910 05:19:23.182863 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.183052 kubelet[2726]: E0910 05:19:23.183028 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.183052 kubelet[2726]: W0910 05:19:23.183041 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.183052 kubelet[2726]: E0910 05:19:23.183049 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.183304 kubelet[2726]: E0910 05:19:23.183271 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.183304 kubelet[2726]: W0910 05:19:23.183283 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.183304 kubelet[2726]: E0910 05:19:23.183293 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.183549 kubelet[2726]: E0910 05:19:23.183528 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.183549 kubelet[2726]: W0910 05:19:23.183547 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.183640 kubelet[2726]: E0910 05:19:23.183560 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:23.183754 kubelet[2726]: E0910 05:19:23.183737 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.183754 kubelet[2726]: W0910 05:19:23.183750 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.183829 kubelet[2726]: E0910 05:19:23.183760 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.183987 kubelet[2726]: E0910 05:19:23.183968 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.183987 kubelet[2726]: W0910 05:19:23.183983 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.184085 kubelet[2726]: E0910 05:19:23.183993 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.190352 kubelet[2726]: E0910 05:19:23.190322 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.190352 kubelet[2726]: W0910 05:19:23.190336 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.190352 kubelet[2726]: E0910 05:19:23.190348 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.191126 kubelet[2726]: E0910 05:19:23.191105 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.191126 kubelet[2726]: W0910 05:19:23.191120 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.191217 kubelet[2726]: E0910 05:19:23.191138 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.191716 kubelet[2726]: E0910 05:19:23.191695 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.191716 kubelet[2726]: W0910 05:19:23.191708 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.191804 kubelet[2726]: E0910 05:19:23.191730 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:23.192057 kubelet[2726]: E0910 05:19:23.192021 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.192057 kubelet[2726]: W0910 05:19:23.192046 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.192148 kubelet[2726]: E0910 05:19:23.192127 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.192322 kubelet[2726]: E0910 05:19:23.192304 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.192322 kubelet[2726]: W0910 05:19:23.192315 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.192391 kubelet[2726]: E0910 05:19:23.192362 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.192563 kubelet[2726]: E0910 05:19:23.192545 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.192563 kubelet[2726]: W0910 05:19:23.192557 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.192644 kubelet[2726]: E0910 05:19:23.192574 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.192767 kubelet[2726]: E0910 05:19:23.192751 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.192767 kubelet[2726]: W0910 05:19:23.192761 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.192844 kubelet[2726]: E0910 05:19:23.192772 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.193016 kubelet[2726]: E0910 05:19:23.192989 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.193016 kubelet[2726]: W0910 05:19:23.193002 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.193016 kubelet[2726]: E0910 05:19:23.193018 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:23.193247 kubelet[2726]: E0910 05:19:23.193230 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.193247 kubelet[2726]: W0910 05:19:23.193241 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.193316 kubelet[2726]: E0910 05:19:23.193254 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.193830 kubelet[2726]: E0910 05:19:23.193800 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.193830 kubelet[2726]: W0910 05:19:23.193814 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.193914 kubelet[2726]: E0910 05:19:23.193870 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.194524 kubelet[2726]: E0910 05:19:23.194047 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.194524 kubelet[2726]: W0910 05:19:23.194059 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.194524 kubelet[2726]: E0910 05:19:23.194100 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.194524 kubelet[2726]: E0910 05:19:23.194239 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.194524 kubelet[2726]: W0910 05:19:23.194249 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.194524 kubelet[2726]: E0910 05:19:23.194265 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.194524 kubelet[2726]: E0910 05:19:23.194440 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.194524 kubelet[2726]: W0910 05:19:23.194446 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.194524 kubelet[2726]: E0910 05:19:23.194460 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:23.194819 kubelet[2726]: E0910 05:19:23.194653 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.194819 kubelet[2726]: W0910 05:19:23.194661 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.194819 kubelet[2726]: E0910 05:19:23.194673 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.195522 kubelet[2726]: E0910 05:19:23.194937 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.195522 kubelet[2726]: W0910 05:19:23.194963 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.195522 kubelet[2726]: E0910 05:19:23.194973 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.195522 kubelet[2726]: E0910 05:19:23.195183 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.195522 kubelet[2726]: W0910 05:19:23.195190 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.195522 kubelet[2726]: E0910 05:19:23.195205 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.195522 kubelet[2726]: E0910 05:19:23.195455 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.195522 kubelet[2726]: W0910 05:19:23.195463 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.195522 kubelet[2726]: E0910 05:19:23.195477 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 05:19:23.195817 kubelet[2726]: E0910 05:19:23.195665 2726 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 05:19:23.195817 kubelet[2726]: W0910 05:19:23.195671 2726 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 05:19:23.195817 kubelet[2726]: E0910 05:19:23.195679 2726 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 05:19:23.208720 systemd[1]: Started cri-containerd-b13594ebfccc537377d384d6fb0675e8543116ce6bb8a6789ca0bad3d8074bd2.scope - libcontainer container b13594ebfccc537377d384d6fb0675e8543116ce6bb8a6789ca0bad3d8074bd2. Sep 10 05:19:23.267723 systemd[1]: cri-containerd-b13594ebfccc537377d384d6fb0675e8543116ce6bb8a6789ca0bad3d8074bd2.scope: Deactivated successfully. Sep 10 05:19:23.269600 containerd[1583]: time="2025-09-10T05:19:23.269562880Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b13594ebfccc537377d384d6fb0675e8543116ce6bb8a6789ca0bad3d8074bd2\" id:\"b13594ebfccc537377d384d6fb0675e8543116ce6bb8a6789ca0bad3d8074bd2\" pid:3464 exited_at:{seconds:1757481563 nanos:268973772}" Sep 10 05:19:23.307429 containerd[1583]: time="2025-09-10T05:19:23.307355494Z" level=info msg="received exit event container_id:\"b13594ebfccc537377d384d6fb0675e8543116ce6bb8a6789ca0bad3d8074bd2\" id:\"b13594ebfccc537377d384d6fb0675e8543116ce6bb8a6789ca0bad3d8074bd2\" pid:3464 exited_at:{seconds:1757481563 nanos:268973772}" Sep 10 05:19:23.318550 containerd[1583]: time="2025-09-10T05:19:23.318502858Z" level=info msg="StartContainer for \"b13594ebfccc537377d384d6fb0675e8543116ce6bb8a6789ca0bad3d8074bd2\" returns successfully" Sep 10 05:19:23.335776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b13594ebfccc537377d384d6fb0675e8543116ce6bb8a6789ca0bad3d8074bd2-rootfs.mount: Deactivated successfully. Sep 10 05:19:24.049233 kubelet[2726]: E0910 05:19:24.049174 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vhs9" podUID="4e653afd-d957-40d1-839d-ca0bc8c42646" Sep 10 05:19:24.120379 containerd[1583]: time="2025-09-10T05:19:24.120254337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 05:19:24.133838 kubelet[2726]: I0910 05:19:24.133765 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-774b4dff5-vqdvz" podStartSLOduration=3.965364314 podStartE2EDuration="8.133743341s" podCreationTimestamp="2025-09-10 05:19:16 +0000 UTC" firstStartedPulling="2025-09-10 05:19:17.576701748 +0000 UTC m=+18.645509790" lastFinishedPulling="2025-09-10 05:19:21.745080785 +0000 UTC m=+22.813888817" observedRunningTime="2025-09-10 05:19:22.125071366 +0000 UTC m=+23.193879418" watchObservedRunningTime="2025-09-10 05:19:24.133743341 +0000 UTC m=+25.202551383" Sep 10 05:19:26.049813 kubelet[2726]: E0910 05:19:26.049761 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vhs9" podUID="4e653afd-d957-40d1-839d-ca0bc8c42646" Sep 10 05:19:28.050060 kubelet[2726]: E0910 05:19:28.050004 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vhs9" podUID="4e653afd-d957-40d1-839d-ca0bc8c42646" Sep 10 05:19:30.049262 kubelet[2726]: E0910 05:19:30.049194 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vhs9" podUID="4e653afd-d957-40d1-839d-ca0bc8c42646" Sep 10 05:19:32.049199 kubelet[2726]: E0910 05:19:32.049149 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6vhs9" podUID="4e653afd-d957-40d1-839d-ca0bc8c42646" Sep 10 05:19:32.218768 containerd[1583]: time="2025-09-10T05:19:32.218688126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:32.219656 containerd[1583]: time="2025-09-10T05:19:32.219616510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 10 05:19:32.220850 containerd[1583]: time="2025-09-10T05:19:32.220828577Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:32.222855 containerd[1583]: time="2025-09-10T05:19:32.222828994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:32.223298 containerd[1583]: time="2025-09-10T05:19:32.223258451Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 8.102911741s" Sep 10 05:19:32.223298 containerd[1583]: time="2025-09-10T05:19:32.223291693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 10 05:19:32.224891 containerd[1583]: time="2025-09-10T05:19:32.224860189Z" level=info msg="CreateContainer within sandbox \"55798f02d7aefc31aa66e648f409000da6eef5baa315978401619106972ac777\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 05:19:32.236568 containerd[1583]: time="2025-09-10T05:19:32.236523477Z" level=info msg="Container a8c177b10d15de454080a40d11bea6a0ecb680f11fcdd169fb52562c2e239048: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:19:32.245233 containerd[1583]: time="2025-09-10T05:19:32.245188813Z" level=info msg="CreateContainer within sandbox \"55798f02d7aefc31aa66e648f409000da6eef5baa315978401619106972ac777\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a8c177b10d15de454080a40d11bea6a0ecb680f11fcdd169fb52562c2e239048\"" Sep 10 05:19:32.245597 containerd[1583]: time="2025-09-10T05:19:32.245575269Z" level=info msg="StartContainer for \"a8c177b10d15de454080a40d11bea6a0ecb680f11fcdd169fb52562c2e239048\"" Sep 10 05:19:32.247011 containerd[1583]: time="2025-09-10T05:19:32.246960491Z" level=info msg="connecting to shim a8c177b10d15de454080a40d11bea6a0ecb680f11fcdd169fb52562c2e239048" address="unix:///run/containerd/s/7cbbae102601ca532cd597be346c529d7bfa7aabc8c54193d318ad2d9d981517" protocol=ttrpc version=3 Sep 10 05:19:32.276665 systemd[1]: 
Started cri-containerd-a8c177b10d15de454080a40d11bea6a0ecb680f11fcdd169fb52562c2e239048.scope - libcontainer container a8c177b10d15de454080a40d11bea6a0ecb680f11fcdd169fb52562c2e239048. Sep 10 05:19:32.320156 containerd[1583]: time="2025-09-10T05:19:32.320039145Z" level=info msg="StartContainer for \"a8c177b10d15de454080a40d11bea6a0ecb680f11fcdd169fb52562c2e239048\" returns successfully" Sep 10 05:19:33.784998 containerd[1583]: time="2025-09-10T05:19:33.784939293Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 05:19:33.789314 containerd[1583]: time="2025-09-10T05:19:33.789262952Z" level=info msg="received exit event container_id:\"a8c177b10d15de454080a40d11bea6a0ecb680f11fcdd169fb52562c2e239048\" id:\"a8c177b10d15de454080a40d11bea6a0ecb680f11fcdd169fb52562c2e239048\" pid:3527 exited_at:{seconds:1757481573 nanos:789066293}" Sep 10 05:19:33.789466 containerd[1583]: time="2025-09-10T05:19:33.789342352Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8c177b10d15de454080a40d11bea6a0ecb680f11fcdd169fb52562c2e239048\" id:\"a8c177b10d15de454080a40d11bea6a0ecb680f11fcdd169fb52562c2e239048\" pid:3527 exited_at:{seconds:1757481573 nanos:789066293}" Sep 10 05:19:33.789327 systemd[1]: cri-containerd-a8c177b10d15de454080a40d11bea6a0ecb680f11fcdd169fb52562c2e239048.scope: Deactivated successfully. Sep 10 05:19:33.789672 systemd[1]: cri-containerd-a8c177b10d15de454080a40d11bea6a0ecb680f11fcdd169fb52562c2e239048.scope: Consumed 591ms CPU time, 176.9M memory peak, 3.4M read from disk, 171.3M written to disk. Sep 10 05:19:33.813472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8c177b10d15de454080a40d11bea6a0ecb680f11fcdd169fb52562c2e239048-rootfs.mount: Deactivated successfully. Sep 10 05:19:33.859737 kubelet[2726]: I0910 05:19:33.859677 2726 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 10 05:19:34.054383 systemd[1]: Created slice kubepods-besteffort-pod4e653afd_d957_40d1_839d_ca0bc8c42646.slice - libcontainer container kubepods-besteffort-pod4e653afd_d957_40d1_839d_ca0bc8c42646.slice. Sep 10 05:19:34.059474 containerd[1583]: time="2025-09-10T05:19:34.059418087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6vhs9,Uid:4e653afd-d957-40d1-839d-ca0bc8c42646,Namespace:calico-system,Attempt:0,}" Sep 10 05:19:34.132205 systemd[1]: Created slice kubepods-burstable-pod1e701f52_35b0_48d7_9e5d_b37cc64e28ac.slice - libcontainer container kubepods-burstable-pod1e701f52_35b0_48d7_9e5d_b37cc64e28ac.slice. Sep 10 05:19:34.138549 systemd[1]: Created slice kubepods-besteffort-podd4c0eba5_dbb0_4862_b63c_73f11b54cf28.slice - libcontainer container kubepods-besteffort-podd4c0eba5_dbb0_4862_b63c_73f11b54cf28.slice. Sep 10 05:19:34.144147 systemd[1]: Created slice kubepods-besteffort-podc18d3146_223b_4b5c_9fad_603cd0d8e559.slice - libcontainer container kubepods-besteffort-podc18d3146_223b_4b5c_9fad_603cd0d8e559.slice. Sep 10 05:19:34.149129 systemd[1]: Created slice kubepods-burstable-pod059478a1_bd0b_4735_ac44_be87620e3fa4.slice - libcontainer container kubepods-burstable-pod059478a1_bd0b_4735_ac44_be87620e3fa4.slice. 
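The repeated FlexVolume failures above come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ and invoking the driver with `init`: the binary is missing, so the call produces no output, and unmarshalling that empty output is what raises "unexpected end of JSON input". The following is a minimal Go sketch of that failure mode; the struct and error handling are illustrative stand-ins, not the kubelet's actual driver-call.go code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is an illustrative stand-in for the JSON a FlexVolume
// driver is expected to print in response to "init".
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The driver binary found in the plugin directory is not an existing
	// executable, so the call fails before any JSON is written (the exact
	// error wording depends on how the path is resolved).
	out, err := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init",
	).Output()
	fmt.Println("driver call error:", err)

	// Unmarshalling the empty output is what produces the second message
	// seen in the log: "unexpected end of JSON input".
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unmarshal error:", err)
	}
}
```

The kubelet treats these as non-fatal and keeps re-probing the plugin directory, which is consistent with the same three lines repeating on every probe pass.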
Sep 10 05:19:34.154590 systemd[1]: Created slice kubepods-besteffort-pod04d2b8bc_38c8_44b4_929b_263f46e6af1a.slice - libcontainer container kubepods-besteffort-pod04d2b8bc_38c8_44b4_929b_263f46e6af1a.slice. Sep 10 05:19:34.159670 systemd[1]: Created slice kubepods-besteffort-podd525cd80_8783_41ea_936c_c0e9bf80d085.slice - libcontainer container kubepods-besteffort-podd525cd80_8783_41ea_936c_c0e9bf80d085.slice. Sep 10 05:19:34.161146 kubelet[2726]: I0910 05:19:34.161102 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtk4p\" (UniqueName: \"kubernetes.io/projected/c18d3146-223b-4b5c-9fad-603cd0d8e559-kube-api-access-wtk4p\") pod \"calico-apiserver-5b879dbc57-7t9v5\" (UID: \"c18d3146-223b-4b5c-9fad-603cd0d8e559\") " pod="calico-apiserver/calico-apiserver-5b879dbc57-7t9v5" Sep 10 05:19:34.161146 kubelet[2726]: I0910 05:19:34.161136 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktj5k\" (UniqueName: \"kubernetes.io/projected/1e701f52-35b0-48d7-9e5d-b37cc64e28ac-kube-api-access-ktj5k\") pod \"coredns-7c65d6cfc9-ntfnx\" (UID: \"1e701f52-35b0-48d7-9e5d-b37cc64e28ac\") " pod="kube-system/coredns-7c65d6cfc9-ntfnx" Sep 10 05:19:34.161146 kubelet[2726]: I0910 05:19:34.161152 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/04d2b8bc-38c8-44b4-929b-263f46e6af1a-calico-apiserver-certs\") pod \"calico-apiserver-5b879dbc57-7zs56\" (UID: \"04d2b8bc-38c8-44b4-929b-263f46e6af1a\") " pod="calico-apiserver/calico-apiserver-5b879dbc57-7zs56" Sep 10 05:19:34.161299 kubelet[2726]: I0910 05:19:34.161168 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2lvn\" (UniqueName: \"kubernetes.io/projected/d4c0eba5-dbb0-4862-b63c-73f11b54cf28-kube-api-access-x2lvn\") pod \"calico-kube-controllers-8674dc7db6-njx5w\" (UID: \"d4c0eba5-dbb0-4862-b63c-73f11b54cf28\") " pod="calico-system/calico-kube-controllers-8674dc7db6-njx5w" Sep 10 05:19:34.161299 kubelet[2726]: I0910 05:19:34.161202 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhmz5\" (UniqueName: \"kubernetes.io/projected/04d2b8bc-38c8-44b4-929b-263f46e6af1a-kube-api-access-xhmz5\") pod \"calico-apiserver-5b879dbc57-7zs56\" (UID: \"04d2b8bc-38c8-44b4-929b-263f46e6af1a\") " pod="calico-apiserver/calico-apiserver-5b879dbc57-7zs56" Sep 10 05:19:34.161299 kubelet[2726]: I0910 05:19:34.161216 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b113321a-1b4c-49e8-b708-91fe4d366899-goldmane-key-pair\") pod \"goldmane-7988f88666-262kf\" (UID: \"b113321a-1b4c-49e8-b708-91fe4d366899\") " pod="calico-system/goldmane-7988f88666-262kf" Sep 10 05:19:34.161299 kubelet[2726]: I0910 05:19:34.161231 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kntfq\" (UniqueName: \"kubernetes.io/projected/b113321a-1b4c-49e8-b708-91fe4d366899-kube-api-access-kntfq\") pod \"goldmane-7988f88666-262kf\" (UID: \"b113321a-1b4c-49e8-b708-91fe4d366899\") " pod="calico-system/goldmane-7988f88666-262kf" Sep 10 05:19:34.161299 kubelet[2726]: I0910 05:19:34.161287 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c18d3146-223b-4b5c-9fad-603cd0d8e559-calico-apiserver-certs\") pod \"calico-apiserver-5b879dbc57-7t9v5\" (UID: \"c18d3146-223b-4b5c-9fad-603cd0d8e559\") " pod="calico-apiserver/calico-apiserver-5b879dbc57-7t9v5" Sep 10 05:19:34.161520 kubelet[2726]: I0910 05:19:34.161501 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b113321a-1b4c-49e8-b708-91fe4d366899-goldmane-ca-bundle\") pod \"goldmane-7988f88666-262kf\" (UID: \"b113321a-1b4c-49e8-b708-91fe4d366899\") " pod="calico-system/goldmane-7988f88666-262kf" Sep 10 05:19:34.161569 kubelet[2726]: I0910 05:19:34.161560 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/059478a1-bd0b-4735-ac44-be87620e3fa4-config-volume\") pod \"coredns-7c65d6cfc9-g798j\" (UID: \"059478a1-bd0b-4735-ac44-be87620e3fa4\") " pod="kube-system/coredns-7c65d6cfc9-g798j" Sep 10 05:19:34.161605 kubelet[2726]: I0910 05:19:34.161578 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d525cd80-8783-41ea-936c-c0e9bf80d085-whisker-ca-bundle\") pod \"whisker-678897967b-ffmqs\" (UID: \"d525cd80-8783-41ea-936c-c0e9bf80d085\") " pod="calico-system/whisker-678897967b-ffmqs" Sep 10 05:19:34.161705 kubelet[2726]: I0910 05:19:34.161664 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b113321a-1b4c-49e8-b708-91fe4d366899-config\") pod \"goldmane-7988f88666-262kf\" (UID: \"b113321a-1b4c-49e8-b708-91fe4d366899\") " pod="calico-system/goldmane-7988f88666-262kf" Sep 10 05:19:34.161705 kubelet[2726]: I0910 05:19:34.161698 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4ls4\" (UniqueName: \"kubernetes.io/projected/059478a1-bd0b-4735-ac44-be87620e3fa4-kube-api-access-r4ls4\") pod \"coredns-7c65d6cfc9-g798j\" (UID: \"059478a1-bd0b-4735-ac44-be87620e3fa4\") " pod="kube-system/coredns-7c65d6cfc9-g798j" Sep 10 05:19:34.161779 kubelet[2726]: I0910 05:19:34.161730 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwpmf\" (UniqueName: \"kubernetes.io/projected/d525cd80-8783-41ea-936c-c0e9bf80d085-kube-api-access-vwpmf\") pod \"whisker-678897967b-ffmqs\" (UID: \"d525cd80-8783-41ea-936c-c0e9bf80d085\") " pod="calico-system/whisker-678897967b-ffmqs" Sep 10 05:19:34.161779 kubelet[2726]: I0910 05:19:34.161752 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c0eba5-dbb0-4862-b63c-73f11b54cf28-tigera-ca-bundle\") pod \"calico-kube-controllers-8674dc7db6-njx5w\" (UID: \"d4c0eba5-dbb0-4862-b63c-73f11b54cf28\") " pod="calico-system/calico-kube-controllers-8674dc7db6-njx5w" Sep 10 05:19:34.161779 kubelet[2726]: I0910 05:19:34.161771 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e701f52-35b0-48d7-9e5d-b37cc64e28ac-config-volume\") pod \"coredns-7c65d6cfc9-ntfnx\" (UID: \"1e701f52-35b0-48d7-9e5d-b37cc64e28ac\") " pod="kube-system/coredns-7c65d6cfc9-ntfnx" Sep 
10 05:19:34.161878 kubelet[2726]: I0910 05:19:34.161786 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d525cd80-8783-41ea-936c-c0e9bf80d085-whisker-backend-key-pair\") pod \"whisker-678897967b-ffmqs\" (UID: \"d525cd80-8783-41ea-936c-c0e9bf80d085\") " pod="calico-system/whisker-678897967b-ffmqs" Sep 10 05:19:34.165287 systemd[1]: Created slice kubepods-besteffort-podb113321a_1b4c_49e8_b708_91fe4d366899.slice - libcontainer container kubepods-besteffort-podb113321a_1b4c_49e8_b708_91fe4d366899.slice. Sep 10 05:19:34.360679 containerd[1583]: time="2025-09-10T05:19:34.360535650Z" level=error msg="Failed to destroy network for sandbox \"206f62a38717ce98d41af964678186ed3df9eb14a1e0de8e6cc0f4d8bd8413f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.362608 containerd[1583]: time="2025-09-10T05:19:34.362559491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6vhs9,Uid:4e653afd-d957-40d1-839d-ca0bc8c42646,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"206f62a38717ce98d41af964678186ed3df9eb14a1e0de8e6cc0f4d8bd8413f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.363663 kubelet[2726]: E0910 05:19:34.363606 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"206f62a38717ce98d41af964678186ed3df9eb14a1e0de8e6cc0f4d8bd8413f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.363740 kubelet[2726]: E0910 05:19:34.363696 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"206f62a38717ce98d41af964678186ed3df9eb14a1e0de8e6cc0f4d8bd8413f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6vhs9" Sep 10 05:19:34.363740 kubelet[2726]: E0910 05:19:34.363719 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"206f62a38717ce98d41af964678186ed3df9eb14a1e0de8e6cc0f4d8bd8413f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6vhs9" Sep 10 05:19:34.363796 kubelet[2726]: E0910 05:19:34.363764 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6vhs9_calico-system(4e653afd-d957-40d1-839d-ca0bc8c42646)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6vhs9_calico-system(4e653afd-d957-40d1-839d-ca0bc8c42646)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"206f62a38717ce98d41af964678186ed3df9eb14a1e0de8e6cc0f4d8bd8413f2\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6vhs9" podUID="4e653afd-d957-40d1-839d-ca0bc8c42646" Sep 10 05:19:34.436579 kubelet[2726]: E0910 05:19:34.436512 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:34.437425 containerd[1583]: time="2025-09-10T05:19:34.437377246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ntfnx,Uid:1e701f52-35b0-48d7-9e5d-b37cc64e28ac,Namespace:kube-system,Attempt:0,}" Sep 10 05:19:34.443286 containerd[1583]: time="2025-09-10T05:19:34.442438440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8674dc7db6-njx5w,Uid:d4c0eba5-dbb0-4862-b63c-73f11b54cf28,Namespace:calico-system,Attempt:0,}" Sep 10 05:19:34.447930 containerd[1583]: time="2025-09-10T05:19:34.447879608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b879dbc57-7t9v5,Uid:c18d3146-223b-4b5c-9fad-603cd0d8e559,Namespace:calico-apiserver,Attempt:0,}" Sep 10 05:19:34.452117 kubelet[2726]: E0910 05:19:34.452083 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:34.453327 containerd[1583]: time="2025-09-10T05:19:34.453263789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g798j,Uid:059478a1-bd0b-4735-ac44-be87620e3fa4,Namespace:kube-system,Attempt:0,}" Sep 10 05:19:34.467151 containerd[1583]: time="2025-09-10T05:19:34.467098651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-678897967b-ffmqs,Uid:d525cd80-8783-41ea-936c-c0e9bf80d085,Namespace:calico-system,Attempt:0,}" Sep 10 05:19:34.468758 containerd[1583]: time="2025-09-10T05:19:34.468542873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-262kf,Uid:b113321a-1b4c-49e8-b708-91fe4d366899,Namespace:calico-system,Attempt:0,}" Sep 10 05:19:34.468758 containerd[1583]: time="2025-09-10T05:19:34.467700762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b879dbc57-7zs56,Uid:04d2b8bc-38c8-44b4-929b-263f46e6af1a,Namespace:calico-apiserver,Attempt:0,}" Sep 10 05:19:34.537044 containerd[1583]: time="2025-09-10T05:19:34.536962561Z" level=error msg="Failed to destroy network for sandbox \"4bb18297aaf37633014da545a16bdf0f6ac63f651d1f4c6eec3c8c65c16ce254\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.539511 containerd[1583]: time="2025-09-10T05:19:34.538440356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ntfnx,Uid:1e701f52-35b0-48d7-9e5d-b37cc64e28ac,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb18297aaf37633014da545a16bdf0f6ac63f651d1f4c6eec3c8c65c16ce254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.539627 kubelet[2726]: E0910 05:19:34.538744 2726 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb18297aaf37633014da545a16bdf0f6ac63f651d1f4c6eec3c8c65c16ce254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.539627 kubelet[2726]: E0910 05:19:34.538806 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb18297aaf37633014da545a16bdf0f6ac63f651d1f4c6eec3c8c65c16ce254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ntfnx" Sep 10 05:19:34.539627 kubelet[2726]: E0910 05:19:34.538826 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb18297aaf37633014da545a16bdf0f6ac63f651d1f4c6eec3c8c65c16ce254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ntfnx" Sep 10 05:19:34.539851 kubelet[2726]: E0910 05:19:34.538865 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-ntfnx_kube-system(1e701f52-35b0-48d7-9e5d-b37cc64e28ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-ntfnx_kube-system(1e701f52-35b0-48d7-9e5d-b37cc64e28ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bb18297aaf37633014da545a16bdf0f6ac63f651d1f4c6eec3c8c65c16ce254\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ntfnx" podUID="1e701f52-35b0-48d7-9e5d-b37cc64e28ac" Sep 10 05:19:34.542911 containerd[1583]: time="2025-09-10T05:19:34.542466408Z" level=error msg="Failed to destroy network for sandbox \"c276805ee41da2bef4a155885ce091e1b3c80b377c79c78d66b89c1477549aa5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.545728 containerd[1583]: time="2025-09-10T05:19:34.545674161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b879dbc57-7t9v5,Uid:c18d3146-223b-4b5c-9fad-603cd0d8e559,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c276805ee41da2bef4a155885ce091e1b3c80b377c79c78d66b89c1477549aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.546122 kubelet[2726]: E0910 05:19:34.546072 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c276805ee41da2bef4a155885ce091e1b3c80b377c79c78d66b89c1477549aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.546188 
kubelet[2726]: E0910 05:19:34.546162 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c276805ee41da2bef4a155885ce091e1b3c80b377c79c78d66b89c1477549aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b879dbc57-7t9v5" Sep 10 05:19:34.546269 kubelet[2726]: E0910 05:19:34.546186 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c276805ee41da2bef4a155885ce091e1b3c80b377c79c78d66b89c1477549aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b879dbc57-7t9v5" Sep 10 05:19:34.546315 kubelet[2726]: E0910 05:19:34.546257 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b879dbc57-7t9v5_calico-apiserver(c18d3146-223b-4b5c-9fad-603cd0d8e559)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b879dbc57-7t9v5_calico-apiserver(c18d3146-223b-4b5c-9fad-603cd0d8e559)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c276805ee41da2bef4a155885ce091e1b3c80b377c79c78d66b89c1477549aa5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b879dbc57-7t9v5" podUID="c18d3146-223b-4b5c-9fad-603cd0d8e559" Sep 10 05:19:34.554328 containerd[1583]: time="2025-09-10T05:19:34.554278890Z" level=error msg="Failed to destroy network for sandbox \"4f848f2f4959ccecd79c8a6de90ddd20255794fc3b142d09fd0e014e7be90e97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.556045 containerd[1583]: time="2025-09-10T05:19:34.555563513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8674dc7db6-njx5w,Uid:d4c0eba5-dbb0-4862-b63c-73f11b54cf28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f848f2f4959ccecd79c8a6de90ddd20255794fc3b142d09fd0e014e7be90e97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.556141 kubelet[2726]: E0910 05:19:34.555790 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f848f2f4959ccecd79c8a6de90ddd20255794fc3b142d09fd0e014e7be90e97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.556141 kubelet[2726]: E0910 05:19:34.555882 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f848f2f4959ccecd79c8a6de90ddd20255794fc3b142d09fd0e014e7be90e97\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8674dc7db6-njx5w" Sep 10 05:19:34.556141 kubelet[2726]: E0910 05:19:34.555904 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f848f2f4959ccecd79c8a6de90ddd20255794fc3b142d09fd0e014e7be90e97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8674dc7db6-njx5w" Sep 10 05:19:34.556235 kubelet[2726]: E0910 05:19:34.555951 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8674dc7db6-njx5w_calico-system(d4c0eba5-dbb0-4862-b63c-73f11b54cf28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8674dc7db6-njx5w_calico-system(d4c0eba5-dbb0-4862-b63c-73f11b54cf28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f848f2f4959ccecd79c8a6de90ddd20255794fc3b142d09fd0e014e7be90e97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8674dc7db6-njx5w" podUID="d4c0eba5-dbb0-4862-b63c-73f11b54cf28" Sep 10 05:19:34.577677 containerd[1583]: time="2025-09-10T05:19:34.577595769Z" level=error msg="Failed to destroy network for sandbox \"8eb84cf789e3242b819a78aca64379bc2ebfe2dd01752e689bf27af07910fa68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.578897 containerd[1583]: time="2025-09-10T05:19:34.578871464Z" level=error msg="Failed to destroy network for sandbox \"7d72ab177c98012d3d6fb6b805695a26b35815785f453f8e0204af63033e48a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.579737 containerd[1583]: time="2025-09-10T05:19:34.579694790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g798j,Uid:059478a1-bd0b-4735-ac44-be87620e3fa4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb84cf789e3242b819a78aca64379bc2ebfe2dd01752e689bf27af07910fa68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.580352 kubelet[2726]: E0910 05:19:34.580299 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb84cf789e3242b819a78aca64379bc2ebfe2dd01752e689bf27af07910fa68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.580436 kubelet[2726]: E0910 05:19:34.580366 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"8eb84cf789e3242b819a78aca64379bc2ebfe2dd01752e689bf27af07910fa68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-g798j" Sep 10 05:19:34.580436 kubelet[2726]: E0910 05:19:34.580386 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb84cf789e3242b819a78aca64379bc2ebfe2dd01752e689bf27af07910fa68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-g798j" Sep 10 05:19:34.580560 kubelet[2726]: E0910 05:19:34.580434 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-g798j_kube-system(059478a1-bd0b-4735-ac44-be87620e3fa4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-g798j_kube-system(059478a1-bd0b-4735-ac44-be87620e3fa4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8eb84cf789e3242b819a78aca64379bc2ebfe2dd01752e689bf27af07910fa68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-g798j" podUID="059478a1-bd0b-4735-ac44-be87620e3fa4" Sep 10 05:19:34.582030 containerd[1583]: time="2025-09-10T05:19:34.581991884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-678897967b-ffmqs,Uid:d525cd80-8783-41ea-936c-c0e9bf80d085,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d72ab177c98012d3d6fb6b805695a26b35815785f453f8e0204af63033e48a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.582427 kubelet[2726]: E0910 05:19:34.582296 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d72ab177c98012d3d6fb6b805695a26b35815785f453f8e0204af63033e48a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.582427 kubelet[2726]: E0910 05:19:34.582329 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d72ab177c98012d3d6fb6b805695a26b35815785f453f8e0204af63033e48a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-678897967b-ffmqs" Sep 10 05:19:34.582427 kubelet[2726]: E0910 05:19:34.582344 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d72ab177c98012d3d6fb6b805695a26b35815785f453f8e0204af63033e48a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-678897967b-ffmqs" Sep 10 05:19:34.582548 kubelet[2726]: E0910 05:19:34.582381 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-678897967b-ffmqs_calico-system(d525cd80-8783-41ea-936c-c0e9bf80d085)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-678897967b-ffmqs_calico-system(d525cd80-8783-41ea-936c-c0e9bf80d085)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d72ab177c98012d3d6fb6b805695a26b35815785f453f8e0204af63033e48a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-678897967b-ffmqs" podUID="d525cd80-8783-41ea-936c-c0e9bf80d085" Sep 10 05:19:34.587732 containerd[1583]: time="2025-09-10T05:19:34.587642907Z" level=error msg="Failed to destroy network for sandbox \"7650cdb2c7591b85903b71e03c09df26f0a6413f35a6615645b2c9701ba784ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.593582 containerd[1583]: time="2025-09-10T05:19:34.593544038Z" level=error msg="Failed to destroy network for sandbox \"d2f77f336b7a3e77ecf5f179ae6978c4f1e19991c073c4a81914b60ba85b7c50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.610278 containerd[1583]: time="2025-09-10T05:19:34.610210668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-262kf,Uid:b113321a-1b4c-49e8-b708-91fe4d366899,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7650cdb2c7591b85903b71e03c09df26f0a6413f35a6615645b2c9701ba784ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.610525 kubelet[2726]: E0910 05:19:34.610469 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7650cdb2c7591b85903b71e03c09df26f0a6413f35a6615645b2c9701ba784ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.610638 kubelet[2726]: E0910 05:19:34.610554 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7650cdb2c7591b85903b71e03c09df26f0a6413f35a6615645b2c9701ba784ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-262kf" Sep 10 05:19:34.610638 kubelet[2726]: E0910 05:19:34.610575 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7650cdb2c7591b85903b71e03c09df26f0a6413f35a6615645b2c9701ba784ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-262kf" Sep 10 05:19:34.610638 kubelet[2726]: E0910 05:19:34.610620 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-262kf_calico-system(b113321a-1b4c-49e8-b708-91fe4d366899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-262kf_calico-system(b113321a-1b4c-49e8-b708-91fe4d366899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7650cdb2c7591b85903b71e03c09df26f0a6413f35a6615645b2c9701ba784ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-262kf" podUID="b113321a-1b4c-49e8-b708-91fe4d366899" Sep 10 05:19:34.629092 containerd[1583]: time="2025-09-10T05:19:34.628979334Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b879dbc57-7zs56,Uid:04d2b8bc-38c8-44b4-929b-263f46e6af1a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2f77f336b7a3e77ecf5f179ae6978c4f1e19991c073c4a81914b60ba85b7c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.629210 kubelet[2726]: E0910 05:19:34.629116 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2f77f336b7a3e77ecf5f179ae6978c4f1e19991c073c4a81914b60ba85b7c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:34.629210 kubelet[2726]: E0910 05:19:34.629144 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2f77f336b7a3e77ecf5f179ae6978c4f1e19991c073c4a81914b60ba85b7c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b879dbc57-7zs56" Sep 10 05:19:34.629210 kubelet[2726]: E0910 05:19:34.629169 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2f77f336b7a3e77ecf5f179ae6978c4f1e19991c073c4a81914b60ba85b7c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b879dbc57-7zs56" Sep 10 05:19:34.629300 kubelet[2726]: E0910 05:19:34.629192 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b879dbc57-7zs56_calico-apiserver(04d2b8bc-38c8-44b4-929b-263f46e6af1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b879dbc57-7zs56_calico-apiserver(04d2b8bc-38c8-44b4-929b-263f46e6af1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2f77f336b7a3e77ecf5f179ae6978c4f1e19991c073c4a81914b60ba85b7c50\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b879dbc57-7zs56" podUID="04d2b8bc-38c8-44b4-929b-263f46e6af1a" Sep 10 05:19:34.822329 systemd[1]: run-netns-cni\x2d270b1fc2\x2d64af\x2d9a04\x2d8fca\x2d550ec30302ae.mount: Deactivated successfully. Sep 10 05:19:35.144875 containerd[1583]: time="2025-09-10T05:19:35.144829612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 05:19:39.331653 kubelet[2726]: I0910 05:19:39.331201 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 05:19:39.333779 kubelet[2726]: E0910 05:19:39.333749 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:39.916877 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:60188.service - OpenSSH per-connection server daemon (10.0.0.1:60188). Sep 10 05:19:40.014246 sshd[3847]: Accepted publickey for core from 10.0.0.1 port 60188 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:19:40.016465 sshd-session[3847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:19:40.021310 systemd-logind[1563]: New session 8 of user core. Sep 10 05:19:40.034672 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 05:19:40.153242 kubelet[2726]: E0910 05:19:40.153190 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:40.173626 sshd[3850]: Connection closed by 10.0.0.1 port 60188 Sep 10 05:19:40.173918 sshd-session[3847]: pam_unix(sshd:session): session closed for user core Sep 10 05:19:40.177570 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:60188.service: Deactivated successfully. Sep 10 05:19:40.179415 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 05:19:40.180201 systemd-logind[1563]: Session 8 logged out. Waiting for processes to exit. Sep 10 05:19:40.181256 systemd-logind[1563]: Removed session 8. Sep 10 05:19:45.051502 kubelet[2726]: E0910 05:19:45.050609 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:45.053554 containerd[1583]: time="2025-09-10T05:19:45.053352992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g798j,Uid:059478a1-bd0b-4735-ac44-be87620e3fa4,Namespace:kube-system,Attempt:0,}" Sep 10 05:19:45.053862 containerd[1583]: time="2025-09-10T05:19:45.053694382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6vhs9,Uid:4e653afd-d957-40d1-839d-ca0bc8c42646,Namespace:calico-system,Attempt:0,}" Sep 10 05:19:45.189678 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:60192.service - OpenSSH per-connection server daemon (10.0.0.1:60192). Sep 10 05:19:45.577438 sshd[3868]: Accepted publickey for core from 10.0.0.1 port 60192 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:19:45.579646 sshd-session[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:19:45.591185 systemd-logind[1563]: New session 9 of user core. Sep 10 05:19:45.597145 systemd[1]: Started session-9.scope - Session 9 of User core. 
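Every sandbox add/delete in this stretch fails on the same check: the Calico CNI plugin stats /var/lib/calico/nodename, a file calico-node writes only once it is running with /var/lib/calico/ mounted. Below is a minimal sketch of that guard as implied by the error text, not Calico's actual implementation.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// nodenameFile is the marker the error text points at: calico-node writes
// it after starting, and the CNI plugin refuses to set up pod networking
// until it exists.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	if _, err := os.Stat(nodenameFile); errors.Is(err, os.ErrNotExist) {
		fmt.Printf("stat %s: no such file or directory: "+
			"check that the calico/node container is running and has mounted /var/lib/calico/\n",
			nodenameFile)
		return
	}
	fmt.Println("nodename present; sandbox setup can proceed")
}
```

Until that file appears, the kubelet keeps retrying RunPodSandbox for the same pods, which is why the identical CreatePodSandboxError recurs while the calico/node image is still being pulled.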
Sep 10 05:19:45.848600 containerd[1583]: time="2025-09-10T05:19:45.847512346Z" level=error msg="Failed to destroy network for sandbox \"959ec4ac64251512c8abbf971f5e5e256e213ab32b6c9ef42274b4a52d3de1c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:45.851314 systemd[1]: run-netns-cni\x2d2b27b673\x2d302b\x2d7c8d\x2d2117\x2d6dc71d27d961.mount: Deactivated successfully. Sep 10 05:19:45.854030 sshd[3871]: Connection closed by 10.0.0.1 port 60192 Sep 10 05:19:45.855017 sshd-session[3868]: pam_unix(sshd:session): session closed for user core Sep 10 05:19:45.859915 systemd-logind[1563]: Session 9 logged out. Waiting for processes to exit. Sep 10 05:19:45.860549 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:60192.service: Deactivated successfully. Sep 10 05:19:45.863871 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 05:19:45.867572 systemd-logind[1563]: Removed session 9. Sep 10 05:19:45.889510 containerd[1583]: time="2025-09-10T05:19:45.889437123Z" level=error msg="Failed to destroy network for sandbox \"c8921faf438ff7657413cc1e64d94bffcb0939b6c67766b847d787ab8b6ddc4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:45.892176 systemd[1]: run-netns-cni\x2d074baf96\x2d221f\x2d99fa\x2d0db3\x2d671173347398.mount: Deactivated successfully. Sep 10 05:19:46.059050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1246463694.mount: Deactivated successfully. Sep 10 05:19:46.872154 containerd[1583]: time="2025-09-10T05:19:46.872028196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6vhs9,Uid:4e653afd-d957-40d1-839d-ca0bc8c42646,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"959ec4ac64251512c8abbf971f5e5e256e213ab32b6c9ef42274b4a52d3de1c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:46.872870 kubelet[2726]: E0910 05:19:46.872389 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959ec4ac64251512c8abbf971f5e5e256e213ab32b6c9ef42274b4a52d3de1c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:46.872870 kubelet[2726]: E0910 05:19:46.872468 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959ec4ac64251512c8abbf971f5e5e256e213ab32b6c9ef42274b4a52d3de1c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6vhs9" Sep 10 05:19:46.872870 kubelet[2726]: E0910 05:19:46.872525 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959ec4ac64251512c8abbf971f5e5e256e213ab32b6c9ef42274b4a52d3de1c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6vhs9" Sep 10 05:19:46.874294 kubelet[2726]: E0910 05:19:46.874205 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6vhs9_calico-system(4e653afd-d957-40d1-839d-ca0bc8c42646)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6vhs9_calico-system(4e653afd-d957-40d1-839d-ca0bc8c42646)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"959ec4ac64251512c8abbf971f5e5e256e213ab32b6c9ef42274b4a52d3de1c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6vhs9" podUID="4e653afd-d957-40d1-839d-ca0bc8c42646" Sep 10 05:19:47.021313 containerd[1583]: time="2025-09-10T05:19:47.021224208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g798j,Uid:059478a1-bd0b-4735-ac44-be87620e3fa4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8921faf438ff7657413cc1e64d94bffcb0939b6c67766b847d787ab8b6ddc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:47.021618 kubelet[2726]: E0910 05:19:47.021566 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8921faf438ff7657413cc1e64d94bffcb0939b6c67766b847d787ab8b6ddc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:47.021685 kubelet[2726]: E0910 05:19:47.021647 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8921faf438ff7657413cc1e64d94bffcb0939b6c67766b847d787ab8b6ddc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-g798j" Sep 10 05:19:47.021685 kubelet[2726]: E0910 05:19:47.021670 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8921faf438ff7657413cc1e64d94bffcb0939b6c67766b847d787ab8b6ddc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-g798j" Sep 10 05:19:47.021746 kubelet[2726]: E0910 05:19:47.021720 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-g798j_kube-system(059478a1-bd0b-4735-ac44-be87620e3fa4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-g798j_kube-system(059478a1-bd0b-4735-ac44-be87620e3fa4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8921faf438ff7657413cc1e64d94bffcb0939b6c67766b847d787ab8b6ddc4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-g798j" podUID="059478a1-bd0b-4735-ac44-be87620e3fa4" Sep 10 05:19:47.050109 containerd[1583]: time="2025-09-10T05:19:47.050049308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-262kf,Uid:b113321a-1b4c-49e8-b708-91fe4d366899,Namespace:calico-system,Attempt:0,}" Sep 10 05:19:47.050457 containerd[1583]: time="2025-09-10T05:19:47.050432026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b879dbc57-7zs56,Uid:04d2b8bc-38c8-44b4-929b-263f46e6af1a,Namespace:calico-apiserver,Attempt:0,}" Sep 10 05:19:47.314258 containerd[1583]: time="2025-09-10T05:19:47.314181576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:47.315560 containerd[1583]: time="2025-09-10T05:19:47.315529827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 10 05:19:47.316361 containerd[1583]: time="2025-09-10T05:19:47.316297447Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:47.318419 containerd[1583]: time="2025-09-10T05:19:47.318391767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:47.319216 containerd[1583]: time="2025-09-10T05:19:47.319155230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 12.174289159s" Sep 10 05:19:47.319216 containerd[1583]: time="2025-09-10T05:19:47.319186037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 10 05:19:47.336987 containerd[1583]: time="2025-09-10T05:19:47.336945302Z" level=info msg="CreateContainer within sandbox \"55798f02d7aefc31aa66e648f409000da6eef5baa315978401619106972ac777\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 05:19:47.358767 containerd[1583]: time="2025-09-10T05:19:47.357899893Z" level=info msg="Container 9951f972000d66c62c32f1a6521145aadc3eabc97007a37c88e44866e7ddef92: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:19:47.372638 containerd[1583]: time="2025-09-10T05:19:47.372588766Z" level=error msg="Failed to destroy network for sandbox \"c79b3d03107199bb202ae348aaab6540c60e1e99005fb84cd87892df1c3727be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:47.375999 containerd[1583]: time="2025-09-10T05:19:47.375941508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-262kf,Uid:b113321a-1b4c-49e8-b708-91fe4d366899,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c79b3d03107199bb202ae348aaab6540c60e1e99005fb84cd87892df1c3727be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:47.376284 containerd[1583]: time="2025-09-10T05:19:47.376255216Z" level=info msg="CreateContainer within sandbox \"55798f02d7aefc31aa66e648f409000da6eef5baa315978401619106972ac777\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9951f972000d66c62c32f1a6521145aadc3eabc97007a37c88e44866e7ddef92\"" Sep 10 05:19:47.376353 kubelet[2726]: E0910 05:19:47.376269 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c79b3d03107199bb202ae348aaab6540c60e1e99005fb84cd87892df1c3727be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:47.376447 kubelet[2726]: E0910 05:19:47.376343 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c79b3d03107199bb202ae348aaab6540c60e1e99005fb84cd87892df1c3727be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-262kf" Sep 10 05:19:47.376478 kubelet[2726]: E0910 05:19:47.376456 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c79b3d03107199bb202ae348aaab6540c60e1e99005fb84cd87892df1c3727be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-262kf" Sep 10 05:19:47.376534 kubelet[2726]: E0910 05:19:47.376506 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-262kf_calico-system(b113321a-1b4c-49e8-b708-91fe4d366899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-262kf_calico-system(b113321a-1b4c-49e8-b708-91fe4d366899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c79b3d03107199bb202ae348aaab6540c60e1e99005fb84cd87892df1c3727be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-262kf" podUID="b113321a-1b4c-49e8-b708-91fe4d366899" Sep 10 05:19:47.376816 containerd[1583]: time="2025-09-10T05:19:47.376773819Z" level=info msg="StartContainer for \"9951f972000d66c62c32f1a6521145aadc3eabc97007a37c88e44866e7ddef92\"" Sep 10 05:19:47.378196 containerd[1583]: time="2025-09-10T05:19:47.378172183Z" level=info msg="connecting to shim 9951f972000d66c62c32f1a6521145aadc3eabc97007a37c88e44866e7ddef92" address="unix:///run/containerd/s/7cbbae102601ca532cd597be346c529d7bfa7aabc8c54193d318ad2d9d981517" protocol=ttrpc version=3 Sep 10 05:19:47.395972 containerd[1583]: time="2025-09-10T05:19:47.395914997Z" level=error msg="Failed to destroy network for sandbox \"da340549e03a30d8f05f44a4be389315d3eee048ac8c1a8c8414bbb442b2c7c4\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:47.397664 containerd[1583]: time="2025-09-10T05:19:47.397616761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b879dbc57-7zs56,Uid:04d2b8bc-38c8-44b4-929b-263f46e6af1a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"da340549e03a30d8f05f44a4be389315d3eee048ac8c1a8c8414bbb442b2c7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:47.397845 kubelet[2726]: E0910 05:19:47.397809 2726 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da340549e03a30d8f05f44a4be389315d3eee048ac8c1a8c8414bbb442b2c7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 05:19:47.397908 kubelet[2726]: E0910 05:19:47.397870 2726 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da340549e03a30d8f05f44a4be389315d3eee048ac8c1a8c8414bbb442b2c7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b879dbc57-7zs56" Sep 10 05:19:47.397908 kubelet[2726]: E0910 05:19:47.397890 2726 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da340549e03a30d8f05f44a4be389315d3eee048ac8c1a8c8414bbb442b2c7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b879dbc57-7zs56" Sep 10 05:19:47.397998 kubelet[2726]: E0910 05:19:47.397928 2726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b879dbc57-7zs56_calico-apiserver(04d2b8bc-38c8-44b4-929b-263f46e6af1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b879dbc57-7zs56_calico-apiserver(04d2b8bc-38c8-44b4-929b-263f46e6af1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da340549e03a30d8f05f44a4be389315d3eee048ac8c1a8c8414bbb442b2c7c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b879dbc57-7zs56" podUID="04d2b8bc-38c8-44b4-929b-263f46e6af1a" Sep 10 05:19:47.407651 systemd[1]: Started cri-containerd-9951f972000d66c62c32f1a6521145aadc3eabc97007a37c88e44866e7ddef92.scope - libcontainer container 9951f972000d66c62c32f1a6521145aadc3eabc97007a37c88e44866e7ddef92. Sep 10 05:19:47.459167 containerd[1583]: time="2025-09-10T05:19:47.459123377Z" level=info msg="StartContainer for \"9951f972000d66c62c32f1a6521145aadc3eabc97007a37c88e44866e7ddef92\" returns successfully" Sep 10 05:19:47.529522 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Sep 10 05:19:47.530379 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 10 05:19:47.649993 kubelet[2726]: I0910 05:19:47.649842 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d525cd80-8783-41ea-936c-c0e9bf80d085-whisker-backend-key-pair\") pod \"d525cd80-8783-41ea-936c-c0e9bf80d085\" (UID: \"d525cd80-8783-41ea-936c-c0e9bf80d085\") " Sep 10 05:19:47.649993 kubelet[2726]: I0910 05:19:47.649883 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d525cd80-8783-41ea-936c-c0e9bf80d085-whisker-ca-bundle\") pod \"d525cd80-8783-41ea-936c-c0e9bf80d085\" (UID: \"d525cd80-8783-41ea-936c-c0e9bf80d085\") " Sep 10 05:19:47.649993 kubelet[2726]: I0910 05:19:47.649916 2726 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwpmf\" (UniqueName: \"kubernetes.io/projected/d525cd80-8783-41ea-936c-c0e9bf80d085-kube-api-access-vwpmf\") pod \"d525cd80-8783-41ea-936c-c0e9bf80d085\" (UID: \"d525cd80-8783-41ea-936c-c0e9bf80d085\") " Sep 10 05:19:47.650951 kubelet[2726]: I0910 05:19:47.650918 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d525cd80-8783-41ea-936c-c0e9bf80d085-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d525cd80-8783-41ea-936c-c0e9bf80d085" (UID: "d525cd80-8783-41ea-936c-c0e9bf80d085"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 05:19:47.654350 kubelet[2726]: I0910 05:19:47.654118 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d525cd80-8783-41ea-936c-c0e9bf80d085-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d525cd80-8783-41ea-936c-c0e9bf80d085" (UID: "d525cd80-8783-41ea-936c-c0e9bf80d085"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 05:19:47.654502 kubelet[2726]: I0910 05:19:47.654426 2726 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d525cd80-8783-41ea-936c-c0e9bf80d085-kube-api-access-vwpmf" (OuterVolumeSpecName: "kube-api-access-vwpmf") pod "d525cd80-8783-41ea-936c-c0e9bf80d085" (UID: "d525cd80-8783-41ea-936c-c0e9bf80d085"). InnerVolumeSpecName "kube-api-access-vwpmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 05:19:47.750885 kubelet[2726]: I0910 05:19:47.750829 2726 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwpmf\" (UniqueName: \"kubernetes.io/projected/d525cd80-8783-41ea-936c-c0e9bf80d085-kube-api-access-vwpmf\") on node \"localhost\" DevicePath \"\"" Sep 10 05:19:47.750885 kubelet[2726]: I0910 05:19:47.750868 2726 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d525cd80-8783-41ea-936c-c0e9bf80d085-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 05:19:47.750885 kubelet[2726]: I0910 05:19:47.750879 2726 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d525cd80-8783-41ea-936c-c0e9bf80d085-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 05:19:48.305133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339327292.mount: Deactivated successfully. Sep 10 05:19:48.305235 systemd[1]: run-netns-cni\x2d5f4d2f8f\x2db73f\x2d099a\x2d9c55\x2d08a00cc6ab91.mount: Deactivated successfully. Sep 10 05:19:48.305311 systemd[1]: run-netns-cni\x2d99f908f7\x2d570b\x2d6449\x2d6dc0\x2d1079af16b6df.mount: Deactivated successfully. Sep 10 05:19:48.305384 systemd[1]: var-lib-kubelet-pods-d525cd80\x2d8783\x2d41ea\x2d936c\x2dc0e9bf80d085-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvwpmf.mount: Deactivated successfully. Sep 10 05:19:48.305465 systemd[1]: var-lib-kubelet-pods-d525cd80\x2d8783\x2d41ea\x2d936c\x2dc0e9bf80d085-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 10 05:19:48.411118 systemd[1]: Removed slice kubepods-besteffort-podd525cd80_8783_41ea_936c_c0e9bf80d085.slice - libcontainer container kubepods-besteffort-podd525cd80_8783_41ea_936c_c0e9bf80d085.slice. Sep 10 05:19:48.583942 kubelet[2726]: I0910 05:19:48.583775 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-96fmq" podStartSLOduration=2.177106279 podStartE2EDuration="31.583757717s" podCreationTimestamp="2025-09-10 05:19:17 +0000 UTC" firstStartedPulling="2025-09-10 05:19:17.913620867 +0000 UTC m=+18.982428910" lastFinishedPulling="2025-09-10 05:19:47.320272306 +0000 UTC m=+48.389080348" observedRunningTime="2025-09-10 05:19:48.5833488 +0000 UTC m=+49.652156852" watchObservedRunningTime="2025-09-10 05:19:48.583757717 +0000 UTC m=+49.652565759" Sep 10 05:19:48.593938 containerd[1583]: time="2025-09-10T05:19:48.593878117Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9951f972000d66c62c32f1a6521145aadc3eabc97007a37c88e44866e7ddef92\" id:\"b9c8c3c95d5b3044073e3f359a622f94b3adb7ff618493dc08670f7bf32b6adc\" pid:4091 exit_status:1 exited_at:{seconds:1757481588 nanos:593508253}" Sep 10 05:19:48.945861 systemd[1]: Created slice kubepods-besteffort-podf8364e86_7942_4db4_9de8_0f2f1f0e65df.slice - libcontainer container kubepods-besteffort-podf8364e86_7942_4db4_9de8_0f2f1f0e65df.slice. 
Sep 10 05:19:48.959527 kubelet[2726]: I0910 05:19:48.959423 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsg78\" (UniqueName: \"kubernetes.io/projected/f8364e86-7942-4db4-9de8-0f2f1f0e65df-kube-api-access-dsg78\") pod \"whisker-5d74897c75-44fkv\" (UID: \"f8364e86-7942-4db4-9de8-0f2f1f0e65df\") " pod="calico-system/whisker-5d74897c75-44fkv" Sep 10 05:19:48.959748 kubelet[2726]: I0910 05:19:48.959724 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f8364e86-7942-4db4-9de8-0f2f1f0e65df-whisker-backend-key-pair\") pod \"whisker-5d74897c75-44fkv\" (UID: \"f8364e86-7942-4db4-9de8-0f2f1f0e65df\") " pod="calico-system/whisker-5d74897c75-44fkv" Sep 10 05:19:48.960193 kubelet[2726]: I0910 05:19:48.960168 2726 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8364e86-7942-4db4-9de8-0f2f1f0e65df-whisker-ca-bundle\") pod \"whisker-5d74897c75-44fkv\" (UID: \"f8364e86-7942-4db4-9de8-0f2f1f0e65df\") " pod="calico-system/whisker-5d74897c75-44fkv" Sep 10 05:19:49.052088 containerd[1583]: time="2025-09-10T05:19:49.052001920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b879dbc57-7t9v5,Uid:c18d3146-223b-4b5c-9fad-603cd0d8e559,Namespace:calico-apiserver,Attempt:0,}" Sep 10 05:19:49.054108 kubelet[2726]: I0910 05:19:49.053662 2726 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d525cd80-8783-41ea-936c-c0e9bf80d085" path="/var/lib/kubelet/pods/d525cd80-8783-41ea-936c-c0e9bf80d085/volumes" Sep 10 05:19:49.054589 containerd[1583]: time="2025-09-10T05:19:49.054476354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8674dc7db6-njx5w,Uid:d4c0eba5-dbb0-4862-b63c-73f11b54cf28,Namespace:calico-system,Attempt:0,}" Sep 10 05:19:49.256815 containerd[1583]: time="2025-09-10T05:19:49.256754947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d74897c75-44fkv,Uid:f8364e86-7942-4db4-9de8-0f2f1f0e65df,Namespace:calico-system,Attempt:0,}" Sep 10 05:19:49.377764 systemd-networkd[1509]: cali7f0dc75bfac: Link UP Sep 10 05:19:49.377998 systemd-networkd[1509]: cali7f0dc75bfac: Gained carrier Sep 10 05:19:49.391785 containerd[1583]: 2025-09-10 05:19:49.144 [INFO][4225] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b879dbc57--7t9v5-eth0 calico-apiserver-5b879dbc57- calico-apiserver c18d3146-223b-4b5c-9fad-603cd0d8e559 826 0 2025-09-10 05:19:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b879dbc57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b879dbc57-7t9v5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7f0dc75bfac [] [] }} ContainerID="fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7t9v5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7t9v5-" Sep 10 05:19:49.391785 containerd[1583]: 2025-09-10 05:19:49.145 [INFO][4225] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7t9v5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7t9v5-eth0" Sep 10 05:19:49.391785 containerd[1583]: 2025-09-10 05:19:49.311 [INFO][4265] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" HandleID="k8s-pod-network.fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" Workload="localhost-k8s-calico--apiserver--5b879dbc57--7t9v5-eth0" Sep 10 05:19:49.392069 containerd[1583]: 2025-09-10 05:19:49.313 [INFO][4265] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" HandleID="k8s-pod-network.fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" Workload="localhost-k8s-calico--apiserver--5b879dbc57--7t9v5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bd5b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b879dbc57-7t9v5", "timestamp":"2025-09-10 05:19:49.311265858 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 05:19:49.392069 containerd[1583]: 2025-09-10 05:19:49.313 [INFO][4265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 05:19:49.392069 containerd[1583]: 2025-09-10 05:19:49.315 [INFO][4265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 05:19:49.392069 containerd[1583]: 2025-09-10 05:19:49.316 [INFO][4265] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 05:19:49.392069 containerd[1583]: 2025-09-10 05:19:49.329 [INFO][4265] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" host="localhost" Sep 10 05:19:49.392069 containerd[1583]: 2025-09-10 05:19:49.337 [INFO][4265] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 05:19:49.392069 containerd[1583]: 2025-09-10 05:19:49.341 [INFO][4265] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 05:19:49.392069 containerd[1583]: 2025-09-10 05:19:49.344 [INFO][4265] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 05:19:49.392069 containerd[1583]: 2025-09-10 05:19:49.349 [INFO][4265] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 05:19:49.392069 containerd[1583]: 2025-09-10 05:19:49.349 [INFO][4265] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" host="localhost" Sep 10 05:19:49.392380 containerd[1583]: 2025-09-10 05:19:49.352 [INFO][4265] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec Sep 10 05:19:49.392380 containerd[1583]: 2025-09-10 05:19:49.356 [INFO][4265] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" host="localhost" Sep 10 05:19:49.392380 containerd[1583]: 2025-09-10 
05:19:49.362 [INFO][4265] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" host="localhost" Sep 10 05:19:49.392380 containerd[1583]: 2025-09-10 05:19:49.362 [INFO][4265] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" host="localhost" Sep 10 05:19:49.392380 containerd[1583]: 2025-09-10 05:19:49.362 [INFO][4265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 05:19:49.392380 containerd[1583]: 2025-09-10 05:19:49.362 [INFO][4265] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" HandleID="k8s-pod-network.fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" Workload="localhost-k8s-calico--apiserver--5b879dbc57--7t9v5-eth0" Sep 10 05:19:49.392567 containerd[1583]: 2025-09-10 05:19:49.367 [INFO][4225] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7t9v5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7t9v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b879dbc57--7t9v5-eth0", GenerateName:"calico-apiserver-5b879dbc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"c18d3146-223b-4b5c-9fad-603cd0d8e559", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b879dbc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b879dbc57-7t9v5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f0dc75bfac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:19:49.392642 containerd[1583]: 2025-09-10 05:19:49.367 [INFO][4225] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7t9v5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7t9v5-eth0" Sep 10 05:19:49.392642 containerd[1583]: 2025-09-10 05:19:49.368 [INFO][4225] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f0dc75bfac ContainerID="fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7t9v5" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7t9v5-eth0" Sep 10 05:19:49.392642 containerd[1583]: 2025-09-10 05:19:49.377 [INFO][4225] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7t9v5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7t9v5-eth0" Sep 10 05:19:49.392737 containerd[1583]: 2025-09-10 05:19:49.377 [INFO][4225] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7t9v5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7t9v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b879dbc57--7t9v5-eth0", GenerateName:"calico-apiserver-5b879dbc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"c18d3146-223b-4b5c-9fad-603cd0d8e559", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b879dbc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec", Pod:"calico-apiserver-5b879dbc57-7t9v5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f0dc75bfac", MAC:"d2:56:71:40:e6:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:19:49.392816 containerd[1583]: 2025-09-10 05:19:49.388 [INFO][4225] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7t9v5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7t9v5-eth0" Sep 10 05:19:49.499977 systemd-networkd[1509]: cali1c95bcb612e: Link UP Sep 10 05:19:49.502081 systemd-networkd[1509]: cali1c95bcb612e: Gained carrier Sep 10 05:19:49.527574 containerd[1583]: time="2025-09-10T05:19:49.526851681Z" level=info msg="connecting to shim fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec" address="unix:///run/containerd/s/4cc265ca7ceb1fd3196d4eaa5ce929008f8b4a5c88f3a6a169c84bb95efb0fdb" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:19:49.559446 containerd[1583]: time="2025-09-10T05:19:49.559385095Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9951f972000d66c62c32f1a6521145aadc3eabc97007a37c88e44866e7ddef92\" id:\"feeaad6216734f3ed9c20edf4bcab6012d8065cf089f13d14454a99bf803843d\" pid:4323 exit_status:1 
exited_at:{seconds:1757481589 nanos:556813700}" Sep 10 05:19:49.563729 systemd[1]: Started cri-containerd-fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec.scope - libcontainer container fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec. Sep 10 05:19:49.566688 systemd-networkd[1509]: vxlan.calico: Link UP Sep 10 05:19:49.566711 systemd-networkd[1509]: vxlan.calico: Gained carrier Sep 10 05:19:49.587124 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 05:19:49.801160 containerd[1583]: time="2025-09-10T05:19:49.801002882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b879dbc57-7t9v5,Uid:c18d3146-223b-4b5c-9fad-603cd0d8e559,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec\"" Sep 10 05:19:49.802092 containerd[1583]: 2025-09-10 05:19:49.144 [INFO][4226] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8674dc7db6--njx5w-eth0 calico-kube-controllers-8674dc7db6- calico-system d4c0eba5-dbb0-4862-b63c-73f11b54cf28 822 0 2025-09-10 05:19:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8674dc7db6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8674dc7db6-njx5w eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1c95bcb612e [] [] }} ContainerID="4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" Namespace="calico-system" Pod="calico-kube-controllers-8674dc7db6-njx5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8674dc7db6--njx5w-" Sep 10 05:19:49.802092 containerd[1583]: 2025-09-10 05:19:49.145 [INFO][4226] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" Namespace="calico-system" Pod="calico-kube-controllers-8674dc7db6-njx5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8674dc7db6--njx5w-eth0" Sep 10 05:19:49.802092 containerd[1583]: 2025-09-10 05:19:49.316 [INFO][4263] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" HandleID="k8s-pod-network.4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" Workload="localhost-k8s-calico--kube--controllers--8674dc7db6--njx5w-eth0" Sep 10 05:19:49.802846 containerd[1583]: 2025-09-10 05:19:49.316 [INFO][4263] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" HandleID="k8s-pod-network.4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" Workload="localhost-k8s-calico--kube--controllers--8674dc7db6--njx5w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e1e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8674dc7db6-njx5w", "timestamp":"2025-09-10 05:19:49.316024858 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} 
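Every failed RunPodSandbox earlier in this log stops at the same stat of /var/lib/calico/nodename. Once the calico-node container started above (container 9951f972...), the adds here begin to succeed, which is consistent with calico-node having written that file. A minimal sketch of the check the error message describes, assuming the standard Calico host path:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenamePath is the file the error message points at: the CNI plugin reads it
// to learn which Calico node object it belongs to, and calico-node writes it at
// startup (hence "check that the calico/node container is running and has
// mounted /var/lib/calico/").
const nodenamePath = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenamePath)
	if err != nil {
		// Before calico-node came up, every sandbox add/delete in this log
		// failed at exactly this point.
		fmt.Println("would fail like the log:", err)
		return
	}
	fmt.Println("Calico node name:", strings.TrimSpace(string(data)))
}
```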
Sep 10 05:19:49.802846 containerd[1583]: 2025-09-10 05:19:49.316 [INFO][4263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 05:19:49.802846 containerd[1583]: 2025-09-10 05:19:49.362 [INFO][4263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 05:19:49.802846 containerd[1583]: 2025-09-10 05:19:49.362 [INFO][4263] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 05:19:49.802846 containerd[1583]: 2025-09-10 05:19:49.430 [INFO][4263] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" host="localhost" Sep 10 05:19:49.802846 containerd[1583]: 2025-09-10 05:19:49.439 [INFO][4263] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 05:19:49.802846 containerd[1583]: 2025-09-10 05:19:49.445 [INFO][4263] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 05:19:49.802846 containerd[1583]: 2025-09-10 05:19:49.448 [INFO][4263] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 05:19:49.802846 containerd[1583]: 2025-09-10 05:19:49.451 [INFO][4263] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 05:19:49.802846 containerd[1583]: 2025-09-10 05:19:49.452 [INFO][4263] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" host="localhost" Sep 10 05:19:49.803792 containerd[1583]: 2025-09-10 05:19:49.455 [INFO][4263] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a Sep 10 05:19:49.803792 containerd[1583]: 2025-09-10 05:19:49.468 [INFO][4263] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" host="localhost" Sep 10 05:19:49.803792 containerd[1583]: 2025-09-10 05:19:49.477 [INFO][4263] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" host="localhost" Sep 10 05:19:49.803792 containerd[1583]: 2025-09-10 05:19:49.478 [INFO][4263] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" host="localhost" Sep 10 05:19:49.803792 containerd[1583]: 2025-09-10 05:19:49.479 [INFO][4263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
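The IPAM runs above hand out consecutive addresses from the host's affinity block 192.168.88.128/26 (.129 for the apiserver pod, .130 for kube-controllers, and more below). The toy allocator that follows is purely illustrative; Calico's real IPAM persists allocations in its datastore and does the "Writing block in order to claim IPs" step behind the host-wide lock shown in these entries:

```go
package main

import (
	"fmt"
	"net/netip"
)

// blockAllocator is a toy model of handing out addresses from the affinity
// block seen in the IPAM logs, just to show why pods get consecutive IPs.
type blockAllocator struct {
	prefix netip.Prefix
	used   map[netip.Addr]bool
}

func (b *blockAllocator) assign() (netip.Addr, bool) {
	// Start after the network address and return the first unused address.
	for a := b.prefix.Addr().Next(); b.prefix.Contains(a); a = a.Next() {
		if !b.used[a] {
			b.used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	alloc := &blockAllocator{
		prefix: netip.MustParsePrefix("192.168.88.128/26"),
		used:   map[netip.Addr]bool{},
	}
	pods := []string{"calico-apiserver-5b879dbc57-7t9v5", "calico-kube-controllers-8674dc7db6-njx5w"}
	for _, pod := range pods {
		if ip, ok := alloc.assign(); ok {
			fmt.Printf("%s -> %s/26\n", pod, ip) // .129 then .130, matching the log
		}
	}
}
```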
Sep 10 05:19:49.803792 containerd[1583]: 2025-09-10 05:19:49.479 [INFO][4263] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" HandleID="k8s-pod-network.4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" Workload="localhost-k8s-calico--kube--controllers--8674dc7db6--njx5w-eth0" Sep 10 05:19:49.804152 containerd[1583]: 2025-09-10 05:19:49.493 [INFO][4226] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" Namespace="calico-system" Pod="calico-kube-controllers-8674dc7db6-njx5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8674dc7db6--njx5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8674dc7db6--njx5w-eth0", GenerateName:"calico-kube-controllers-8674dc7db6-", Namespace:"calico-system", SelfLink:"", UID:"d4c0eba5-dbb0-4862-b63c-73f11b54cf28", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8674dc7db6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8674dc7db6-njx5w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c95bcb612e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:19:49.804349 containerd[1583]: 2025-09-10 05:19:49.494 [INFO][4226] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" Namespace="calico-system" Pod="calico-kube-controllers-8674dc7db6-njx5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8674dc7db6--njx5w-eth0" Sep 10 05:19:49.804349 containerd[1583]: 2025-09-10 05:19:49.494 [INFO][4226] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c95bcb612e ContainerID="4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" Namespace="calico-system" Pod="calico-kube-controllers-8674dc7db6-njx5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8674dc7db6--njx5w-eth0" Sep 10 05:19:49.804349 containerd[1583]: 2025-09-10 05:19:49.504 [INFO][4226] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" Namespace="calico-system" Pod="calico-kube-controllers-8674dc7db6-njx5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8674dc7db6--njx5w-eth0" Sep 10 05:19:49.804541 containerd[1583]: 2025-09-10 05:19:49.506 [INFO][4226] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" Namespace="calico-system" Pod="calico-kube-controllers-8674dc7db6-njx5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8674dc7db6--njx5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8674dc7db6--njx5w-eth0", GenerateName:"calico-kube-controllers-8674dc7db6-", Namespace:"calico-system", SelfLink:"", UID:"d4c0eba5-dbb0-4862-b63c-73f11b54cf28", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8674dc7db6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a", Pod:"calico-kube-controllers-8674dc7db6-njx5w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c95bcb612e", MAC:"12:36:a3:cb:7a:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:19:49.804750 containerd[1583]: 2025-09-10 05:19:49.797 [INFO][4226] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" Namespace="calico-system" Pod="calico-kube-controllers-8674dc7db6-njx5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8674dc7db6--njx5w-eth0" Sep 10 05:19:49.805427 containerd[1583]: time="2025-09-10T05:19:49.805399342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 05:19:50.049652 kubelet[2726]: E0910 05:19:50.049610 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:50.050167 containerd[1583]: time="2025-09-10T05:19:50.049977882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ntfnx,Uid:1e701f52-35b0-48d7-9e5d-b37cc64e28ac,Namespace:kube-system,Attempt:0,}" Sep 10 05:19:50.370164 systemd-networkd[1509]: calibc212e60463: Link UP Sep 10 05:19:50.374007 systemd-networkd[1509]: calibc212e60463: Gained carrier Sep 10 05:19:50.511598 containerd[1583]: 2025-09-10 05:19:49.324 [INFO][4280] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5d74897c75--44fkv-eth0 whisker-5d74897c75- calico-system f8364e86-7942-4db4-9de8-0f2f1f0e65df 967 0 2025-09-10 05:19:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d74897c75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5d74897c75-44fkv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibc212e60463 [] [] }} ContainerID="94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" Namespace="calico-system" Pod="whisker-5d74897c75-44fkv" WorkloadEndpoint="localhost-k8s-whisker--5d74897c75--44fkv-" Sep 10 05:19:50.511598 containerd[1583]: 2025-09-10 05:19:49.324 [INFO][4280] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" Namespace="calico-system" Pod="whisker-5d74897c75-44fkv" WorkloadEndpoint="localhost-k8s-whisker--5d74897c75--44fkv-eth0" Sep 10 05:19:50.511598 containerd[1583]: 2025-09-10 05:19:49.363 [INFO][4296] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" HandleID="k8s-pod-network.94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" Workload="localhost-k8s-whisker--5d74897c75--44fkv-eth0" Sep 10 05:19:50.511870 containerd[1583]: 2025-09-10 05:19:49.363 [INFO][4296] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" HandleID="k8s-pod-network.94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" Workload="localhost-k8s-whisker--5d74897c75--44fkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135520), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5d74897c75-44fkv", "timestamp":"2025-09-10 05:19:49.363824506 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 05:19:50.511870 containerd[1583]: 2025-09-10 05:19:49.364 [INFO][4296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 05:19:50.511870 containerd[1583]: 2025-09-10 05:19:49.478 [INFO][4296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 05:19:50.511870 containerd[1583]: 2025-09-10 05:19:49.478 [INFO][4296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 05:19:50.511870 containerd[1583]: 2025-09-10 05:19:49.798 [INFO][4296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" host="localhost" Sep 10 05:19:50.511870 containerd[1583]: 2025-09-10 05:19:49.885 [INFO][4296] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 05:19:50.511870 containerd[1583]: 2025-09-10 05:19:49.891 [INFO][4296] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 05:19:50.511870 containerd[1583]: 2025-09-10 05:19:49.892 [INFO][4296] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 05:19:50.511870 containerd[1583]: 2025-09-10 05:19:50.006 [INFO][4296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 05:19:50.511870 containerd[1583]: 2025-09-10 05:19:50.007 [INFO][4296] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" host="localhost" Sep 10 05:19:50.512197 containerd[1583]: 2025-09-10 05:19:50.008 [INFO][4296] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65 Sep 10 05:19:50.512197 containerd[1583]: 2025-09-10 05:19:50.082 [INFO][4296] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" host="localhost" Sep 10 05:19:50.512197 containerd[1583]: 2025-09-10 05:19:50.359 [INFO][4296] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" host="localhost" Sep 10 05:19:50.512197 containerd[1583]: 2025-09-10 05:19:50.359 [INFO][4296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" host="localhost" Sep 10 05:19:50.512197 containerd[1583]: 2025-09-10 05:19:50.359 [INFO][4296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
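As an aside, the kubelet dns.go warning earlier ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") reflects the classic three-nameserver cap of the resolver: whatever extra servers the node's resolv.conf listed were dropped. A toy version of that trimming follows; the fourth address is made up, since the log does not say which entries were omitted:

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the three-nameserver limit the kubelet warning is
// about; anything past it is omitted from the applied configuration.
const maxNameservers = 3

func applyLimit(ns []string) (applied []string, truncated bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// The first three entries are the ones the log says were applied; the
	// fourth is a hypothetical extra standing in for whatever was dropped.
	nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"}
	applied, truncated := applyLimit(nameservers)
	if truncated {
		fmt.Println("Nameserver limits were exceeded, some nameservers have been omitted")
	}
	fmt.Println("the applied nameserver line is:", strings.Join(applied, " "))
}
```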
Sep 10 05:19:50.512197 containerd[1583]: 2025-09-10 05:19:50.359 [INFO][4296] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" HandleID="k8s-pod-network.94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" Workload="localhost-k8s-whisker--5d74897c75--44fkv-eth0" Sep 10 05:19:50.512370 containerd[1583]: 2025-09-10 05:19:50.363 [INFO][4280] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" Namespace="calico-system" Pod="whisker-5d74897c75-44fkv" WorkloadEndpoint="localhost-k8s-whisker--5d74897c75--44fkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d74897c75--44fkv-eth0", GenerateName:"whisker-5d74897c75-", Namespace:"calico-system", SelfLink:"", UID:"f8364e86-7942-4db4-9de8-0f2f1f0e65df", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d74897c75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5d74897c75-44fkv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibc212e60463", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:19:50.512370 containerd[1583]: 2025-09-10 05:19:50.363 [INFO][4280] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" Namespace="calico-system" Pod="whisker-5d74897c75-44fkv" WorkloadEndpoint="localhost-k8s-whisker--5d74897c75--44fkv-eth0" Sep 10 05:19:50.512441 containerd[1583]: 2025-09-10 05:19:50.364 [INFO][4280] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc212e60463 ContainerID="94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" Namespace="calico-system" Pod="whisker-5d74897c75-44fkv" WorkloadEndpoint="localhost-k8s-whisker--5d74897c75--44fkv-eth0" Sep 10 05:19:50.512441 containerd[1583]: 2025-09-10 05:19:50.375 [INFO][4280] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" Namespace="calico-system" Pod="whisker-5d74897c75-44fkv" WorkloadEndpoint="localhost-k8s-whisker--5d74897c75--44fkv-eth0" Sep 10 05:19:50.512560 containerd[1583]: 2025-09-10 05:19:50.377 [INFO][4280] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" Namespace="calico-system" Pod="whisker-5d74897c75-44fkv" WorkloadEndpoint="localhost-k8s-whisker--5d74897c75--44fkv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d74897c75--44fkv-eth0", GenerateName:"whisker-5d74897c75-", Namespace:"calico-system", SelfLink:"", UID:"f8364e86-7942-4db4-9de8-0f2f1f0e65df", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d74897c75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65", Pod:"whisker-5d74897c75-44fkv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibc212e60463", MAC:"9a:28:af:20:26:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:19:50.512609 containerd[1583]: 2025-09-10 05:19:50.507 [INFO][4280] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" Namespace="calico-system" Pod="whisker-5d74897c75-44fkv" WorkloadEndpoint="localhost-k8s-whisker--5d74897c75--44fkv-eth0" Sep 10 05:19:50.698702 systemd-networkd[1509]: cali1c95bcb612e: Gained IPv6LL Sep 10 05:19:50.762706 systemd-networkd[1509]: cali7f0dc75bfac: Gained IPv6LL Sep 10 05:19:50.873519 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:37848.service - OpenSSH per-connection server daemon (10.0.0.1:37848). Sep 10 05:19:50.970061 sshd[4482]: Accepted publickey for core from 10.0.0.1 port 37848 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:19:50.972729 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:19:50.978888 systemd-logind[1563]: New session 10 of user core. Sep 10 05:19:50.984624 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 10 05:19:51.195633 systemd-networkd[1509]: cali9daa390c672: Link UP Sep 10 05:19:51.195848 systemd-networkd[1509]: cali9daa390c672: Gained carrier Sep 10 05:19:51.594749 systemd-networkd[1509]: vxlan.calico: Gained IPv6LL Sep 10 05:19:51.807226 containerd[1583]: 2025-09-10 05:19:50.909 [INFO][4484] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--ntfnx-eth0 coredns-7c65d6cfc9- kube-system 1e701f52-35b0-48d7-9e5d-b37cc64e28ac 817 0 2025-09-10 05:19:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-ntfnx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9daa390c672 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ntfnx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ntfnx-" Sep 10 05:19:51.807226 containerd[1583]: 2025-09-10 05:19:50.910 [INFO][4484] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ntfnx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ntfnx-eth0" Sep 10 05:19:51.807226 containerd[1583]: 2025-09-10 05:19:50.958 [INFO][4498] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" HandleID="k8s-pod-network.aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" Workload="localhost-k8s-coredns--7c65d6cfc9--ntfnx-eth0" Sep 10 05:19:51.807739 containerd[1583]: 2025-09-10 05:19:50.959 [INFO][4498] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" HandleID="k8s-pod-network.aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" Workload="localhost-k8s-coredns--7c65d6cfc9--ntfnx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003afad0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-ntfnx", "timestamp":"2025-09-10 05:19:50.958366012 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 05:19:51.807739 containerd[1583]: 2025-09-10 05:19:50.959 [INFO][4498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 05:19:51.807739 containerd[1583]: 2025-09-10 05:19:50.959 [INFO][4498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 05:19:51.807739 containerd[1583]: 2025-09-10 05:19:50.959 [INFO][4498] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 05:19:51.807739 containerd[1583]: 2025-09-10 05:19:50.973 [INFO][4498] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" host="localhost" Sep 10 05:19:51.807739 containerd[1583]: 2025-09-10 05:19:50.978 [INFO][4498] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 05:19:51.807739 containerd[1583]: 2025-09-10 05:19:50.982 [INFO][4498] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 05:19:51.807739 containerd[1583]: 2025-09-10 05:19:50.985 [INFO][4498] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 05:19:51.807739 containerd[1583]: 2025-09-10 05:19:50.988 [INFO][4498] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 05:19:51.807739 containerd[1583]: 2025-09-10 05:19:50.988 [INFO][4498] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" host="localhost" Sep 10 05:19:51.807949 containerd[1583]: 2025-09-10 05:19:50.990 [INFO][4498] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6 Sep 10 05:19:51.807949 containerd[1583]: 2025-09-10 05:19:51.016 [INFO][4498] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" host="localhost" Sep 10 05:19:51.807949 containerd[1583]: 2025-09-10 05:19:51.189 [INFO][4498] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" host="localhost" Sep 10 05:19:51.807949 containerd[1583]: 2025-09-10 05:19:51.189 [INFO][4498] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" host="localhost" Sep 10 05:19:51.807949 containerd[1583]: 2025-09-10 05:19:51.189 [INFO][4498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 05:19:51.807949 containerd[1583]: 2025-09-10 05:19:51.189 [INFO][4498] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" HandleID="k8s-pod-network.aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" Workload="localhost-k8s-coredns--7c65d6cfc9--ntfnx-eth0" Sep 10 05:19:51.808069 containerd[1583]: 2025-09-10 05:19:51.192 [INFO][4484] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ntfnx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ntfnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--ntfnx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1e701f52-35b0-48d7-9e5d-b37cc64e28ac", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-ntfnx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9daa390c672", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:19:51.808137 containerd[1583]: 2025-09-10 05:19:51.192 [INFO][4484] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ntfnx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ntfnx-eth0" Sep 10 05:19:51.808137 containerd[1583]: 2025-09-10 05:19:51.192 [INFO][4484] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9daa390c672 ContainerID="aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ntfnx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ntfnx-eth0" Sep 10 05:19:51.808137 containerd[1583]: 2025-09-10 05:19:51.195 [INFO][4484] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ntfnx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ntfnx-eth0" Sep 10 05:19:51.808216 
containerd[1583]: 2025-09-10 05:19:51.195 [INFO][4484] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ntfnx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ntfnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--ntfnx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1e701f52-35b0-48d7-9e5d-b37cc64e28ac", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6", Pod:"coredns-7c65d6cfc9-ntfnx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9daa390c672", MAC:"96:0d:bd:82:9c:8c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:19:51.808216 containerd[1583]: 2025-09-10 05:19:51.802 [INFO][4484] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ntfnx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ntfnx-eth0" Sep 10 05:19:51.811321 sshd[4508]: Connection closed by 10.0.0.1 port 37848 Sep 10 05:19:51.809906 sshd-session[4482]: pam_unix(sshd:session): session closed for user core Sep 10 05:19:51.814448 systemd-logind[1563]: Session 10 logged out. Waiting for processes to exit. Sep 10 05:19:51.814730 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:37848.service: Deactivated successfully. Sep 10 05:19:51.816910 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 05:19:51.818714 systemd-logind[1563]: Removed session 10. 
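Each Calico IPAM sequence in this log has the same shape: acquire the host-wide IPAM lock, confirm the host's affinity to the 192.168.88.128/26 block, claim the next free address, write the handle and the block back, then release the lock. Below is a simplified Go model of that claim step, assuming a single /26 block tracked as a bitmap of ordinals; it sketches the behaviour the ipam/ipam.go lines describe, not the libcalico-go implementation.

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// block models an affine IPAM block such as 192.168.88.128/26: a base CIDR
// plus a used/free flag per ordinal (64 addresses for a /26).
type block struct {
	cidr *net.IPNet
	used []bool // index = ordinal within the block
}

var hostWideLock sync.Mutex // stands in for the "host-wide IPAM lock" in the logs

// assign claims the next free ordinal and returns it as a /32, mirroring the
// "Attempting to assign 1 addresses from block" -> "Successfully claimed IPs" steps.
func assign(b *block, handle string) (string, error) {
	hostWideLock.Lock()
	defer hostWideLock.Unlock() // "Released host-wide IPAM lock."

	base := b.cidr.IP.To4()
	for ordinal, inUse := range b.used {
		if inUse {
			continue
		}
		b.used[ordinal] = true // in the real datastore this is the "Writing block" step
		ip := net.IPv4(base[0], base[1], base[2], base[3]+byte(ordinal))
		return fmt.Sprintf("%s/32", ip), nil
	}
	return "", fmt.Errorf("block %s exhausted for handle %s", b.cidr, handle)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: cidr, used: make([]bool, 64)}

	// Mark the first four ordinals as taken so the next claim lands on .132,
	// matching the address handed to the coredns pod above.
	for i := 0; i < 4; i++ {
		b.used[i] = true
	}
	ip, _ := assign(b, "k8s-pod-network.aeb99ea7ceb7fef7")
	fmt.Println("claimed", ip) // claimed 192.168.88.132/32
}
```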
Sep 10 05:19:51.978682 systemd-networkd[1509]: calibc212e60463: Gained IPv6LL Sep 10 05:19:53.002656 systemd-networkd[1509]: cali9daa390c672: Gained IPv6LL Sep 10 05:19:54.356033 containerd[1583]: time="2025-09-10T05:19:54.355981774Z" level=info msg="connecting to shim 4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a" address="unix:///run/containerd/s/85ed95412c9f5f86d0521c2f4b3e60d2547c7bac57ae602b343d0b3e9d0571ad" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:19:54.393626 systemd[1]: Started cri-containerd-4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a.scope - libcontainer container 4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a. Sep 10 05:19:54.405337 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 05:19:55.234703 containerd[1583]: time="2025-09-10T05:19:55.234657080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8674dc7db6-njx5w,Uid:d4c0eba5-dbb0-4862-b63c-73f11b54cf28,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a\"" Sep 10 05:19:55.691615 containerd[1583]: time="2025-09-10T05:19:55.691572625Z" level=info msg="connecting to shim 94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65" address="unix:///run/containerd/s/b1efcc94f7b14c87d0e1a8d94c8194e7ce3c153b0d08fddb6e819f60b6020ba6" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:19:55.707686 containerd[1583]: time="2025-09-10T05:19:55.707612927Z" level=info msg="connecting to shim aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6" address="unix:///run/containerd/s/205dabe1294475db637c98c0f16d06bfc4ec03c2a5952d36e983105d486b00bc" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:19:55.737646 systemd[1]: Started cri-containerd-94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65.scope - libcontainer container 94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65. Sep 10 05:19:55.741051 systemd[1]: Started cri-containerd-aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6.scope - libcontainer container aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6. 
Sep 10 05:19:55.752033 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 05:19:55.760681 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 05:19:56.034567 containerd[1583]: time="2025-09-10T05:19:56.034524651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d74897c75-44fkv,Uid:f8364e86-7942-4db4-9de8-0f2f1f0e65df,Namespace:calico-system,Attempt:0,} returns sandbox id \"94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65\"" Sep 10 05:19:56.050106 containerd[1583]: time="2025-09-10T05:19:56.049770101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ntfnx,Uid:1e701f52-35b0-48d7-9e5d-b37cc64e28ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6\"" Sep 10 05:19:56.063243 kubelet[2726]: E0910 05:19:56.063207 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:56.064955 containerd[1583]: time="2025-09-10T05:19:56.064917087Z" level=info msg="CreateContainer within sandbox \"aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 05:19:56.824240 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:37854.service - OpenSSH per-connection server daemon (10.0.0.1:37854). Sep 10 05:19:56.862950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2487585280.mount: Deactivated successfully. Sep 10 05:19:56.877177 sshd[4675]: Accepted publickey for core from 10.0.0.1 port 37854 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:19:56.946089 sshd-session[4675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:19:56.947872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4165534862.mount: Deactivated successfully. Sep 10 05:19:56.952201 systemd-logind[1563]: New session 11 of user core. Sep 10 05:19:56.987091 containerd[1583]: time="2025-09-10T05:19:56.986805134Z" level=info msg="Container c06cba18aaeb4df8670219714510f0dc90a34894a692992275f1bc522643eb10: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:19:56.958601 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 05:19:57.134855 sshd[4680]: Connection closed by 10.0.0.1 port 37854 Sep 10 05:19:57.135143 sshd-session[4675]: pam_unix(sshd:session): session closed for user core Sep 10 05:19:57.138885 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:37854.service: Deactivated successfully. Sep 10 05:19:57.140876 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 05:19:57.142313 systemd-logind[1563]: Session 11 logged out. Waiting for processes to exit. Sep 10 05:19:57.143531 systemd-logind[1563]: Removed session 11. 
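The recurring kubelet "Nameserver limits exceeded" events come from kubelet capping the nameservers it writes into a pod's resolv.conf at three and omitting the rest, as the "applied nameserver line" in the message shows. A small Go sketch of that truncation rule; the four-server host list is illustrative, since the log does not say which upstream was dropped.

```go
package main

import "fmt"

// maxDNSNameservers mirrors the limit kubelet enforces when composing a
// pod's resolv.conf; servers beyond it are omitted and a warning is logged.
const maxDNSNameservers = 3

func applyNameserverLimit(servers []string) (kept, dropped []string) {
	if len(servers) <= maxDNSNameservers {
		return servers, nil
	}
	return servers[:maxDNSNameservers], servers[maxDNSNameservers:]
}

func main() {
	// Illustrative host resolv.conf with four upstreams; one has to go.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	kept, dropped := applyNameserverLimit(host)
	fmt.Println("applied nameserver line:", kept) // [1.1.1.1 1.0.0.1 8.8.8.8]
	fmt.Println("omitted:", dropped)              // [8.8.4.4]
}
```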
Sep 10 05:19:57.334998 containerd[1583]: time="2025-09-10T05:19:57.334926401Z" level=info msg="CreateContainer within sandbox \"aeb99ea7ceb7fef7a7335963375f36e6c294a30212fbe73e1286d57e5ce2f9c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c06cba18aaeb4df8670219714510f0dc90a34894a692992275f1bc522643eb10\"" Sep 10 05:19:57.336523 containerd[1583]: time="2025-09-10T05:19:57.335651952Z" level=info msg="StartContainer for \"c06cba18aaeb4df8670219714510f0dc90a34894a692992275f1bc522643eb10\"" Sep 10 05:19:57.337133 containerd[1583]: time="2025-09-10T05:19:57.337103775Z" level=info msg="connecting to shim c06cba18aaeb4df8670219714510f0dc90a34894a692992275f1bc522643eb10" address="unix:///run/containerd/s/205dabe1294475db637c98c0f16d06bfc4ec03c2a5952d36e983105d486b00bc" protocol=ttrpc version=3 Sep 10 05:19:57.360636 systemd[1]: Started cri-containerd-c06cba18aaeb4df8670219714510f0dc90a34894a692992275f1bc522643eb10.scope - libcontainer container c06cba18aaeb4df8670219714510f0dc90a34894a692992275f1bc522643eb10. Sep 10 05:19:57.634363 containerd[1583]: time="2025-09-10T05:19:57.634317562Z" level=info msg="StartContainer for \"c06cba18aaeb4df8670219714510f0dc90a34894a692992275f1bc522643eb10\" returns successfully" Sep 10 05:19:58.417158 kubelet[2726]: E0910 05:19:58.417125 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:58.598191 kubelet[2726]: I0910 05:19:58.597918 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ntfnx" podStartSLOduration=53.597900828 podStartE2EDuration="53.597900828s" podCreationTimestamp="2025-09-10 05:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 05:19:58.597580428 +0000 UTC m=+59.666388490" watchObservedRunningTime="2025-09-10 05:19:58.597900828 +0000 UTC m=+59.666708871" Sep 10 05:19:59.418648 kubelet[2726]: E0910 05:19:59.418605 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:19:59.581760 containerd[1583]: time="2025-09-10T05:19:59.581710192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:59.584285 containerd[1583]: time="2025-09-10T05:19:59.584254816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 10 05:19:59.585851 containerd[1583]: time="2025-09-10T05:19:59.585822117Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:59.591115 containerd[1583]: time="2025-09-10T05:19:59.591083546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:19:59.591830 containerd[1583]: time="2025-09-10T05:19:59.591777398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo 
digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 9.786202487s" Sep 10 05:19:59.591830 containerd[1583]: time="2025-09-10T05:19:59.591828273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 10 05:19:59.593321 containerd[1583]: time="2025-09-10T05:19:59.592917676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 05:19:59.594642 containerd[1583]: time="2025-09-10T05:19:59.594609620Z" level=info msg="CreateContainer within sandbox \"fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 05:19:59.607533 containerd[1583]: time="2025-09-10T05:19:59.607175924Z" level=info msg="Container 0e31832560ab72a057ec08f9a1d824caef35ca8f88dc8b794a47fc4a976b6d54: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:19:59.616952 containerd[1583]: time="2025-09-10T05:19:59.616915063Z" level=info msg="CreateContainer within sandbox \"fe04c7cc1e18249be3b25bd17f0cf82d2807ca7832b73a4589a3e5db9c7de0ec\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0e31832560ab72a057ec08f9a1d824caef35ca8f88dc8b794a47fc4a976b6d54\"" Sep 10 05:19:59.617551 containerd[1583]: time="2025-09-10T05:19:59.617509187Z" level=info msg="StartContainer for \"0e31832560ab72a057ec08f9a1d824caef35ca8f88dc8b794a47fc4a976b6d54\"" Sep 10 05:19:59.618739 containerd[1583]: time="2025-09-10T05:19:59.618712164Z" level=info msg="connecting to shim 0e31832560ab72a057ec08f9a1d824caef35ca8f88dc8b794a47fc4a976b6d54" address="unix:///run/containerd/s/4cc265ca7ceb1fd3196d4eaa5ce929008f8b4a5c88f3a6a169c84bb95efb0fdb" protocol=ttrpc version=3 Sep 10 05:19:59.644680 systemd[1]: Started cri-containerd-0e31832560ab72a057ec08f9a1d824caef35ca8f88dc8b794a47fc4a976b6d54.scope - libcontainer container 0e31832560ab72a057ec08f9a1d824caef35ca8f88dc8b794a47fc4a976b6d54. 
Sep 10 05:19:59.692849 containerd[1583]: time="2025-09-10T05:19:59.692726596Z" level=info msg="StartContainer for \"0e31832560ab72a057ec08f9a1d824caef35ca8f88dc8b794a47fc4a976b6d54\" returns successfully" Sep 10 05:20:00.050105 containerd[1583]: time="2025-09-10T05:20:00.050036592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-262kf,Uid:b113321a-1b4c-49e8-b708-91fe4d366899,Namespace:calico-system,Attempt:0,}" Sep 10 05:20:00.173721 systemd-networkd[1509]: cali211c5c19ee2: Link UP Sep 10 05:20:00.174254 systemd-networkd[1509]: cali211c5c19ee2: Gained carrier Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.095 [INFO][4790] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--262kf-eth0 goldmane-7988f88666- calico-system b113321a-1b4c-49e8-b708-91fe4d366899 828 0 2025-09-10 05:19:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-262kf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali211c5c19ee2 [] [] }} ContainerID="8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" Namespace="calico-system" Pod="goldmane-7988f88666-262kf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--262kf-" Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.096 [INFO][4790] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" Namespace="calico-system" Pod="goldmane-7988f88666-262kf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--262kf-eth0" Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.128 [INFO][4799] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" HandleID="k8s-pod-network.8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" Workload="localhost-k8s-goldmane--7988f88666--262kf-eth0" Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.130 [INFO][4799] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" HandleID="k8s-pod-network.8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" Workload="localhost-k8s-goldmane--7988f88666--262kf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139740), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-262kf", "timestamp":"2025-09-10 05:20:00.128912161 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.130 [INFO][4799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.130 [INFO][4799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.130 [INFO][4799] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.137 [INFO][4799] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" host="localhost" Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.142 [INFO][4799] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.149 [INFO][4799] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.151 [INFO][4799] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.153 [INFO][4799] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.153 [INFO][4799] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" host="localhost" Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.154 [INFO][4799] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.158 [INFO][4799] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" host="localhost" Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.165 [INFO][4799] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" host="localhost" Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.166 [INFO][4799] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" host="localhost" Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.166 [INFO][4799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 05:20:00.193842 containerd[1583]: 2025-09-10 05:20:00.166 [INFO][4799] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" HandleID="k8s-pod-network.8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" Workload="localhost-k8s-goldmane--7988f88666--262kf-eth0" Sep 10 05:20:00.194392 containerd[1583]: 2025-09-10 05:20:00.171 [INFO][4790] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" Namespace="calico-system" Pod="goldmane-7988f88666-262kf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--262kf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--262kf-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b113321a-1b4c-49e8-b708-91fe4d366899", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-262kf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali211c5c19ee2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:20:00.194392 containerd[1583]: 2025-09-10 05:20:00.171 [INFO][4790] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" Namespace="calico-system" Pod="goldmane-7988f88666-262kf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--262kf-eth0" Sep 10 05:20:00.194392 containerd[1583]: 2025-09-10 05:20:00.171 [INFO][4790] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali211c5c19ee2 ContainerID="8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" Namespace="calico-system" Pod="goldmane-7988f88666-262kf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--262kf-eth0" Sep 10 05:20:00.194392 containerd[1583]: 2025-09-10 05:20:00.174 [INFO][4790] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" Namespace="calico-system" Pod="goldmane-7988f88666-262kf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--262kf-eth0" Sep 10 05:20:00.194392 containerd[1583]: 2025-09-10 05:20:00.174 [INFO][4790] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" Namespace="calico-system" Pod="goldmane-7988f88666-262kf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--262kf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--262kf-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b113321a-1b4c-49e8-b708-91fe4d366899", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa", Pod:"goldmane-7988f88666-262kf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali211c5c19ee2", MAC:"26:75:3b:1e:b1:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:20:00.194392 containerd[1583]: 2025-09-10 05:20:00.182 [INFO][4790] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" Namespace="calico-system" Pod="goldmane-7988f88666-262kf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--262kf-eth0" Sep 10 05:20:00.228010 containerd[1583]: time="2025-09-10T05:20:00.227943317Z" level=info msg="connecting to shim 8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa" address="unix:///run/containerd/s/43b800b81f75552c33986e013f9d878b7d05bb293df6b55478857e0b38305871" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:20:00.261050 systemd[1]: Started cri-containerd-8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa.scope - libcontainer container 8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa. 
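The WorkloadEndpoint object names in these entries (localhost-k8s-goldmane--7988f88666--262kf-eth0 and friends) are built from the node name, the orchestrator tag k8s, the pod name with every "-" doubled so the separators stay unambiguous, and the endpoint name, joined by single dashes. A short Go sketch reproducing that pattern; it is inferred from the names in this log rather than taken from the Calico source.

```go
package main

import (
	"fmt"
	"strings"
)

// workloadEndpointName rebuilds the Calico v3 WorkloadEndpoint object name as it
// appears in these logs: dashes inside each component are escaped by doubling.
func workloadEndpointName(node, pod, endpoint string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return strings.Join([]string{esc(node), "k8s", esc(pod), esc(endpoint)}, "-")
}

func main() {
	fmt.Println(workloadEndpointName("localhost", "goldmane-7988f88666-262kf", "eth0"))
	// localhost-k8s-goldmane--7988f88666--262kf-eth0
	fmt.Println(workloadEndpointName("localhost", "coredns-7c65d6cfc9-ntfnx", "eth0"))
	// localhost-k8s-coredns--7c65d6cfc9--ntfnx-eth0
}
```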
Sep 10 05:20:00.288478 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 05:20:00.327831 containerd[1583]: time="2025-09-10T05:20:00.327696869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-262kf,Uid:b113321a-1b4c-49e8-b708-91fe4d366899,Namespace:calico-system,Attempt:0,} returns sandbox id \"8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa\"" Sep 10 05:20:00.424997 kubelet[2726]: E0910 05:20:00.424936 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:20:01.049902 containerd[1583]: time="2025-09-10T05:20:01.049840244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6vhs9,Uid:4e653afd-d957-40d1-839d-ca0bc8c42646,Namespace:calico-system,Attempt:0,}" Sep 10 05:20:01.224235 systemd-networkd[1509]: cali365e3317490: Link UP Sep 10 05:20:01.224755 systemd-networkd[1509]: cali365e3317490: Gained carrier Sep 10 05:20:01.236855 kubelet[2726]: I0910 05:20:01.236630 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b879dbc57-7t9v5" podStartSLOduration=37.448966078 podStartE2EDuration="47.236582918s" podCreationTimestamp="2025-09-10 05:19:14 +0000 UTC" firstStartedPulling="2025-09-10 05:19:49.805130477 +0000 UTC m=+50.873938519" lastFinishedPulling="2025-09-10 05:19:59.592747317 +0000 UTC m=+60.661555359" observedRunningTime="2025-09-10 05:20:00.480378495 +0000 UTC m=+61.549186557" watchObservedRunningTime="2025-09-10 05:20:01.236582918 +0000 UTC m=+62.305390960" Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.163 [INFO][4867] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6vhs9-eth0 csi-node-driver- calico-system 4e653afd-d957-40d1-839d-ca0bc8c42646 703 0 2025-09-10 05:19:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6vhs9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali365e3317490 [] [] }} ContainerID="1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" Namespace="calico-system" Pod="csi-node-driver-6vhs9" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vhs9-" Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.163 [INFO][4867] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" Namespace="calico-system" Pod="csi-node-driver-6vhs9" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vhs9-eth0" Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.187 [INFO][4881] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" HandleID="k8s-pod-network.1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" Workload="localhost-k8s-csi--node--driver--6vhs9-eth0" Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.187 [INFO][4881] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" HandleID="k8s-pod-network.1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" Workload="localhost-k8s-csi--node--driver--6vhs9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c71c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6vhs9", "timestamp":"2025-09-10 05:20:01.187541244 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.187 [INFO][4881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.187 [INFO][4881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.187 [INFO][4881] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.194 [INFO][4881] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" host="localhost" Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.198 [INFO][4881] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.202 [INFO][4881] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.203 [INFO][4881] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.205 [INFO][4881] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.205 [INFO][4881] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" host="localhost" Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.207 [INFO][4881] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.212 [INFO][4881] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" host="localhost" Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.218 [INFO][4881] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" host="localhost" Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.218 [INFO][4881] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" host="localhost" Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.218 [INFO][4881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 05:20:01.239904 containerd[1583]: 2025-09-10 05:20:01.218 [INFO][4881] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" HandleID="k8s-pod-network.1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" Workload="localhost-k8s-csi--node--driver--6vhs9-eth0" Sep 10 05:20:01.240594 containerd[1583]: 2025-09-10 05:20:01.221 [INFO][4867] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" Namespace="calico-system" Pod="csi-node-driver-6vhs9" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vhs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6vhs9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4e653afd-d957-40d1-839d-ca0bc8c42646", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6vhs9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali365e3317490", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:20:01.240594 containerd[1583]: 2025-09-10 05:20:01.222 [INFO][4867] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" Namespace="calico-system" Pod="csi-node-driver-6vhs9" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vhs9-eth0" Sep 10 05:20:01.240594 containerd[1583]: 2025-09-10 05:20:01.222 [INFO][4867] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali365e3317490 ContainerID="1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" Namespace="calico-system" Pod="csi-node-driver-6vhs9" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vhs9-eth0" Sep 10 05:20:01.240594 containerd[1583]: 2025-09-10 05:20:01.224 [INFO][4867] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" Namespace="calico-system" Pod="csi-node-driver-6vhs9" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vhs9-eth0" Sep 10 05:20:01.240594 containerd[1583]: 2025-09-10 05:20:01.225 [INFO][4867] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" Namespace="calico-system" Pod="csi-node-driver-6vhs9" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6vhs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6vhs9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4e653afd-d957-40d1-839d-ca0bc8c42646", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf", Pod:"csi-node-driver-6vhs9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali365e3317490", MAC:"fa:97:1f:1d:15:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:20:01.240594 containerd[1583]: 2025-09-10 05:20:01.236 [INFO][4867] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" Namespace="calico-system" Pod="csi-node-driver-6vhs9" WorkloadEndpoint="localhost-k8s-csi--node--driver--6vhs9-eth0" Sep 10 05:20:01.269018 containerd[1583]: time="2025-09-10T05:20:01.268915607Z" level=info msg="connecting to shim 1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf" address="unix:///run/containerd/s/676741d46b3b5a0558130cae4d8bbf845bad565ac46ef6db6f954e8efa256372" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:20:01.293671 systemd[1]: Started cri-containerd-1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf.scope - libcontainer container 1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf. 
Sep 10 05:20:01.306849 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 05:20:01.321716 containerd[1583]: time="2025-09-10T05:20:01.321664183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6vhs9,Uid:4e653afd-d957-40d1-839d-ca0bc8c42646,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf\"" Sep 10 05:20:01.428018 kubelet[2726]: I0910 05:20:01.427976 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 05:20:01.428812 kubelet[2726]: E0910 05:20:01.428772 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:20:01.642797 systemd-networkd[1509]: cali211c5c19ee2: Gained IPv6LL Sep 10 05:20:02.050258 kubelet[2726]: E0910 05:20:02.050230 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:20:02.050661 containerd[1583]: time="2025-09-10T05:20:02.050566124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b879dbc57-7zs56,Uid:04d2b8bc-38c8-44b4-929b-263f46e6af1a,Namespace:calico-apiserver,Attempt:0,}" Sep 10 05:20:02.050993 containerd[1583]: time="2025-09-10T05:20:02.050567186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g798j,Uid:059478a1-bd0b-4735-ac44-be87620e3fa4,Namespace:kube-system,Attempt:0,}" Sep 10 05:20:02.154792 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:42522.service - OpenSSH per-connection server daemon (10.0.0.1:42522). Sep 10 05:20:02.652932 sshd[4963]: Accepted publickey for core from 10.0.0.1 port 42522 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:20:02.654004 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:20:02.691982 containerd[1583]: time="2025-09-10T05:20:02.676149563Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9951f972000d66c62c32f1a6521145aadc3eabc97007a37c88e44866e7ddef92\" id:\"e5a8dd1f687436c875111787d29b39fabe7f4b2e25b3011da03ab34368fa3573\" pid:4956 exited_at:{seconds:1757481602 nanos:675733653}" Sep 10 05:20:02.658660 systemd-logind[1563]: New session 12 of user core. Sep 10 05:20:02.666610 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 10 05:20:02.978724 sshd[4973]: Connection closed by 10.0.0.1 port 42522 Sep 10 05:20:02.978823 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Sep 10 05:20:02.983502 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:42522.service: Deactivated successfully. Sep 10 05:20:02.985423 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 05:20:02.986227 systemd-logind[1563]: Session 12 logged out. Waiting for processes to exit. Sep 10 05:20:02.987597 systemd-logind[1563]: Removed session 12. 
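The kubelet pod_startup_latency_tracker entries report podStartE2EDuration as observedRunningTime minus podCreationTimestamp, and podStartSLOduration as the same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick Go check of that arithmetic against the calico-apiserver-5b879dbc57-7t9v5 entry above; the formula is inferred from the logged values, which it reproduces exactly.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the calico-apiserver-5b879dbc57-7t9v5 latency entry.
	created, _ := time.Parse(time.RFC3339, "2025-09-10T05:19:14Z")
	firstPull, _ := time.Parse(time.RFC3339Nano, "2025-09-10T05:19:49.805130477Z")
	lastPull, _ := time.Parse(time.RFC3339Nano, "2025-09-10T05:19:59.592747317Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-09-10T05:20:01.236582918Z")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: exclude image pulling

	fmt.Println(e2e) // 47.236582918s
	fmt.Println(slo) // 37.448966078s
}
```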
Sep 10 05:20:03.114726 systemd-networkd[1509]: cali365e3317490: Gained IPv6LL Sep 10 05:20:03.549507 systemd-networkd[1509]: calidc7ccb856dd: Link UP Sep 10 05:20:03.550123 systemd-networkd[1509]: calidc7ccb856dd: Gained carrier Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.415 [INFO][4996] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--g798j-eth0 coredns-7c65d6cfc9- kube-system 059478a1-bd0b-4735-ac44-be87620e3fa4 827 0 2025-09-10 05:19:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-g798j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidc7ccb856dd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g798j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g798j-" Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.415 [INFO][4996] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g798j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g798j-eth0" Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.447 [INFO][5029] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" HandleID="k8s-pod-network.ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" Workload="localhost-k8s-coredns--7c65d6cfc9--g798j-eth0" Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.448 [INFO][5029] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" HandleID="k8s-pod-network.ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" Workload="localhost-k8s-coredns--7c65d6cfc9--g798j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-g798j", "timestamp":"2025-09-10 05:20:03.447739626 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.448 [INFO][5029] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.448 [INFO][5029] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.448 [INFO][5029] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.472 [INFO][5029] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" host="localhost" Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.477 [INFO][5029] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.480 [INFO][5029] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.482 [INFO][5029] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.484 [INFO][5029] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.484 [INFO][5029] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" host="localhost" Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.485 [INFO][5029] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1 Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.490 [INFO][5029] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" host="localhost" Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.538 [INFO][5029] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" host="localhost" Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.538 [INFO][5029] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" host="localhost" Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.539 [INFO][5029] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 05:20:03.600297 containerd[1583]: 2025-09-10 05:20:03.539 [INFO][5029] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" HandleID="k8s-pod-network.ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" Workload="localhost-k8s-coredns--7c65d6cfc9--g798j-eth0" Sep 10 05:20:03.601425 containerd[1583]: 2025-09-10 05:20:03.546 [INFO][4996] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g798j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g798j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--g798j-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"059478a1-bd0b-4735-ac44-be87620e3fa4", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-g798j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc7ccb856dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:20:03.601425 containerd[1583]: 2025-09-10 05:20:03.547 [INFO][4996] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g798j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g798j-eth0" Sep 10 05:20:03.601425 containerd[1583]: 2025-09-10 05:20:03.547 [INFO][4996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc7ccb856dd ContainerID="ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g798j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g798j-eth0" Sep 10 05:20:03.601425 containerd[1583]: 2025-09-10 05:20:03.551 [INFO][4996] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g798j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g798j-eth0" Sep 10 05:20:03.601425 
containerd[1583]: 2025-09-10 05:20:03.551 [INFO][4996] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g798j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g798j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--g798j-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"059478a1-bd0b-4735-ac44-be87620e3fa4", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1", Pod:"coredns-7c65d6cfc9-g798j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc7ccb856dd", MAC:"fa:17:2c:39:d5:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:20:03.601425 containerd[1583]: 2025-09-10 05:20:03.596 [INFO][4996] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g798j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--g798j-eth0" Sep 10 05:20:03.647761 systemd-networkd[1509]: cali5653509eb60: Link UP Sep 10 05:20:03.648108 systemd-networkd[1509]: cali5653509eb60: Gained carrier Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.421 [INFO][5006] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b879dbc57--7zs56-eth0 calico-apiserver-5b879dbc57- calico-apiserver 04d2b8bc-38c8-44b4-929b-263f46e6af1a 829 0 2025-09-10 05:19:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b879dbc57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b879dbc57-7zs56 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5653509eb60 [] [] }} ContainerID="307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" 
Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7zs56" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7zs56-" Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.422 [INFO][5006] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7zs56" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7zs56-eth0" Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.453 [INFO][5036] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" HandleID="k8s-pod-network.307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" Workload="localhost-k8s-calico--apiserver--5b879dbc57--7zs56-eth0" Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.453 [INFO][5036] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" HandleID="k8s-pod-network.307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" Workload="localhost-k8s-calico--apiserver--5b879dbc57--7zs56-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b879dbc57-7zs56", "timestamp":"2025-09-10 05:20:03.453084452 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.453 [INFO][5036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.539 [INFO][5036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.539 [INFO][5036] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.592 [INFO][5036] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" host="localhost" Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.599 [INFO][5036] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.603 [INFO][5036] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.606 [INFO][5036] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.610 [INFO][5036] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.610 [INFO][5036] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" host="localhost" Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.612 [INFO][5036] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.631 [INFO][5036] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" host="localhost" Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.640 [INFO][5036] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" host="localhost" Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.640 [INFO][5036] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" host="localhost" Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.640 [INFO][5036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
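The IPAM entries above show the Calico plugin confirming an affinity for block 192.168.88.128/26 on host "localhost" and then claiming 192.168.88.136 for the calico-apiserver pod, next to the 192.168.88.135 address already handed to coredns. As an illustrative aside (a minimal Go sketch, not Calico's own IPAM code), the containment and block-size facts those entries rely on can be checked directly:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The affine block reported by ipam.go for this host.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// Addresses handed out to the two pods in the entries above.
	coredns := netip.MustParseAddr("192.168.88.135")
	apiserver := netip.MustParseAddr("192.168.88.136")

	// A /26 block spans 2^(32-26) = 64 addresses (.128 through .191).
	fmt.Println("block size:", 1<<(32-block.Bits()))
	fmt.Println("contains coredns IP:   ", block.Contains(coredns))
	fmt.Println("contains apiserver IP: ", block.Contains(apiserver))
}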
Sep 10 05:20:03.664623 containerd[1583]: 2025-09-10 05:20:03.640 [INFO][5036] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" HandleID="k8s-pod-network.307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" Workload="localhost-k8s-calico--apiserver--5b879dbc57--7zs56-eth0" Sep 10 05:20:03.665154 containerd[1583]: 2025-09-10 05:20:03.644 [INFO][5006] cni-plugin/k8s.go 418: Populated endpoint ContainerID="307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7zs56" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7zs56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b879dbc57--7zs56-eth0", GenerateName:"calico-apiserver-5b879dbc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"04d2b8bc-38c8-44b4-929b-263f46e6af1a", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b879dbc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b879dbc57-7zs56", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5653509eb60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:20:03.665154 containerd[1583]: 2025-09-10 05:20:03.644 [INFO][5006] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7zs56" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7zs56-eth0" Sep 10 05:20:03.665154 containerd[1583]: 2025-09-10 05:20:03.644 [INFO][5006] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5653509eb60 ContainerID="307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7zs56" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7zs56-eth0" Sep 10 05:20:03.665154 containerd[1583]: 2025-09-10 05:20:03.647 [INFO][5006] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7zs56" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7zs56-eth0" Sep 10 05:20:03.665154 containerd[1583]: 2025-09-10 05:20:03.648 [INFO][5006] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7zs56" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7zs56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b879dbc57--7zs56-eth0", GenerateName:"calico-apiserver-5b879dbc57-", Namespace:"calico-apiserver", SelfLink:"", UID:"04d2b8bc-38c8-44b4-929b-263f46e6af1a", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 5, 19, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b879dbc57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc", Pod:"calico-apiserver-5b879dbc57-7zs56", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5653509eb60", MAC:"de:81:3e:30:ab:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 05:20:03.665154 containerd[1583]: 2025-09-10 05:20:03.658 [INFO][5006] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" Namespace="calico-apiserver" Pod="calico-apiserver-5b879dbc57-7zs56" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b879dbc57--7zs56-eth0" Sep 10 05:20:03.672519 containerd[1583]: time="2025-09-10T05:20:03.672463462Z" level=info msg="connecting to shim ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1" address="unix:///run/containerd/s/18bc4c32da8d0babca81dd2ff81ef3fb33ede0dfeaa28b315fb1ad4578d7aaa1" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:20:03.701248 systemd[1]: Started cri-containerd-ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1.scope - libcontainer container ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1. Sep 10 05:20:03.709951 containerd[1583]: time="2025-09-10T05:20:03.709894544Z" level=info msg="connecting to shim 307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc" address="unix:///run/containerd/s/c97b8419ab47024804f151b6957b7ace6c66391d96c13287ccd511f29e9aed58" namespace=k8s.io protocol=ttrpc version=3 Sep 10 05:20:03.733618 systemd[1]: Started cri-containerd-307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc.scope - libcontainer container 307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc. 
Sep 10 05:20:03.737972 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 05:20:03.752125 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 05:20:03.785540 containerd[1583]: time="2025-09-10T05:20:03.785501344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g798j,Uid:059478a1-bd0b-4735-ac44-be87620e3fa4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1\"" Sep 10 05:20:03.786790 kubelet[2726]: E0910 05:20:03.786687 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:20:03.809531 containerd[1583]: time="2025-09-10T05:20:03.809347494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b879dbc57-7zs56,Uid:04d2b8bc-38c8-44b4-929b-263f46e6af1a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc\"" Sep 10 05:20:03.810084 containerd[1583]: time="2025-09-10T05:20:03.810000258Z" level=info msg="CreateContainer within sandbox \"ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 05:20:03.813693 containerd[1583]: time="2025-09-10T05:20:03.813606252Z" level=info msg="CreateContainer within sandbox \"307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 05:20:03.823230 containerd[1583]: time="2025-09-10T05:20:03.823192975Z" level=info msg="Container 577608635109f81be8a15a5e50f2edace3d7362cc6810d3d9baeb239c4e6a51a: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:20:03.830816 containerd[1583]: time="2025-09-10T05:20:03.830776349Z" level=info msg="Container c12266636fb23701b2a7077faff91eece52c6374fc8a160e203f21358f77c2ec: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:20:03.833701 containerd[1583]: time="2025-09-10T05:20:03.833672872Z" level=info msg="CreateContainer within sandbox \"ab7ac1af8e301cb0d6a6cb33eea1bcfa65efd16886bd6cbb8a217a3032c8a9e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"577608635109f81be8a15a5e50f2edace3d7362cc6810d3d9baeb239c4e6a51a\"" Sep 10 05:20:03.837827 containerd[1583]: time="2025-09-10T05:20:03.837772452Z" level=info msg="StartContainer for \"577608635109f81be8a15a5e50f2edace3d7362cc6810d3d9baeb239c4e6a51a\"" Sep 10 05:20:03.838825 containerd[1583]: time="2025-09-10T05:20:03.838788708Z" level=info msg="CreateContainer within sandbox \"307d17e016daa8a60a7d887bb779c61a46d291b33fb44e8971a500ed555376dc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c12266636fb23701b2a7077faff91eece52c6374fc8a160e203f21358f77c2ec\"" Sep 10 05:20:03.839906 containerd[1583]: time="2025-09-10T05:20:03.839806257Z" level=info msg="StartContainer for \"c12266636fb23701b2a7077faff91eece52c6374fc8a160e203f21358f77c2ec\"" Sep 10 05:20:03.840382 containerd[1583]: time="2025-09-10T05:20:03.840359765Z" level=info msg="connecting to shim 577608635109f81be8a15a5e50f2edace3d7362cc6810d3d9baeb239c4e6a51a" address="unix:///run/containerd/s/18bc4c32da8d0babca81dd2ff81ef3fb33ede0dfeaa28b315fb1ad4578d7aaa1" protocol=ttrpc version=3 Sep 10 05:20:03.841265 containerd[1583]: 
time="2025-09-10T05:20:03.841215681Z" level=info msg="connecting to shim c12266636fb23701b2a7077faff91eece52c6374fc8a160e203f21358f77c2ec" address="unix:///run/containerd/s/c97b8419ab47024804f151b6957b7ace6c66391d96c13287ccd511f29e9aed58" protocol=ttrpc version=3 Sep 10 05:20:03.864625 systemd[1]: Started cri-containerd-577608635109f81be8a15a5e50f2edace3d7362cc6810d3d9baeb239c4e6a51a.scope - libcontainer container 577608635109f81be8a15a5e50f2edace3d7362cc6810d3d9baeb239c4e6a51a. Sep 10 05:20:03.872591 systemd[1]: Started cri-containerd-c12266636fb23701b2a7077faff91eece52c6374fc8a160e203f21358f77c2ec.scope - libcontainer container c12266636fb23701b2a7077faff91eece52c6374fc8a160e203f21358f77c2ec. Sep 10 05:20:03.912879 containerd[1583]: time="2025-09-10T05:20:03.912663719Z" level=info msg="StartContainer for \"577608635109f81be8a15a5e50f2edace3d7362cc6810d3d9baeb239c4e6a51a\" returns successfully" Sep 10 05:20:03.937432 containerd[1583]: time="2025-09-10T05:20:03.935753611Z" level=info msg="StartContainer for \"c12266636fb23701b2a7077faff91eece52c6374fc8a160e203f21358f77c2ec\" returns successfully" Sep 10 05:20:04.195829 containerd[1583]: time="2025-09-10T05:20:04.195688400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:04.196673 containerd[1583]: time="2025-09-10T05:20:04.196623494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 10 05:20:04.198452 containerd[1583]: time="2025-09-10T05:20:04.198420074Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:04.201274 containerd[1583]: time="2025-09-10T05:20:04.201240645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:04.202058 containerd[1583]: time="2025-09-10T05:20:04.202019476Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.609057767s" Sep 10 05:20:04.202138 containerd[1583]: time="2025-09-10T05:20:04.202062095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 10 05:20:04.205030 containerd[1583]: time="2025-09-10T05:20:04.204958198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 05:20:04.211623 containerd[1583]: time="2025-09-10T05:20:04.211273533Z" level=info msg="CreateContainer within sandbox \"4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 05:20:04.221419 containerd[1583]: time="2025-09-10T05:20:04.221388045Z" level=info msg="Container 60b87c8755f89f1e6810f23194ddb3ec48ad1fd6a61ec238cea3eca94bacfb54: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:20:04.231950 containerd[1583]: time="2025-09-10T05:20:04.231899342Z" level=info 
msg="CreateContainer within sandbox \"4ab9eca9c4b459a41c81dacbe643fb9e1e847d5cec4606b134a4d09d437d1b1a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"60b87c8755f89f1e6810f23194ddb3ec48ad1fd6a61ec238cea3eca94bacfb54\"" Sep 10 05:20:04.232761 containerd[1583]: time="2025-09-10T05:20:04.232717597Z" level=info msg="StartContainer for \"60b87c8755f89f1e6810f23194ddb3ec48ad1fd6a61ec238cea3eca94bacfb54\"" Sep 10 05:20:04.233982 containerd[1583]: time="2025-09-10T05:20:04.233957272Z" level=info msg="connecting to shim 60b87c8755f89f1e6810f23194ddb3ec48ad1fd6a61ec238cea3eca94bacfb54" address="unix:///run/containerd/s/85ed95412c9f5f86d0521c2f4b3e60d2547c7bac57ae602b343d0b3e9d0571ad" protocol=ttrpc version=3 Sep 10 05:20:04.271678 systemd[1]: Started cri-containerd-60b87c8755f89f1e6810f23194ddb3ec48ad1fd6a61ec238cea3eca94bacfb54.scope - libcontainer container 60b87c8755f89f1e6810f23194ddb3ec48ad1fd6a61ec238cea3eca94bacfb54. Sep 10 05:20:04.325859 containerd[1583]: time="2025-09-10T05:20:04.325807258Z" level=info msg="StartContainer for \"60b87c8755f89f1e6810f23194ddb3ec48ad1fd6a61ec238cea3eca94bacfb54\" returns successfully" Sep 10 05:20:04.442677 kubelet[2726]: E0910 05:20:04.442537 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:20:04.493963 containerd[1583]: time="2025-09-10T05:20:04.493830174Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60b87c8755f89f1e6810f23194ddb3ec48ad1fd6a61ec238cea3eca94bacfb54\" id:\"34e8f2a7999a1608c9e1cfa965d4bc16060df853a4a584257883b39672e23577\" pid:5290 exit_status:1 exited_at:{seconds:1757481604 nanos:484014984}" Sep 10 05:20:04.528084 containerd[1583]: time="2025-09-10T05:20:04.528037638Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60b87c8755f89f1e6810f23194ddb3ec48ad1fd6a61ec238cea3eca94bacfb54\" id:\"f498829d6d7f89bdf70651aa0a37ac822c30dbc61ea3678fc568eaa4bbc1cfd0\" pid:5313 exit_status:1 exited_at:{seconds:1757481604 nanos:527823777}" Sep 10 05:20:04.725455 kubelet[2726]: I0910 05:20:04.725387 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b879dbc57-7zs56" podStartSLOduration=50.725368588 podStartE2EDuration="50.725368588s" podCreationTimestamp="2025-09-10 05:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 05:20:04.724400923 +0000 UTC m=+65.793208965" watchObservedRunningTime="2025-09-10 05:20:04.725368588 +0000 UTC m=+65.794176630" Sep 10 05:20:04.752959 kubelet[2726]: I0910 05:20:04.752708 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8674dc7db6-njx5w" podStartSLOduration=38.784310643 podStartE2EDuration="47.752684485s" podCreationTimestamp="2025-09-10 05:19:17 +0000 UTC" firstStartedPulling="2025-09-10 05:19:55.235899 +0000 UTC m=+56.304707042" lastFinishedPulling="2025-09-10 05:20:04.204272842 +0000 UTC m=+65.273080884" observedRunningTime="2025-09-10 05:20:04.737772975 +0000 UTC m=+65.806581017" watchObservedRunningTime="2025-09-10 05:20:04.752684485 +0000 UTC m=+65.821492527" Sep 10 05:20:04.753968 kubelet[2726]: I0910 05:20:04.753774 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-g798j" podStartSLOduration=59.753765303 
podStartE2EDuration="59.753765303s" podCreationTimestamp="2025-09-10 05:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 05:20:04.751970195 +0000 UTC m=+65.820778247" watchObservedRunningTime="2025-09-10 05:20:04.753765303 +0000 UTC m=+65.822573345" Sep 10 05:20:05.290640 systemd-networkd[1509]: calidc7ccb856dd: Gained IPv6LL Sep 10 05:20:05.354734 systemd-networkd[1509]: cali5653509eb60: Gained IPv6LL Sep 10 05:20:05.444302 kubelet[2726]: I0910 05:20:05.444269 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 05:20:05.444863 kubelet[2726]: E0910 05:20:05.444654 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:20:05.486650 containerd[1583]: time="2025-09-10T05:20:05.486598359Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60b87c8755f89f1e6810f23194ddb3ec48ad1fd6a61ec238cea3eca94bacfb54\" id:\"6f97f44a40f2c3822a7a91b2e5d35b24c68b346b3e27f5fa7173b8cba2478b37\" pid:5345 exited_at:{seconds:1757481605 nanos:486283488}" Sep 10 05:20:06.446678 kubelet[2726]: E0910 05:20:06.446583 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:20:07.265374 containerd[1583]: time="2025-09-10T05:20:07.265305272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:07.266594 containerd[1583]: time="2025-09-10T05:20:07.266521881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 10 05:20:07.267830 containerd[1583]: time="2025-09-10T05:20:07.267797125Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:07.271072 containerd[1583]: time="2025-09-10T05:20:07.271044443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:07.271605 containerd[1583]: time="2025-09-10T05:20:07.271579251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 3.066548787s" Sep 10 05:20:07.271651 containerd[1583]: time="2025-09-10T05:20:07.271604429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 10 05:20:07.272557 containerd[1583]: time="2025-09-10T05:20:07.272408329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 05:20:07.273946 containerd[1583]: time="2025-09-10T05:20:07.273901996Z" level=info msg="CreateContainer within sandbox \"94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 05:20:07.284525 
containerd[1583]: time="2025-09-10T05:20:07.283464814Z" level=info msg="Container 08bae89ee118300a4637158ed5c532c8b1b4d583d54750c59de0479f66f01fcb: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:20:07.292071 containerd[1583]: time="2025-09-10T05:20:07.292014457Z" level=info msg="CreateContainer within sandbox \"94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"08bae89ee118300a4637158ed5c532c8b1b4d583d54750c59de0479f66f01fcb\"" Sep 10 05:20:07.292630 containerd[1583]: time="2025-09-10T05:20:07.292580465Z" level=info msg="StartContainer for \"08bae89ee118300a4637158ed5c532c8b1b4d583d54750c59de0479f66f01fcb\"" Sep 10 05:20:07.294101 containerd[1583]: time="2025-09-10T05:20:07.294050977Z" level=info msg="connecting to shim 08bae89ee118300a4637158ed5c532c8b1b4d583d54750c59de0479f66f01fcb" address="unix:///run/containerd/s/b1efcc94f7b14c87d0e1a8d94c8194e7ce3c153b0d08fddb6e819f60b6020ba6" protocol=ttrpc version=3 Sep 10 05:20:07.318658 systemd[1]: Started cri-containerd-08bae89ee118300a4637158ed5c532c8b1b4d583d54750c59de0479f66f01fcb.scope - libcontainer container 08bae89ee118300a4637158ed5c532c8b1b4d583d54750c59de0479f66f01fcb. Sep 10 05:20:07.372644 containerd[1583]: time="2025-09-10T05:20:07.372591005Z" level=info msg="StartContainer for \"08bae89ee118300a4637158ed5c532c8b1b4d583d54750c59de0479f66f01fcb\" returns successfully" Sep 10 05:20:07.967196 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:42526.service - OpenSSH per-connection server daemon (10.0.0.1:42526). Sep 10 05:20:08.092436 sshd[5398]: Accepted publickey for core from 10.0.0.1 port 42526 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:20:08.094720 sshd-session[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:20:08.099130 systemd-logind[1563]: New session 13 of user core. Sep 10 05:20:08.108620 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 10 05:20:08.250843 sshd[5402]: Connection closed by 10.0.0.1 port 42526 Sep 10 05:20:08.251245 sshd-session[5398]: pam_unix(sshd:session): session closed for user core Sep 10 05:20:08.261210 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:42526.service: Deactivated successfully. Sep 10 05:20:08.263324 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 05:20:08.264285 systemd-logind[1563]: Session 13 logged out. Waiting for processes to exit. Sep 10 05:20:08.266904 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:42536.service - OpenSSH per-connection server daemon (10.0.0.1:42536). Sep 10 05:20:08.268469 systemd-logind[1563]: Removed session 13. Sep 10 05:20:08.318740 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 42536 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:20:08.320156 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:20:08.324065 systemd-logind[1563]: New session 14 of user core. Sep 10 05:20:08.334608 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 10 05:20:08.579510 sshd[5419]: Connection closed by 10.0.0.1 port 42536 Sep 10 05:20:08.579829 sshd-session[5416]: pam_unix(sshd:session): session closed for user core Sep 10 05:20:08.593185 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:42536.service: Deactivated successfully. Sep 10 05:20:08.594952 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 05:20:08.595781 systemd-logind[1563]: Session 14 logged out. 
Waiting for processes to exit. Sep 10 05:20:08.598047 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:42550.service - OpenSSH per-connection server daemon (10.0.0.1:42550). Sep 10 05:20:08.598907 systemd-logind[1563]: Removed session 14. Sep 10 05:20:08.646384 sshd[5430]: Accepted publickey for core from 10.0.0.1 port 42550 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:20:08.647934 sshd-session[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:20:08.652197 systemd-logind[1563]: New session 15 of user core. Sep 10 05:20:08.669611 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 10 05:20:08.897519 sshd[5433]: Connection closed by 10.0.0.1 port 42550 Sep 10 05:20:08.897800 sshd-session[5430]: pam_unix(sshd:session): session closed for user core Sep 10 05:20:08.901942 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:42550.service: Deactivated successfully. Sep 10 05:20:08.903964 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 05:20:08.904876 systemd-logind[1563]: Session 15 logged out. Waiting for processes to exit. Sep 10 05:20:08.906243 systemd-logind[1563]: Removed session 15. Sep 10 05:20:11.487348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622084090.mount: Deactivated successfully. Sep 10 05:20:12.058516 kubelet[2726]: E0910 05:20:12.058430 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:20:12.933429 kubelet[2726]: I0910 05:20:12.933374 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 05:20:13.260014 kubelet[2726]: I0910 05:20:13.259957 2726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 05:20:13.496291 containerd[1583]: time="2025-09-10T05:20:13.496229784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:13.513649 containerd[1583]: time="2025-09-10T05:20:13.513551870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 10 05:20:13.537632 containerd[1583]: time="2025-09-10T05:20:13.536904013Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:13.565536 containerd[1583]: time="2025-09-10T05:20:13.565457904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:13.566440 containerd[1583]: time="2025-09-10T05:20:13.566408217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 6.293976223s" Sep 10 05:20:13.566440 containerd[1583]: time="2025-09-10T05:20:13.566437123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 10 05:20:13.567358 containerd[1583]: 
time="2025-09-10T05:20:13.567327893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 10 05:20:13.569048 containerd[1583]: time="2025-09-10T05:20:13.569013455Z" level=info msg="CreateContainer within sandbox \"8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 10 05:20:13.653544 containerd[1583]: time="2025-09-10T05:20:13.650497415Z" level=info msg="Container f1955ba91c961c594fe0ac15bc0a370f4e2f6cf8e7a5d7ca0286350329e80f90: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:20:13.657097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3516376009.mount: Deactivated successfully. Sep 10 05:20:13.664802 containerd[1583]: time="2025-09-10T05:20:13.664757814Z" level=info msg="CreateContainer within sandbox \"8bdccab209bc4bd5e8f6fd1a42af81cbcbd522c71bad3d76bed76b06eca58efa\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f1955ba91c961c594fe0ac15bc0a370f4e2f6cf8e7a5d7ca0286350329e80f90\"" Sep 10 05:20:13.665533 containerd[1583]: time="2025-09-10T05:20:13.665466491Z" level=info msg="StartContainer for \"f1955ba91c961c594fe0ac15bc0a370f4e2f6cf8e7a5d7ca0286350329e80f90\"" Sep 10 05:20:13.666794 containerd[1583]: time="2025-09-10T05:20:13.666763684Z" level=info msg="connecting to shim f1955ba91c961c594fe0ac15bc0a370f4e2f6cf8e7a5d7ca0286350329e80f90" address="unix:///run/containerd/s/43b800b81f75552c33986e013f9d878b7d05bb293df6b55478857e0b38305871" protocol=ttrpc version=3 Sep 10 05:20:13.726709 systemd[1]: Started cri-containerd-f1955ba91c961c594fe0ac15bc0a370f4e2f6cf8e7a5d7ca0286350329e80f90.scope - libcontainer container f1955ba91c961c594fe0ac15bc0a370f4e2f6cf8e7a5d7ca0286350329e80f90. Sep 10 05:20:13.910045 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:57752.service - OpenSSH per-connection server daemon (10.0.0.1:57752). Sep 10 05:20:13.914611 containerd[1583]: time="2025-09-10T05:20:13.914573425Z" level=info msg="StartContainer for \"f1955ba91c961c594fe0ac15bc0a370f4e2f6cf8e7a5d7ca0286350329e80f90\" returns successfully" Sep 10 05:20:13.991465 sshd[5513]: Accepted publickey for core from 10.0.0.1 port 57752 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:20:13.993030 sshd-session[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:20:13.997647 systemd-logind[1563]: New session 16 of user core. Sep 10 05:20:14.007616 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 10 05:20:14.143477 sshd[5520]: Connection closed by 10.0.0.1 port 57752 Sep 10 05:20:14.143832 sshd-session[5513]: pam_unix(sshd:session): session closed for user core Sep 10 05:20:14.147034 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:57752.service: Deactivated successfully. Sep 10 05:20:14.149022 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 05:20:14.151008 systemd-logind[1563]: Session 16 logged out. Waiting for processes to exit. Sep 10 05:20:14.151974 systemd-logind[1563]: Removed session 16. 
Sep 10 05:20:14.496197 kubelet[2726]: I0910 05:20:14.496121 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-262kf" podStartSLOduration=45.262043879 podStartE2EDuration="58.496093446s" podCreationTimestamp="2025-09-10 05:19:16 +0000 UTC" firstStartedPulling="2025-09-10 05:20:00.333158935 +0000 UTC m=+61.401966977" lastFinishedPulling="2025-09-10 05:20:13.567208512 +0000 UTC m=+74.636016544" observedRunningTime="2025-09-10 05:20:14.494988396 +0000 UTC m=+75.563796438" watchObservedRunningTime="2025-09-10 05:20:14.496093446 +0000 UTC m=+75.564901488" Sep 10 05:20:14.564043 containerd[1583]: time="2025-09-10T05:20:14.563985729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1955ba91c961c594fe0ac15bc0a370f4e2f6cf8e7a5d7ca0286350329e80f90\" id:\"087a11aba1eb220a357fd43e8cfa3749a721b59822ca004defd8419631a6e435\" pid:5546 exit_status:1 exited_at:{seconds:1757481614 nanos:563579255}" Sep 10 05:20:15.566977 containerd[1583]: time="2025-09-10T05:20:15.566927894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1955ba91c961c594fe0ac15bc0a370f4e2f6cf8e7a5d7ca0286350329e80f90\" id:\"0b6171b6c3f65761360ba822db374bf3c264d47959b3889506be821367c59475\" pid:5571 exit_status:1 exited_at:{seconds:1757481615 nanos:566569243}" Sep 10 05:20:15.837949 containerd[1583]: time="2025-09-10T05:20:15.837790503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:15.839311 containerd[1583]: time="2025-09-10T05:20:15.839274532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 10 05:20:15.840898 containerd[1583]: time="2025-09-10T05:20:15.840837604Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:15.844524 containerd[1583]: time="2025-09-10T05:20:15.843800282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:15.844684 containerd[1583]: time="2025-09-10T05:20:15.844642015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.277286789s" Sep 10 05:20:15.844684 containerd[1583]: time="2025-09-10T05:20:15.844680338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 10 05:20:15.846012 containerd[1583]: time="2025-09-10T05:20:15.845739148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 05:20:15.848613 containerd[1583]: time="2025-09-10T05:20:15.848578730Z" level=info msg="CreateContainer within sandbox \"1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 10 05:20:15.859819 containerd[1583]: time="2025-09-10T05:20:15.859705529Z" level=info msg="Container 1f7b3e2e0b4ae2291ac337576ad282919f19d9de118026234f607921bd209240: CDI devices 
from CRI Config.CDIDevices: []" Sep 10 05:20:15.892641 containerd[1583]: time="2025-09-10T05:20:15.892591303Z" level=info msg="CreateContainer within sandbox \"1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1f7b3e2e0b4ae2291ac337576ad282919f19d9de118026234f607921bd209240\"" Sep 10 05:20:15.893886 containerd[1583]: time="2025-09-10T05:20:15.893823498Z" level=info msg="StartContainer for \"1f7b3e2e0b4ae2291ac337576ad282919f19d9de118026234f607921bd209240\"" Sep 10 05:20:15.895380 containerd[1583]: time="2025-09-10T05:20:15.895355920Z" level=info msg="connecting to shim 1f7b3e2e0b4ae2291ac337576ad282919f19d9de118026234f607921bd209240" address="unix:///run/containerd/s/676741d46b3b5a0558130cae4d8bbf845bad565ac46ef6db6f954e8efa256372" protocol=ttrpc version=3 Sep 10 05:20:15.922646 systemd[1]: Started cri-containerd-1f7b3e2e0b4ae2291ac337576ad282919f19d9de118026234f607921bd209240.scope - libcontainer container 1f7b3e2e0b4ae2291ac337576ad282919f19d9de118026234f607921bd209240. Sep 10 05:20:15.970400 containerd[1583]: time="2025-09-10T05:20:15.970361902Z" level=info msg="StartContainer for \"1f7b3e2e0b4ae2291ac337576ad282919f19d9de118026234f607921bd209240\" returns successfully" Sep 10 05:20:18.050470 kubelet[2726]: E0910 05:20:18.050428 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:20:18.709466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4071283710.mount: Deactivated successfully. Sep 10 05:20:18.741163 containerd[1583]: time="2025-09-10T05:20:18.741080213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:18.742103 containerd[1583]: time="2025-09-10T05:20:18.742055078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 10 05:20:18.743107 containerd[1583]: time="2025-09-10T05:20:18.743056884Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:18.746007 containerd[1583]: time="2025-09-10T05:20:18.745948894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:18.746545 containerd[1583]: time="2025-09-10T05:20:18.746518399Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.900746428s" Sep 10 05:20:18.746604 containerd[1583]: time="2025-09-10T05:20:18.746550682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 10 05:20:18.748778 containerd[1583]: time="2025-09-10T05:20:18.747856873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 10 05:20:18.749545 
containerd[1583]: time="2025-09-10T05:20:18.749321529Z" level=info msg="CreateContainer within sandbox \"94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 05:20:18.761132 containerd[1583]: time="2025-09-10T05:20:18.759062549Z" level=info msg="Container 1a275e58c0ad2547452031e86e87a4c851b62913ab91b92cbea2d29ce23b5361: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:20:18.773637 containerd[1583]: time="2025-09-10T05:20:18.773530447Z" level=info msg="CreateContainer within sandbox \"94673bc9e4a3e72e0e3b1cebda215a72f02de21fa27ff1a669f0444e2a3e6d65\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1a275e58c0ad2547452031e86e87a4c851b62913ab91b92cbea2d29ce23b5361\"" Sep 10 05:20:18.776390 containerd[1583]: time="2025-09-10T05:20:18.776110879Z" level=info msg="StartContainer for \"1a275e58c0ad2547452031e86e87a4c851b62913ab91b92cbea2d29ce23b5361\"" Sep 10 05:20:18.779611 containerd[1583]: time="2025-09-10T05:20:18.779241780Z" level=info msg="connecting to shim 1a275e58c0ad2547452031e86e87a4c851b62913ab91b92cbea2d29ce23b5361" address="unix:///run/containerd/s/b1efcc94f7b14c87d0e1a8d94c8194e7ce3c153b0d08fddb6e819f60b6020ba6" protocol=ttrpc version=3 Sep 10 05:20:18.806757 systemd[1]: Started cri-containerd-1a275e58c0ad2547452031e86e87a4c851b62913ab91b92cbea2d29ce23b5361.scope - libcontainer container 1a275e58c0ad2547452031e86e87a4c851b62913ab91b92cbea2d29ce23b5361. Sep 10 05:20:18.864854 containerd[1583]: time="2025-09-10T05:20:18.864795480Z" level=info msg="StartContainer for \"1a275e58c0ad2547452031e86e87a4c851b62913ab91b92cbea2d29ce23b5361\" returns successfully" Sep 10 05:20:19.156429 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:57766.service - OpenSSH per-connection server daemon (10.0.0.1:57766). Sep 10 05:20:19.275077 sshd[5659]: Accepted publickey for core from 10.0.0.1 port 57766 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:20:19.277279 sshd-session[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:20:19.282632 systemd-logind[1563]: New session 17 of user core. Sep 10 05:20:19.291694 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 10 05:20:19.526945 sshd[5662]: Connection closed by 10.0.0.1 port 57766 Sep 10 05:20:19.527541 sshd-session[5659]: pam_unix(sshd:session): session closed for user core Sep 10 05:20:19.533006 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:57766.service: Deactivated successfully. Sep 10 05:20:19.535158 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 05:20:19.536060 systemd-logind[1563]: Session 17 logged out. Waiting for processes to exit. Sep 10 05:20:19.537439 systemd-logind[1563]: Removed session 17. 
Sep 10 05:20:21.060756 containerd[1583]: time="2025-09-10T05:20:21.060707245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:21.061635 containerd[1583]: time="2025-09-10T05:20:21.061613915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 10 05:20:21.063166 containerd[1583]: time="2025-09-10T05:20:21.063127691Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:21.065150 containerd[1583]: time="2025-09-10T05:20:21.065121757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 05:20:21.065787 containerd[1583]: time="2025-09-10T05:20:21.065742388Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.317858443s" Sep 10 05:20:21.065824 containerd[1583]: time="2025-09-10T05:20:21.065788947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 10 05:20:21.068480 containerd[1583]: time="2025-09-10T05:20:21.068452169Z" level=info msg="CreateContainer within sandbox \"1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 10 05:20:21.078232 containerd[1583]: time="2025-09-10T05:20:21.077662535Z" level=info msg="Container 38e9a9201d1321bd5191d29a52b57ee77bf13036fb82778ffdbccf06871d0c42: CDI devices from CRI Config.CDIDevices: []" Sep 10 05:20:21.089504 containerd[1583]: time="2025-09-10T05:20:21.089432733Z" level=info msg="CreateContainer within sandbox \"1c7ba80d9454dee2f715d5cf6587bf2bdef758b8394599632b2a27fd01e191cf\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"38e9a9201d1321bd5191d29a52b57ee77bf13036fb82778ffdbccf06871d0c42\"" Sep 10 05:20:21.089987 containerd[1583]: time="2025-09-10T05:20:21.089967540Z" level=info msg="StartContainer for \"38e9a9201d1321bd5191d29a52b57ee77bf13036fb82778ffdbccf06871d0c42\"" Sep 10 05:20:21.091516 containerd[1583]: time="2025-09-10T05:20:21.091337910Z" level=info msg="connecting to shim 38e9a9201d1321bd5191d29a52b57ee77bf13036fb82778ffdbccf06871d0c42" address="unix:///run/containerd/s/676741d46b3b5a0558130cae4d8bbf845bad565ac46ef6db6f954e8efa256372" protocol=ttrpc version=3 Sep 10 05:20:21.117688 systemd[1]: Started cri-containerd-38e9a9201d1321bd5191d29a52b57ee77bf13036fb82778ffdbccf06871d0c42.scope - libcontainer container 38e9a9201d1321bd5191d29a52b57ee77bf13036fb82778ffdbccf06871d0c42. 
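Every containerd entry in this journal carries the same logfmt-style payload: time="…" level=… msg="…". A small illustrative Go sketch (not containerd code; the sample line is shortened from the "stop pulling image …node-driver-registrar" entry above, and the simple pattern deliberately ignores msg values that contain escaped quotes) that splits one such payload into its three fields:

package main

import (
	"fmt"
	"regexp"
)

// Payload shortened from a containerd entry above (timestamp abbreviated).
const line = `time="2025-09-10T05:20:21Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"`

var logfmt = regexp.MustCompile(`^time="([^"]+)" level=(\S+) msg="(.*)"$`)

func main() {
	m := logfmt.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("time: ", m[1])
	fmt.Println("level:", m[2])
	fmt.Println("msg:  ", m[3])
}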
Sep 10 05:20:21.171445 containerd[1583]: time="2025-09-10T05:20:21.171228969Z" level=info msg="StartContainer for \"38e9a9201d1321bd5191d29a52b57ee77bf13036fb82778ffdbccf06871d0c42\" returns successfully" Sep 10 05:20:21.517374 kubelet[2726]: I0910 05:20:21.517290 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5d74897c75-44fkv" podStartSLOduration=10.805426576 podStartE2EDuration="33.517270314s" podCreationTimestamp="2025-09-10 05:19:48 +0000 UTC" firstStartedPulling="2025-09-10 05:19:56.035744258 +0000 UTC m=+57.104552300" lastFinishedPulling="2025-09-10 05:20:18.747587996 +0000 UTC m=+79.816396038" observedRunningTime="2025-09-10 05:20:19.512828279 +0000 UTC m=+80.581636321" watchObservedRunningTime="2025-09-10 05:20:21.517270314 +0000 UTC m=+82.586078356" Sep 10 05:20:21.518249 kubelet[2726]: I0910 05:20:21.517901 2726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6vhs9" podStartSLOduration=44.774475216 podStartE2EDuration="1m4.517892057s" podCreationTimestamp="2025-09-10 05:19:17 +0000 UTC" firstStartedPulling="2025-09-10 05:20:01.322927904 +0000 UTC m=+62.391735946" lastFinishedPulling="2025-09-10 05:20:21.066344745 +0000 UTC m=+82.135152787" observedRunningTime="2025-09-10 05:20:21.517068006 +0000 UTC m=+82.585876058" watchObservedRunningTime="2025-09-10 05:20:21.517892057 +0000 UTC m=+82.586700099" Sep 10 05:20:22.133832 kubelet[2726]: I0910 05:20:22.133786 2726 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 10 05:20:22.133832 kubelet[2726]: I0910 05:20:22.133827 2726 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 10 05:20:24.049779 kubelet[2726]: E0910 05:20:24.049714 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:20:24.539248 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:51754.service - OpenSSH per-connection server daemon (10.0.0.1:51754). Sep 10 05:20:24.604215 sshd[5716]: Accepted publickey for core from 10.0.0.1 port 51754 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:20:24.605888 sshd-session[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:20:24.610217 systemd-logind[1563]: New session 18 of user core. Sep 10 05:20:24.617625 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 10 05:20:24.783380 sshd[5719]: Connection closed by 10.0.0.1 port 51754 Sep 10 05:20:24.783726 sshd-session[5716]: pam_unix(sshd:session): session closed for user core Sep 10 05:20:24.787706 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:51754.service: Deactivated successfully. Sep 10 05:20:24.790051 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 05:20:24.790914 systemd-logind[1563]: Session 18 logged out. Waiting for processes to exit. Sep 10 05:20:24.792034 systemd-logind[1563]: Removed session 18. 
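The pod_startup_latency_tracker entries above report both podStartSLOduration and podStartE2EDuration. For whisker-5d74897c75-44fkv the figures are consistent with the SLO number being the end-to-end duration minus the image-pull window (lastFinishedPulling − firstStartedPulling); a short check of that arithmetic, with the values copied from the entry above (the interpretation of the metric is an assumption, the numbers are not):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// firstStartedPulling / lastFinishedPulling for whisker-5d74897c75-44fkv.
	first, _ := time.Parse(layout, "2025-09-10 05:19:56.035744258 +0000 UTC")
	last, _ := time.Parse(layout, "2025-09-10 05:20:18.747587996 +0000 UTC")

	e2e := 33517270314 * time.Nanosecond // podStartE2EDuration = 33.517270314s

	pull := last.Sub(first)
	fmt.Println("image-pull window:", pull)     // 22.711843738s
	fmt.Println("e2e minus pull:   ", e2e-pull) // 10.805426576s, the reported podStartSLOduration
}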
Sep 10 05:20:26.049885 kubelet[2726]: E0910 05:20:26.049833 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 05:20:29.796323 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:51764.service - OpenSSH per-connection server daemon (10.0.0.1:51764). Sep 10 05:20:29.850537 sshd[5733]: Accepted publickey for core from 10.0.0.1 port 51764 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:20:29.851984 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:20:29.855933 systemd-logind[1563]: New session 19 of user core. Sep 10 05:20:29.866646 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 10 05:20:29.979695 sshd[5736]: Connection closed by 10.0.0.1 port 51764 Sep 10 05:20:29.980063 sshd-session[5733]: pam_unix(sshd:session): session closed for user core Sep 10 05:20:29.995365 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:51764.service: Deactivated successfully. Sep 10 05:20:29.997427 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 05:20:29.998596 systemd-logind[1563]: Session 19 logged out. Waiting for processes to exit. Sep 10 05:20:30.001716 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:52306.service - OpenSSH per-connection server daemon (10.0.0.1:52306). Sep 10 05:20:30.002556 systemd-logind[1563]: Removed session 19. Sep 10 05:20:30.058556 sshd[5756]: Accepted publickey for core from 10.0.0.1 port 52306 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:20:30.059953 sshd-session[5756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:20:30.064617 systemd-logind[1563]: New session 20 of user core. Sep 10 05:20:30.073634 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 10 05:20:30.383952 sshd[5759]: Connection closed by 10.0.0.1 port 52306 Sep 10 05:20:30.385830 sshd-session[5756]: pam_unix(sshd:session): session closed for user core Sep 10 05:20:30.396512 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:52306.service: Deactivated successfully. Sep 10 05:20:30.398399 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 05:20:30.399158 systemd-logind[1563]: Session 20 logged out. Waiting for processes to exit. Sep 10 05:20:30.402246 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:52320.service - OpenSSH per-connection server daemon (10.0.0.1:52320). Sep 10 05:20:30.403324 systemd-logind[1563]: Removed session 20. Sep 10 05:20:30.473208 sshd[5771]: Accepted publickey for core from 10.0.0.1 port 52320 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:20:30.474866 sshd-session[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:20:30.479611 systemd-logind[1563]: New session 21 of user core. Sep 10 05:20:30.491665 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 10 05:20:31.993718 containerd[1583]: time="2025-09-10T05:20:31.993647723Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9951f972000d66c62c32f1a6521145aadc3eabc97007a37c88e44866e7ddef92\" id:\"d5a2b3010b489ff061c88ffde48baa0760954d2a304534ec1dc15905dff30a33\" pid:5820 exited_at:{seconds:1757481631 nanos:993073176}" Sep 10 05:20:32.456530 sshd[5774]: Connection closed by 10.0.0.1 port 52320 Sep 10 05:20:32.457016 sshd-session[5771]: pam_unix(sshd:session): session closed for user core Sep 10 05:20:32.475138 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:52320.service: Deactivated successfully. Sep 10 05:20:32.478811 systemd[1]: session-21.scope: Deactivated successfully. Sep 10 05:20:32.480029 systemd[1]: session-21.scope: Consumed 650ms CPU time, 73M memory peak. Sep 10 05:20:32.483635 systemd-logind[1563]: Session 21 logged out. Waiting for processes to exit. Sep 10 05:20:32.488962 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:52334.service - OpenSSH per-connection server daemon (10.0.0.1:52334). Sep 10 05:20:32.492843 systemd-logind[1563]: Removed session 21. Sep 10 05:20:32.545408 sshd[5840]: Accepted publickey for core from 10.0.0.1 port 52334 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:20:32.547477 sshd-session[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:20:32.552679 systemd-logind[1563]: New session 22 of user core. Sep 10 05:20:32.562707 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 10 05:20:32.924196 sshd[5846]: Connection closed by 10.0.0.1 port 52334 Sep 10 05:20:32.924555 sshd-session[5840]: pam_unix(sshd:session): session closed for user core Sep 10 05:20:32.934884 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:52334.service: Deactivated successfully. Sep 10 05:20:32.937128 systemd[1]: session-22.scope: Deactivated successfully. Sep 10 05:20:32.938013 systemd-logind[1563]: Session 22 logged out. Waiting for processes to exit. Sep 10 05:20:32.941071 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:52348.service - OpenSSH per-connection server daemon (10.0.0.1:52348). Sep 10 05:20:32.941853 systemd-logind[1563]: Removed session 22. Sep 10 05:20:32.984430 sshd[5858]: Accepted publickey for core from 10.0.0.1 port 52348 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA Sep 10 05:20:32.986178 sshd-session[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 05:20:32.990841 systemd-logind[1563]: New session 23 of user core. Sep 10 05:20:32.997627 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 10 05:20:33.108645 sshd[5861]: Connection closed by 10.0.0.1 port 52348 Sep 10 05:20:33.108979 sshd-session[5858]: pam_unix(sshd:session): session closed for user core Sep 10 05:20:33.113760 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:52348.service: Deactivated successfully. Sep 10 05:20:33.115873 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 05:20:33.116764 systemd-logind[1563]: Session 23 logged out. Waiting for processes to exit. Sep 10 05:20:33.118037 systemd-logind[1563]: Removed session 23. 
Sep 10 05:20:34.514220 containerd[1583]: time="2025-09-10T05:20:34.514171731Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60b87c8755f89f1e6810f23194ddb3ec48ad1fd6a61ec238cea3eca94bacfb54\" id:\"3b6d6fe4b57cff291ee569a5969da8114cb212f2520ccb42aed9bac66982ecbe\" pid:5885 exited_at:{seconds:1757481634 nanos:513780134}"
Sep 10 05:20:34.553737 containerd[1583]: time="2025-09-10T05:20:34.553684992Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1955ba91c961c594fe0ac15bc0a370f4e2f6cf8e7a5d7ca0286350329e80f90\" id:\"960b3cb736995f5988f18c861838f95ef5c672c16fb0778ca8599a868764b917\" pid:5903 exited_at:{seconds:1757481634 nanos:553323874}"
Sep 10 05:20:38.120953 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:52358.service - OpenSSH per-connection server daemon (10.0.0.1:52358).
Sep 10 05:20:38.178414 sshd[5924]: Accepted publickey for core from 10.0.0.1 port 52358 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA
Sep 10 05:20:38.180140 sshd-session[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 05:20:38.185427 systemd-logind[1563]: New session 24 of user core.
Sep 10 05:20:38.197623 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 10 05:20:38.317300 sshd[5927]: Connection closed by 10.0.0.1 port 52358
Sep 10 05:20:38.317810 sshd-session[5924]: pam_unix(sshd:session): session closed for user core
Sep 10 05:20:38.323669 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:52358.service: Deactivated successfully.
Sep 10 05:20:38.325728 systemd[1]: session-24.scope: Deactivated successfully.
Sep 10 05:20:38.326674 systemd-logind[1563]: Session 24 logged out. Waiting for processes to exit.
Sep 10 05:20:38.327976 systemd-logind[1563]: Removed session 24.
Sep 10 05:20:43.339857 systemd[1]: Started sshd@24-10.0.0.13:22-10.0.0.1:43502.service - OpenSSH per-connection server daemon (10.0.0.1:43502).
Sep 10 05:20:43.422733 sshd[5943]: Accepted publickey for core from 10.0.0.1 port 43502 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA
Sep 10 05:20:43.425522 sshd-session[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 05:20:43.438577 systemd-logind[1563]: New session 25 of user core.
Sep 10 05:20:43.442735 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 10 05:20:43.619166 sshd[5946]: Connection closed by 10.0.0.1 port 43502
Sep 10 05:20:43.619674 sshd-session[5943]: pam_unix(sshd:session): session closed for user core
Sep 10 05:20:43.623158 systemd-logind[1563]: Session 25 logged out. Waiting for processes to exit.
Sep 10 05:20:43.623359 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:43502.service: Deactivated successfully.
Sep 10 05:20:43.625111 systemd[1]: session-25.scope: Deactivated successfully.
Sep 10 05:20:43.626749 systemd-logind[1563]: Removed session 25.
Sep 10 05:20:48.635861 systemd[1]: Started sshd@25-10.0.0.13:22-10.0.0.1:43518.service - OpenSSH per-connection server daemon (10.0.0.1:43518).
Sep 10 05:20:48.689331 sshd[5961]: Accepted publickey for core from 10.0.0.1 port 43518 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA
Sep 10 05:20:48.690680 sshd-session[5961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 05:20:48.694975 systemd-logind[1563]: New session 26 of user core.
Sep 10 05:20:48.707629 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 10 05:20:48.850078 sshd[5964]: Connection closed by 10.0.0.1 port 43518
Sep 10 05:20:48.850430 sshd-session[5961]: pam_unix(sshd:session): session closed for user core
Sep 10 05:20:48.854987 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:43518.service: Deactivated successfully.
Sep 10 05:20:48.857264 systemd[1]: session-26.scope: Deactivated successfully.
Sep 10 05:20:48.858066 systemd-logind[1563]: Session 26 logged out. Waiting for processes to exit.
Sep 10 05:20:48.860050 systemd-logind[1563]: Removed session 26.
Sep 10 05:20:50.049373 kubelet[2726]: E0910 05:20:50.049320 2726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 05:20:53.862752 systemd[1]: Started sshd@26-10.0.0.13:22-10.0.0.1:44844.service - OpenSSH per-connection server daemon (10.0.0.1:44844).
Sep 10 05:20:53.935519 sshd[5977]: Accepted publickey for core from 10.0.0.1 port 44844 ssh2: RSA SHA256:xFt+dOmyy2YR8o+P2dynd8JL5xda9QRs1QGDAPqy5RA
Sep 10 05:20:53.939650 sshd-session[5977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 05:20:53.954688 systemd-logind[1563]: New session 27 of user core.
Sep 10 05:20:53.960733 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 10 05:20:54.209055 sshd[5980]: Connection closed by 10.0.0.1 port 44844
Sep 10 05:20:54.209287 sshd-session[5977]: pam_unix(sshd:session): session closed for user core
Sep 10 05:20:54.216148 systemd[1]: sshd@26-10.0.0.13:22-10.0.0.1:44844.service: Deactivated successfully.
Sep 10 05:20:54.220037 systemd[1]: session-27.scope: Deactivated successfully.
Sep 10 05:20:54.222577 systemd-logind[1563]: Session 27 logged out. Waiting for processes to exit.
Sep 10 05:20:54.223768 systemd-logind[1563]: Removed session 27.