Sep 13 00:13:51.024521 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:13:51.024543 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:13:51.024554 kernel: BIOS-provided physical RAM map:
Sep 13 00:13:51.024560 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:13:51.024567 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 13 00:13:51.024573 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 13 00:13:51.024580 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 13 00:13:51.024586 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 13 00:13:51.024593 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 13 00:13:51.024599 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 13 00:13:51.024608 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Sep 13 00:13:51.024614 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Sep 13 00:13:51.024625 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Sep 13 00:13:51.024632 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Sep 13 00:13:51.024643 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 13 00:13:51.024650 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 13 00:13:51.024659 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 13 00:13:51.024666 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 13 00:13:51.024673 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 13 00:13:51.024679 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 00:13:51.024686 kernel: NX (Execute Disable) protection: active
Sep 13 00:13:51.024693 kernel: APIC: Static calls initialized
Sep 13 00:13:51.024700 kernel: efi: EFI v2.7 by EDK II
Sep 13 00:13:51.024706 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Sep 13 00:13:51.024716 kernel: SMBIOS 2.8 present.
Sep 13 00:13:51.024723 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Sep 13 00:13:51.024730 kernel: Hypervisor detected: KVM
Sep 13 00:13:51.024743 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:13:51.024752 kernel: kvm-clock: using sched offset of 5417405693 cycles
Sep 13 00:13:51.024760 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:13:51.024767 kernel: tsc: Detected 2794.748 MHz processor
Sep 13 00:13:51.024774 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:13:51.024782 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:13:51.024789 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Sep 13 00:13:51.024796 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 13 00:13:51.024803 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:13:51.024813 kernel: Using GB pages for direct mapping
Sep 13 00:13:51.024820 kernel: Secure boot disabled
Sep 13 00:13:51.024827 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:13:51.024835 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 13 00:13:51.024845 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 13 00:13:51.024853 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:13:51.024860 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:13:51.024871 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 13 00:13:51.024886 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:13:51.024899 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:13:51.024906 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:13:51.024914 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:13:51.024921 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 13 00:13:51.024928 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 13 00:13:51.024939 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 13 00:13:51.024946 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 13 00:13:51.024954 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 13 00:13:51.024962 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 13 00:13:51.024969 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 13 00:13:51.024976 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 13 00:13:51.024983 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 13 00:13:51.024991 kernel: No NUMA configuration found
Sep 13 00:13:51.025001 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Sep 13 00:13:51.025011 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Sep 13 00:13:51.025019 kernel: Zone ranges:
Sep 13 00:13:51.025050 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:13:51.025058 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Sep 13 00:13:51.025065 kernel: Normal empty
Sep 13 00:13:51.025073 kernel: Movable zone start for each node
Sep 13 00:13:51.025080 kernel: Early memory node ranges
Sep 13 00:13:51.025089 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:13:51.025097 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 13 00:13:51.025108 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 13 00:13:51.025115 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Sep 13 00:13:51.025122 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Sep 13 00:13:51.025129 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Sep 13 00:13:51.025141 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Sep 13 00:13:51.025149 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:13:51.025163 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:13:51.025176 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 13 00:13:51.025183 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:13:51.025190 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Sep 13 00:13:51.025207 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 13 00:13:51.025218 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Sep 13 00:13:51.025237 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:13:51.025245 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:13:51.025261 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:13:51.025279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:13:51.025295 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:13:51.025302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:13:51.025315 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:13:51.025330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:13:51.025338 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:13:51.025345 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:13:51.025353 kernel: TSC deadline timer available
Sep 13 00:13:51.025360 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 13 00:13:51.025367 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:13:51.025374 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 00:13:51.025381 kernel: kvm-guest: setup PV sched yield
Sep 13 00:13:51.025388 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 13 00:13:51.025399 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:13:51.025406 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:13:51.025419 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 13 00:13:51.025427 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 13 00:13:51.025434 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 13 00:13:51.025444 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 00:13:51.025453 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:13:51.025460 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:13:51.025469 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:13:51.025483 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:13:51.025490 kernel: random: crng init done
Sep 13 00:13:51.025498 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:13:51.025505 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:13:51.025512 kernel: Fallback order for Node 0: 0
Sep 13 00:13:51.025519 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Sep 13 00:13:51.025527 kernel: Policy zone: DMA32
Sep 13 00:13:51.025534 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:13:51.025545 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 166140K reserved, 0K cma-reserved)
Sep 13 00:13:51.025552 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:13:51.025560 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:13:51.025567 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:13:51.025574 kernel: Dynamic Preempt: voluntary
Sep 13 00:13:51.025589 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:13:51.025600 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:13:51.025608 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:13:51.025616 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:13:51.025623 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:13:51.025631 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:13:51.025638 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:13:51.025649 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:13:51.025656 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 00:13:51.025666 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:13:51.025674 kernel: Console: colour dummy device 80x25
Sep 13 00:13:51.025682 kernel: printk: console [ttyS0] enabled
Sep 13 00:13:51.025692 kernel: ACPI: Core revision 20230628
Sep 13 00:13:51.025700 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:13:51.025707 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:13:51.025715 kernel: x2apic enabled
Sep 13 00:13:51.025723 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:13:51.025730 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 13 00:13:51.025738 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 13 00:13:51.025746 kernel: kvm-guest: setup PV IPIs
Sep 13 00:13:51.025753 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:13:51.025764 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 13 00:13:51.025771 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 13 00:13:51.025779 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:13:51.025786 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:13:51.025794 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:13:51.025802 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:13:51.025809 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:13:51.025817 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:13:51.025824 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:13:51.025835 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:13:51.025843 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:13:51.025850 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:13:51.025858 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:13:51.025868 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 13 00:13:51.025877 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 13 00:13:51.025892 kernel: active return thunk: srso_return_thunk
Sep 13 00:13:51.025899 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 13 00:13:51.025910 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:13:51.025917 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:13:51.025925 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:13:51.025932 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:13:51.025941 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 00:13:51.025948 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:13:51.025956 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:13:51.025964 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:13:51.025971 kernel: landlock: Up and running.
Sep 13 00:13:51.025981 kernel: SELinux: Initializing.
Sep 13 00:13:51.025989 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:13:51.025996 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:13:51.026004 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:13:51.026012 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:13:51.026019 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:13:51.026056 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:13:51.026064 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:13:51.026075 kernel: ... version: 0
Sep 13 00:13:51.026082 kernel: ... bit width: 48
Sep 13 00:13:51.026090 kernel: ... generic registers: 6
Sep 13 00:13:51.026097 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:13:51.026114 kernel: ... max period: 00007fffffffffff
Sep 13 00:13:51.026124 kernel: ... fixed-purpose events: 0
Sep 13 00:13:51.026134 kernel: ... event mask: 000000000000003f
Sep 13 00:13:51.026145 kernel: signal: max sigframe size: 1776
Sep 13 00:13:51.026154 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:13:51.026165 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:13:51.026180 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:13:51.026194 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:13:51.026205 kernel: .... node #0, CPUs: #1 #2 #3
Sep 13 00:13:51.026212 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:13:51.026222 kernel: smpboot: Max logical packages: 1
Sep 13 00:13:51.026230 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 13 00:13:51.026238 kernel: devtmpfs: initialized
Sep 13 00:13:51.026245 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:13:51.026253 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 13 00:13:51.026264 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 13 00:13:51.026272 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Sep 13 00:13:51.026280 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 13 00:13:51.026288 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 13 00:13:51.026305 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:13:51.026318 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:13:51.026335 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:13:51.026352 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:13:51.026360 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:13:51.026372 kernel: audit: type=2000 audit(1757722430.460:1): state=initialized audit_enabled=0 res=1
Sep 13 00:13:51.026382 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:13:51.026389 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:13:51.026397 kernel: cpuidle: using governor menu
Sep 13 00:13:51.026404 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:13:51.026412 kernel: dca service started, version 1.12.1
Sep 13 00:13:51.026432 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 13 00:13:51.026441 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 13 00:13:51.026448 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:13:51.026470 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:13:51.026488 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:13:51.026510 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:13:51.026526 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:13:51.026534 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:13:51.026558 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:13:51.026599 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:13:51.026615 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:13:51.026632 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:13:51.027511 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:13:51.027519 kernel: ACPI: Interpreter enabled
Sep 13 00:13:51.027546 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:13:51.027560 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:13:51.027568 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:13:51.027578 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:13:51.027585 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:13:51.027593 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:13:51.028806 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:13:51.030217 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:13:51.030370 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:13:51.030381 kernel: PCI host bridge to bus 0000:00
Sep 13 00:13:51.030544 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:13:51.030668 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:13:51.030791 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:13:51.030929 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 13 00:13:51.031089 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:13:51.031216 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Sep 13 00:13:51.031341 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:13:51.031563 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 13 00:13:51.031735 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 13 00:13:51.031876 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 13 00:13:51.032603 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 13 00:13:51.032738 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 13 00:13:51.032870 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 13 00:13:51.033015 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:13:51.033215 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:13:51.033365 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 13 00:13:51.033537 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 13 00:13:51.033703 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Sep 13 00:13:51.033899 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:13:51.034135 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 13 00:13:51.034331 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 13 00:13:51.034558 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Sep 13 00:13:51.034967 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:13:51.035155 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 13 00:13:51.035286 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 13 00:13:51.035420 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Sep 13 00:13:51.035549 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 13 00:13:51.035703 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 13 00:13:51.035837 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:13:51.036065 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 13 00:13:51.036228 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 13 00:13:51.036360 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 13 00:13:51.036565 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 13 00:13:51.036700 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 13 00:13:51.036715 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:13:51.036725 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:13:51.036733 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:13:51.036756 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:13:51.036764 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:13:51.036776 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:13:51.036790 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:13:51.036802 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:13:51.036815 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:13:51.036823 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:13:51.036830 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:13:51.036846 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:13:51.036863 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:13:51.036876 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:13:51.036892 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:13:51.036899 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:13:51.036907 kernel: iommu: Default domain type: Translated
Sep 13 00:13:51.036915 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:13:51.036926 kernel: efivars: Registered efivars operations
Sep 13 00:13:51.036935 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:13:51.036944 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:13:51.036955 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 13 00:13:51.036962 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Sep 13 00:13:51.036970 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Sep 13 00:13:51.036977 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Sep 13 00:13:51.038207 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:13:51.038392 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:13:51.038525 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:13:51.038535 kernel: vgaarb: loaded
Sep 13 00:13:51.038543 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:13:51.038561 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:13:51.038569 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:13:51.038577 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:13:51.038585 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:13:51.038593 kernel: pnp: PnP ACPI init
Sep 13 00:13:51.038753 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 00:13:51.038764 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 00:13:51.038772 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:13:51.038784 kernel: NET: Registered PF_INET protocol family
Sep 13 00:13:51.038792 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:13:51.038800 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:13:51.038808 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:13:51.038816 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:13:51.038824 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 00:13:51.038832 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:13:51.038841 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:13:51.038851 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:13:51.038864 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:13:51.038872 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:13:51.039020 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 13 00:13:51.039187 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 13 00:13:51.039354 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:13:51.039483 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:13:51.039604 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:13:51.039723 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 13 00:13:51.039850 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:13:51.039983 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Sep 13 00:13:51.039994 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:13:51.040002 kernel: Initialise system trusted keyrings
Sep 13 00:13:51.040009 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:13:51.040017 kernel: Key type asymmetric registered
Sep 13 00:13:51.040078 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:13:51.040090 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:13:51.040103 kernel: io scheduler mq-deadline registered
Sep 13 00:13:51.040111 kernel: io scheduler kyber registered
Sep 13 00:13:51.040119 kernel: io scheduler bfq registered
Sep 13 00:13:51.040127 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:13:51.040136 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:13:51.040144 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:13:51.040154 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 00:13:51.040165 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:13:51.040175 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:13:51.040189 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:13:51.040201 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:13:51.040208 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:13:51.040216 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:13:51.040397 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 00:13:51.040527 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 00:13:51.040653 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:13:50 UTC (1757722430)
Sep 13 00:13:51.040774 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 00:13:51.040790 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 13 00:13:51.040798 kernel: efifb: probing for efifb
Sep 13 00:13:51.040805 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Sep 13 00:13:51.040813 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Sep 13 00:13:51.040821 kernel: efifb: scrolling: redraw
Sep 13 00:13:51.040828 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Sep 13 00:13:51.040836 kernel: Console: switching to colour frame buffer device 100x37
Sep 13 00:13:51.040865 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:13:51.040875 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:13:51.040894 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 13 00:13:51.040902 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:13:51.040909 kernel: Segment Routing with IPv6
Sep 13 00:13:51.040917 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:13:51.040925 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:13:51.040933 kernel: Key type dns_resolver registered
Sep 13 00:13:51.040941 kernel: IPI shorthand broadcast: enabled
Sep 13 00:13:51.040950 kernel: sched_clock: Marking stable (1252002407, 130692538)->(1501785935, -119090990)
Sep 13 00:13:51.040958 kernel: registered taskstats version 1
Sep 13 00:13:51.040969 kernel: Loading compiled-in X.509 certificates
Sep 13 00:13:51.040977 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:13:51.040985 kernel: Key type .fscrypt registered
Sep 13 00:13:51.040993 kernel: Key type fscrypt-provisioning registered
Sep 13 00:13:51.041001 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:13:51.041009 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:13:51.041017 kernel: ima: No architecture policies found
Sep 13 00:13:51.041025 kernel: clk: Disabling unused clocks
Sep 13 00:13:51.041047 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:13:51.041059 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:13:51.041067 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:13:51.041074 kernel: Run /init as init process
Sep 13 00:13:51.041082 kernel: with arguments:
Sep 13 00:13:51.041090 kernel: /init
Sep 13 00:13:51.041098 kernel: with environment:
Sep 13 00:13:51.041105 kernel: HOME=/
Sep 13 00:13:51.041113 kernel: TERM=linux
Sep 13 00:13:51.041121 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:13:51.041134 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:13:51.041144 systemd[1]: Detected virtualization kvm.
Sep 13 00:13:51.041153 systemd[1]: Detected architecture x86-64.
Sep 13 00:13:51.041161 systemd[1]: Running in initrd.
Sep 13 00:13:51.041175 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:13:51.041183 systemd[1]: Hostname set to .
Sep 13 00:13:51.041192 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:13:51.041201 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:13:51.041209 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:13:51.041217 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:13:51.041227 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:13:51.041236 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:13:51.041247 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:13:51.041256 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:13:51.041266 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:13:51.041274 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:13:51.041283 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:13:51.041291 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:13:51.041300 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:13:51.041311 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:13:51.041320 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:13:51.041328 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:13:51.041337 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:13:51.041345 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:13:51.041354 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:13:51.041362 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:13:51.041370 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:13:51.041382 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:13:51.041390 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:13:51.041399 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:13:51.041407 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:13:51.041416 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:13:51.041424 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:13:51.041432 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:13:51.041441 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:13:51.041449 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:13:51.041461 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:13:51.041469 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:13:51.041478 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:13:51.041487 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:13:51.041515 systemd-journald[192]: Collecting audit messages is disabled.
Sep 13 00:13:51.041538 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:13:51.041547 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:13:51.041556 systemd-journald[192]: Journal started
Sep 13 00:13:51.041577 systemd-journald[192]: Runtime Journal (/run/log/journal/1de8370ce96f44dab2357199c6b906cd) is 6.0M, max 48.3M, 42.2M free.
Sep 13 00:13:51.036100 systemd-modules-load[193]: Inserted module 'overlay'
Sep 13 00:13:51.043664 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:13:51.046584 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:13:51.058329 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:13:51.061154 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:13:51.064284 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:13:51.072098 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:13:51.074875 systemd-modules-load[193]: Inserted module 'br_netfilter'
Sep 13 00:13:51.075282 kernel: Bridge firewalling registered
Sep 13 00:13:51.076734 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:13:51.079153 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:13:51.081374 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:13:51.093354 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:13:51.097468 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:13:51.100308 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:13:51.102485 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:13:51.107415 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:13:51.120495 dracut-cmdline[227]: dracut-dracut-053
Sep 13 00:13:51.124862 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:13:51.148691 systemd-resolved[229]: Positive Trust Anchors:
Sep 13 00:13:51.148709 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:13:51.148740 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:13:51.151794 systemd-resolved[229]: Defaulting to hostname 'linux'.
Sep 13 00:13:51.153193 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:13:51.158655 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:13:51.233091 kernel: SCSI subsystem initialized
Sep 13 00:13:51.244073 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:13:51.256085 kernel: iscsi: registered transport (tcp)
Sep 13 00:13:51.278075 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:13:51.278152 kernel: QLogic iSCSI HBA Driver
Sep 13 00:13:51.338064 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:13:51.352223 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:13:51.384272 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:13:51.384338 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:13:51.385975 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:13:51.433098 kernel: raid6: avx2x4 gen() 27218 MB/s
Sep 13 00:13:51.450075 kernel: raid6: avx2x2 gen() 26541 MB/s
Sep 13 00:13:51.467155 kernel: raid6: avx2x1 gen() 24281 MB/s
Sep 13 00:13:51.467248 kernel: raid6: using algorithm avx2x4 gen() 27218 MB/s
Sep 13 00:13:51.485162 kernel: raid6: .... xor() 6925 MB/s, rmw enabled
Sep 13 00:13:51.485268 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:13:51.507090 kernel: xor: automatically using best checksumming function avx
Sep 13 00:13:51.709091 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:13:51.724958 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:13:51.734208 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:13:51.747260 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Sep 13 00:13:51.752212 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:13:51.764217 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:13:51.779976 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Sep 13 00:13:51.817542 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:13:51.838263 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:13:51.911787 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:13:51.922327 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:13:51.941963 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:13:51.945553 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:13:51.948052 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:13:51.952146 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:13:51.957112 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 13 00:13:51.959298 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 00:13:51.965834 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:13:51.971158 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:13:51.971190 kernel: GPT:9289727 != 19775487
Sep 13 00:13:51.971204 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:13:51.971237 kernel: GPT:9289727 != 19775487
Sep 13 00:13:51.971262 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:13:51.971277 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:13:51.971290 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:13:51.980820 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:13:51.989077 kernel: libata version 3.00 loaded.
Sep 13 00:13:51.993290 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:13:51.993610 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:13:52.006333 kernel: ahci 0000:00:1f.2: version 3.0
Sep 13 00:13:52.014018 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 13 00:13:52.014059 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 13 00:13:52.017247 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 13 00:13:52.017408 kernel: scsi host0: ahci
Sep 13 00:13:52.017575 kernel: scsi host1: ahci
Sep 13 00:13:52.017738 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:13:52.017750 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:13:51.996090 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:13:51.998106 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:13:51.998284 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:13:52.000137 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:13:52.025271 kernel: scsi host2: ahci
Sep 13 00:13:52.025498 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (459)
Sep 13 00:13:52.012318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:13:52.036346 kernel: scsi host3: ahci
Sep 13 00:13:52.036600 kernel: scsi host4: ahci
Sep 13 00:13:52.036206 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:13:52.038121 kernel: scsi host5: ahci
Sep 13 00:13:52.036897 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:13:52.042515 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Sep 13 00:13:52.042538 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Sep 13 00:13:52.042549 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Sep 13 00:13:52.043205 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Sep 13 00:13:52.045542 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (468)
Sep 13 00:13:52.045571 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Sep 13 00:13:52.048071 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Sep 13 00:13:52.064363 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 00:13:52.071639 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 13 00:13:52.079065 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 00:13:52.079150 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 00:13:52.087387 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:13:52.102182 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:13:52.105406 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:13:52.126231 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:13:52.128994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:13:52.154827 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:13:52.162006 disk-uuid[554]: Primary Header is updated.
Sep 13 00:13:52.162006 disk-uuid[554]: Secondary Entries is updated.
Sep 13 00:13:52.162006 disk-uuid[554]: Secondary Header is updated.
Sep 13 00:13:52.167086 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:13:52.173064 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:13:52.355706 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 13 00:13:52.356236 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 13 00:13:52.356553 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 13 00:13:52.357071 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 13 00:13:52.360151 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 13 00:13:52.360182 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 13 00:13:52.360212 kernel: ata3.00: applying bridge limits
Sep 13 00:13:52.361519 kernel: ata3.00: configured for UDMA/100
Sep 13 00:13:52.364058 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 13 00:13:52.366062 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 13 00:13:52.429477 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 13 00:13:52.430167 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:13:52.445071 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 13 00:13:53.174056 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:13:53.174395 disk-uuid[567]: The operation has completed successfully.
Sep 13 00:13:53.203587 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:13:53.203800 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:13:53.238289 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:13:53.244772 sh[594]: Success
Sep 13 00:13:53.261095 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 13 00:13:53.300904 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:13:53.325362 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:13:53.327959 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:13:53.341998 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:13:53.342065 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:13:53.342077 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:13:53.342088 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:13:53.342697 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:13:53.348624 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:13:53.351305 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:13:53.366398 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:13:53.369283 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:13:53.379974 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:13:53.380083 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:13:53.380095 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:13:53.384618 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:13:53.394915 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:13:53.396750 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:13:53.407346 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:13:53.418234 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:13:53.491208 ignition[690]: Ignition 2.19.0
Sep 13 00:13:53.491223 ignition[690]: Stage: fetch-offline
Sep 13 00:13:53.491265 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:13:53.491280 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:13:53.491366 ignition[690]: parsed url from cmdline: ""
Sep 13 00:13:53.491370 ignition[690]: no config URL provided
Sep 13 00:13:53.491375 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:13:53.491385 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:13:53.491415 ignition[690]: op(1): [started] loading QEMU firmware config module
Sep 13 00:13:53.491423 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 13 00:13:53.502175 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:13:53.509215 ignition[690]: op(1): [finished] loading QEMU firmware config module
Sep 13 00:13:53.511256 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:13:53.552530 systemd-networkd[782]: lo: Link UP
Sep 13 00:13:53.552541 systemd-networkd[782]: lo: Gained carrier
Sep 13 00:13:53.554455 systemd-networkd[782]: Enumeration completed
Sep 13 00:13:53.554650 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:13:53.554925 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:13:53.554930 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:13:53.560597 ignition[690]: parsing config with SHA512: a0e0df67a5c9d8d959551046cee7e9055fd09ae8b65d018a2ad0e16d6fd296887e2999bf1f2dbbb273e53ab806b38f146aa1371a0fd72f7844aadce78c09f6f3
Sep 13 00:13:53.556155 systemd-networkd[782]: eth0: Link UP
Sep 13 00:13:53.556159 systemd-networkd[782]: eth0: Gained carrier
Sep 13 00:13:53.556165 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:13:53.565448 systemd[1]: Reached target network.target - Network.
Sep 13 00:13:53.571659 unknown[690]: fetched base config from "system"
Sep 13 00:13:53.571677 unknown[690]: fetched user config from "qemu"
Sep 13 00:13:53.572257 ignition[690]: fetch-offline: fetch-offline passed
Sep 13 00:13:53.572334 ignition[690]: Ignition finished successfully
Sep 13 00:13:53.576131 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:13:53.579020 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:13:53.579330 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 00:13:53.585281 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:13:53.603199 ignition[788]: Ignition 2.19.0
Sep 13 00:13:53.603212 ignition[788]: Stage: kargs
Sep 13 00:13:53.603437 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:13:53.603451 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:13:53.604337 ignition[788]: kargs: kargs passed
Sep 13 00:13:53.604388 ignition[788]: Ignition finished successfully
Sep 13 00:13:53.610880 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:13:53.622243 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:13:53.681296 ignition[796]: Ignition 2.19.0
Sep 13 00:13:53.681311 ignition[796]: Stage: disks
Sep 13 00:13:53.681492 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:13:53.681511 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:13:53.682501 ignition[796]: disks: disks passed
Sep 13 00:13:53.682570 ignition[796]: Ignition finished successfully
Sep 13 00:13:53.688307 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:13:53.690580 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:13:53.691735 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:13:53.693935 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:13:53.695168 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:13:53.699160 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:13:53.710274 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:13:53.726476 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:13:53.733770 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:13:53.743147 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:13:53.755955 systemd-resolved[229]: Detected conflict on linux IN A 10.0.0.132
Sep 13 00:13:53.755987 systemd-resolved[229]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Sep 13 00:13:53.838080 kernel: EXT4-fs (vda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:13:53.838846 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:13:53.839706 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:13:53.855139 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:13:53.857510 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:13:53.857854 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:13:53.857898 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:13:53.866068 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Sep 13 00:13:53.857925 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:13:53.869665 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:13:53.869681 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:13:53.869692 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:13:53.872041 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:13:53.873665 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:13:53.888814 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:13:53.890043 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:13:53.931314 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:13:53.936695 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:13:53.941685 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:13:53.946629 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:13:54.044523 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:13:54.056119 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:13:54.058122 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:13:54.066058 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:13:54.087285 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:13:54.099203 ignition[930]: INFO : Ignition 2.19.0
Sep 13 00:13:54.099203 ignition[930]: INFO : Stage: mount
Sep 13 00:13:54.101325 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:13:54.101325 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:13:54.101325 ignition[930]: INFO : mount: mount passed
Sep 13 00:13:54.101325 ignition[930]: INFO : Ignition finished successfully
Sep 13 00:13:54.108068 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:13:54.118266 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:13:54.340569 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:13:54.358388 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:13:54.367523 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944)
Sep 13 00:13:54.367623 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:13:54.367635 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:13:54.368376 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:13:54.372075 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:13:54.373663 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:13:54.393323 ignition[961]: INFO : Ignition 2.19.0
Sep 13 00:13:54.393323 ignition[961]: INFO : Stage: files
Sep 13 00:13:54.395578 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:13:54.395578 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:13:54.395578 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:13:54.395578 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:13:54.395578 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:13:54.402611 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:13:54.402611 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:13:54.402611 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:13:54.402611 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:13:54.402611 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:13:54.402611 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:13:54.402611 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:13:54.399501 unknown[961]: wrote ssh authorized keys file for user: core
Sep 13 00:13:54.453184 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:13:55.019999 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:13:55.019999 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:13:55.019999 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:13:55.019999 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:13:55.019999 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:13:55.019999 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:13:55.019999 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:13:55.019999 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:13:55.019999 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:13:55.019999 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:13:55.082679 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:13:55.082679 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:13:55.082679 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:13:55.082679 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:13:55.082679 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:13:55.460301 systemd-networkd[782]: eth0: Gained IPv6LL
Sep 13 00:13:55.566530 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 13 00:13:56.227203 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:13:56.227203 ignition[961]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 13 00:13:56.231300 ignition[961]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:13:56.231300 ignition[961]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:13:56.231300 ignition[961]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 13 00:13:56.231300 ignition[961]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 13 00:13:56.231300 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:13:56.231300 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:13:56.231300 ignition[961]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 13 00:13:56.231300 ignition[961]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Sep 13 00:13:56.231300 ignition[961]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:13:56.231300 ignition[961]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:13:56.231300 ignition[961]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Sep 13 00:13:56.231300 ignition[961]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:13:56.274564 ignition[961]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:13:56.281266 ignition[961]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:13:56.282985 ignition[961]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:13:56.282985 ignition[961]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:13:56.282985 ignition[961]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:13:56.287337 ignition[961]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:13:56.289273 ignition[961]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:13:56.290892 ignition[961]: INFO : files: files passed
Sep 13 00:13:56.291634 ignition[961]: INFO : Ignition finished successfully
Sep 13 00:13:56.295267 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:13:56.302213 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:13:56.304887 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:13:56.309761 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:13:56.309930 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:13:56.325912 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 13 00:13:56.330922 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:13:56.330922 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:13:56.335571 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:13:56.334024 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:13:56.336452 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:13:56.344459 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:13:56.378527 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:13:56.379609 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:13:56.382357 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:13:56.384416 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:13:56.386499 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:13:56.389210 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:13:56.411435 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:13:56.422353 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:13:56.435780 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:13:56.438371 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:13:56.440934 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:13:56.442764 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:13:56.443810 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:13:56.446423 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:13:56.448577 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:13:56.450434 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:13:56.452595 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:13:56.455102 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:13:56.457550 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:13:56.459659 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:13:56.462100 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:13:56.464240 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:13:56.466409 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:13:56.468069 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:13:56.469210 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:13:56.471961 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:13:56.474350 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:13:56.476790 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:13:56.477776 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:13:56.480377 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:13:56.481420 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:13:56.483774 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:13:56.484856 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:13:56.487230 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:13:56.489235 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:13:56.494095 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:13:56.496810 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:13:56.498758 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:13:56.500647 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:13:56.501549 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:13:56.503873 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:13:56.504913 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:13:56.507307 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:13:56.508658 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:13:56.511409 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:13:56.512436 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:13:56.532462 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:13:56.535763 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:13:56.537777 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:13:56.539200 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:13:56.541865 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:13:56.542967 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:13:56.545356 ignition[1016]: INFO : Ignition 2.19.0
Sep 13 00:13:56.545356 ignition[1016]: INFO : Stage: umount
Sep 13 00:13:56.545356 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:13:56.545356 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:13:56.547538 ignition[1016]: INFO : umount: umount passed
Sep 13 00:13:56.549206 ignition[1016]: INFO : Ignition finished successfully
Sep 13 00:13:56.552583 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:13:56.553666 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:13:56.558449 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:13:56.558601 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:13:56.563489 systemd[1]: Stopped target network.target - Network.
Sep 13 00:13:56.565430 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:13:56.565532 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:13:56.568793 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:13:56.569814 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:13:56.571965 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:13:56.573010 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:13:56.575577 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:13:56.576735 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:13:56.579536 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:13:56.582182 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:13:56.585713 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:13:56.588084 systemd-networkd[782]: eth0: DHCPv6 lease lost
Sep 13 00:13:56.590527 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:13:56.591647 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:13:56.594511 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:13:56.595573 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:13:56.600392 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:13:56.600464 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:13:56.613266 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:13:56.614282 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:13:56.614365 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:13:56.616603 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:13:56.616656 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:13:56.619070 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:13:56.619123 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:13:56.621571 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:13:56.621624 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:13:56.623890 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:13:56.635835 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:13:56.635979 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:13:56.651378 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:13:56.651622 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:13:56.654171 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:13:56.654241 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:13:56.656314 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:13:56.656367 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:13:56.658498 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:13:56.658562 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:13:56.661146 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:13:56.661219 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:13:56.663848 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:13:56.663920 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:13:56.675204 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:13:56.676316 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:13:56.676379 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:13:56.678698 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 13 00:13:56.678771 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:13:56.681102 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:13:56.681165 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:13:56.682479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:13:56.682543 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:13:56.683000 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:13:56.683170 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:13:56.765962 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:13:56.766132 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:13:56.768123 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:13:56.769978 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:13:56.770047 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:13:56.785320 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:13:56.794271 systemd[1]: Switching root.
Sep 13 00:13:56.830245 systemd-journald[192]: Journal stopped
Sep 13 00:13:58.139480 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:13:58.139590 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:13:58.139605 kernel: SELinux: policy capability open_perms=1
Sep 13 00:13:58.139628 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:13:58.139640 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:13:58.139652 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:13:58.139684 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:13:58.139697 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:13:58.139708 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:13:58.139721 kernel: audit: type=1403 audit(1757722437.329:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:13:58.139734 systemd[1]: Successfully loaded SELinux policy in 42.430ms.
Sep 13 00:13:58.139762 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.892ms.
Sep 13 00:13:58.139775 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:13:58.139788 systemd[1]: Detected virtualization kvm.
Sep 13 00:13:58.139800 systemd[1]: Detected architecture x86-64.
Sep 13 00:13:58.139815 systemd[1]: Detected first boot.
Sep 13 00:13:58.139827 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:13:58.139840 zram_generator::config[1082]: No configuration found.
Sep 13 00:13:58.139862 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:13:58.139874 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:13:58.139886 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 00:13:58.139900 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:13:58.139916 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:13:58.139937 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:13:58.139954 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:13:58.139968 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:13:58.139981 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:13:58.139993 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:13:58.140009 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:13:58.140022 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:13:58.140590 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:13:58.140615 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:13:58.140628 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:13:58.140641 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:13:58.140654 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:13:58.140666 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:13:58.140690 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:13:58.140703 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:13:58.140715 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:13:58.140728 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:13:58.140744 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:13:58.140757 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:13:58.140769 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:13:58.140781 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:13:58.140793 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:13:58.140810 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:13:58.140823 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:13:58.140835 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:13:58.140847 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:13:58.140862 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:13:58.140875 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:13:58.140887 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:13:58.140900 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:13:58.140912 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:13:58.140924 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:13:58.140936 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:13:58.140948 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:13:58.140964 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:13:58.140976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:13:58.140996 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:13:58.141008 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:13:58.141020 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:13:58.141064 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:13:58.141077 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:13:58.141089 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:13:58.141101 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:13:58.141130 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:13:58.141147 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 13 00:13:58.141163 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep 13 00:13:58.141175 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:13:58.141187 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:13:58.141255 systemd-journald[1156]: Collecting audit messages is disabled.
Sep 13 00:13:58.141279 kernel: fuse: init (API version 7.39)
Sep 13 00:13:58.141296 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:13:58.141309 systemd-journald[1156]: Journal started
Sep 13 00:13:58.141332 systemd-journald[1156]: Runtime Journal (/run/log/journal/1de8370ce96f44dab2357199c6b906cd) is 6.0M, max 48.3M, 42.2M free.
Sep 13 00:13:58.146019 kernel: loop: module loaded
Sep 13 00:13:58.148657 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:13:58.154048 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:13:58.157175 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:13:58.161160 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:13:58.167405 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:13:58.168666 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:13:58.171258 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:13:58.172478 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:13:58.173779 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:13:58.175238 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:13:58.176692 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:13:58.178428 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:13:58.178710 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:13:58.180317 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:13:58.180537 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:13:58.182120 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:13:58.182340 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:13:58.183999 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:13:58.184230 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:13:58.185713 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:13:58.185927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:13:58.193882 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:13:58.195469 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:13:58.197179 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:13:58.208067 kernel: ACPI: bus type drm_connector registered
Sep 13 00:13:58.208521 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:13:58.208810 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:13:58.213890 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:13:58.223186 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:13:58.226098 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:13:58.227228 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:13:58.232271 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:13:58.250682 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:13:58.252403 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:13:58.255922 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:13:58.257489 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:13:58.261401 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:13:58.267383 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:13:58.274053 systemd-journald[1156]: Time spent on flushing to /var/log/journal/1de8370ce96f44dab2357199c6b906cd is 14.243ms for 985 entries.
Sep 13 00:13:58.274053 systemd-journald[1156]: System Journal (/var/log/journal/1de8370ce96f44dab2357199c6b906cd) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:13:58.740998 systemd-journald[1156]: Received client request to flush runtime journal.
Sep 13 00:13:58.275875 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:13:58.277282 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:13:58.299525 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:13:58.306438 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Sep 13 00:13:58.306452 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Sep 13 00:13:58.313274 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:13:58.318997 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:13:58.337219 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:13:58.348104 udevadm[1218]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:13:58.421459 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:13:58.424229 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:13:58.615653 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:13:58.647242 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:13:58.702364 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:13:58.714188 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:13:58.734070 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Sep 13 00:13:58.734085 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Sep 13 00:13:58.740651 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:13:58.743705 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:13:59.271804 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:13:59.289421 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:13:59.317964 systemd-udevd[1244]: Using default interface naming scheme 'v255'.
Sep 13 00:13:59.336905 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:13:59.349230 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:13:59.366246 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:13:59.410525 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Sep 13 00:13:59.415130 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1257)
Sep 13 00:13:59.439643 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:13:59.464095 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 13 00:13:59.472086 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:13:59.473838 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:13:59.494774 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 13 00:13:59.495229 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 13 00:13:59.495403 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 13 00:13:59.496156 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 13 00:13:59.528065 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 13 00:13:59.530523 systemd-networkd[1251]: lo: Link UP
Sep 13 00:13:59.541375 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:13:59.530731 systemd-networkd[1251]: lo: Gained carrier
Sep 13 00:13:59.532503 systemd-networkd[1251]: Enumeration completed
Sep 13 00:13:59.532943 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:13:59.532947 systemd-networkd[1251]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:13:59.535201 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:13:59.536655 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:13:59.539683 systemd-networkd[1251]: eth0: Link UP
Sep 13 00:13:59.539688 systemd-networkd[1251]: eth0: Gained carrier
Sep 13 00:13:59.539703 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:13:59.553093 systemd-networkd[1251]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:13:59.554187 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:13:59.590049 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:13:59.590450 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:13:59.593598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:13:59.644066 kernel: kvm_amd: TSC scaling supported
Sep 13 00:13:59.644158 kernel: kvm_amd: Nested Virtualization enabled
Sep 13 00:13:59.644172 kernel: kvm_amd: Nested Paging enabled
Sep 13 00:13:59.644184 kernel: kvm_amd: LBR virtualization supported
Sep 13 00:13:59.644196 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 13 00:13:59.644212 kernel: kvm_amd: Virtual GIF supported
Sep 13 00:13:59.666011 kernel: EDAC MC: Ver: 3.0.0
Sep 13 00:13:59.668800 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:13:59.701787 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:13:59.722419 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:13:59.730552 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:13:59.775495 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:13:59.777092 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:13:59.791174 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:13:59.797762 lvm[1300]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:13:59.830802 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:13:59.832360 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:13:59.833632 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:13:59.833657 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:13:59.834676 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:13:59.836772 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:13:59.849237 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:13:59.875497 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:13:59.876930 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:13:59.877994 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:13:59.880766 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:13:59.884650 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:13:59.886800 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:13:59.901378 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:14:00.010058 kernel: loop0: detected capacity change from 0 to 142488
Sep 13 00:14:00.034065 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:14:00.066073 kernel: loop1: detected capacity change from 0 to 140768
Sep 13 00:14:00.106066 kernel: loop2: detected capacity change from 0 to 221472
Sep 13 00:14:00.135059 kernel: loop3: detected capacity change from 0 to 142488
Sep 13 00:14:00.145086 kernel: loop4: detected capacity change from 0 to 140768
Sep 13 00:14:00.157070 kernel: loop5: detected capacity change from 0 to 221472
Sep 13 00:14:00.204422 (sd-merge)[1318]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 13 00:14:00.205104 (sd-merge)[1318]: Merged extensions into '/usr'.
Sep 13 00:14:00.209410 systemd[1]: Reloading requested from client PID 1308 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:14:00.209426 systemd[1]: Reloading...
Sep 13 00:14:00.275091 zram_generator::config[1349]: No configuration found.
Sep 13 00:14:00.361779 ldconfig[1305]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:14:00.420342 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:14:00.487859 systemd[1]: Reloading finished in 277 ms.
Sep 13 00:14:00.513441 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:14:00.515069 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:14:00.517561 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:14:00.519700 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:14:00.540303 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:14:00.545681 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:14:00.549077 systemd[1]: Reloading requested from client PID 1393 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:14:00.549092 systemd[1]: Reloading...
Sep 13 00:14:00.572433 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:14:00.572862 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:14:00.573939 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:14:00.574312 systemd-tmpfiles[1394]: ACLs are not supported, ignoring.
Sep 13 00:14:00.574406 systemd-tmpfiles[1394]: ACLs are not supported, ignoring.
Sep 13 00:14:00.578288 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:14:00.578302 systemd-tmpfiles[1394]: Skipping /boot
Sep 13 00:14:00.599225 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:14:00.599247 systemd-tmpfiles[1394]: Skipping /boot
Sep 13 00:14:00.615058 zram_generator::config[1420]: No configuration found.
Sep 13 00:14:00.748428 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:14:00.819671 systemd[1]: Reloading finished in 270 ms.
Sep 13 00:14:00.842732 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:14:00.863622 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:14:00.883964 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:14:00.887139 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:14:00.893242 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:14:00.901730 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:14:00.907651 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:14:00.908457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:14:00.911175 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:14:00.915396 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:14:00.920623 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:14:00.922134 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:14:00.922646 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:14:00.930671 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:14:00.930993 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:14:00.940008 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:14:00.942326 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:14:00.942565 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:14:00.944629 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:14:00.944867 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:14:00.953978 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:14:00.954261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:14:00.955784 augenrules[1499]: No rules
Sep 13 00:14:00.968577 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:14:00.975371 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:14:00.982974 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:14:00.984369 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:14:00.988476 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:14:00.991393 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:14:00.993812 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:14:00.996818 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:14:01.009870 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:14:01.012099 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:14:01.012351 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:14:01.018805 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:14:01.019130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:14:01.020874 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:14:01.021139 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:14:01.034297 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:14:01.037847 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:14:01.038134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:14:01.047371 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:14:01.052445 systemd-resolved[1472]: Positive Trust Anchors:
Sep 13 00:14:01.052464 systemd-resolved[1472]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:14:01.052497 systemd-resolved[1472]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:14:01.060907 systemd-resolved[1472]: Defaulting to hostname 'linux'.
Sep 13 00:14:01.061253 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:14:01.063909 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:14:01.067302 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:14:01.069310 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:14:01.069387 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:14:01.069414 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:14:01.069868 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:14:01.072228 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:14:01.073818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:14:01.074080 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:14:01.075868 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:14:01.076124 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:14:01.077752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:14:01.077975 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:14:01.079742 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:14:01.079965 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:14:01.087419 systemd[1]: Reached target network.target - Network.
Sep 13 00:14:01.088688 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:14:01.090203 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:14:01.090281 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:14:01.108255 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:14:01.188458 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:14:02.481454 systemd-timesyncd[1540]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 00:14:02.481467 systemd-resolved[1472]: Clock change detected. Flushing caches.
Sep 13 00:14:02.481497 systemd-timesyncd[1540]: Initial clock synchronization to Sat 2025-09-13 00:14:02.481341 UTC.
Sep 13 00:14:02.482150 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:14:02.483435 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:14:02.485074 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:14:02.486427 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:14:02.487783 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:14:02.487802 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:14:02.488781 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:14:02.490095 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:14:02.491424 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:14:02.492841 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:14:02.494716 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:14:02.497922 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:14:02.500646 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:14:02.509283 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:14:02.510538 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:14:02.511543 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:14:02.512646 systemd[1]: System is tainted: cgroupsv1
Sep 13 00:14:02.512699 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:14:02.512726 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:14:02.514132 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:14:02.516403 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:14:02.521469 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:14:02.526045 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:14:02.527559 jq[1546]: false Sep 13 00:14:02.527886 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:14:02.529546 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:14:02.533759 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:14:02.536723 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:14:02.541978 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:14:02.546964 extend-filesystems[1548]: Found loop3 Sep 13 00:14:02.552410 extend-filesystems[1548]: Found loop4 Sep 13 00:14:02.552410 extend-filesystems[1548]: Found loop5 Sep 13 00:14:02.552410 extend-filesystems[1548]: Found sr0 Sep 13 00:14:02.552410 extend-filesystems[1548]: Found vda Sep 13 00:14:02.552410 extend-filesystems[1548]: Found vda1 Sep 13 00:14:02.552410 extend-filesystems[1548]: Found vda2 Sep 13 00:14:02.552410 extend-filesystems[1548]: Found vda3 Sep 13 00:14:02.552410 extend-filesystems[1548]: Found usr Sep 13 00:14:02.552410 extend-filesystems[1548]: Found vda4 Sep 13 00:14:02.552410 extend-filesystems[1548]: Found vda6 Sep 13 00:14:02.552410 extend-filesystems[1548]: Found vda7 Sep 13 00:14:02.552410 extend-filesystems[1548]: Found vda9 Sep 13 00:14:02.552410 extend-filesystems[1548]: Checking size of /dev/vda9 Sep 13 00:14:02.548747 dbus-daemon[1545]: [system] SELinux support is enabled Sep 13 00:14:02.557342 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:14:02.559898 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:14:02.561531 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:14:02.568701 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:14:02.570268 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:14:02.578755 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:14:02.579090 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:14:02.579454 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:14:02.579542 jq[1569]: true Sep 13 00:14:02.579764 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:14:02.584807 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:14:02.585292 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
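extend-filesystems is about to grow /dev/vda9 on-line; a hedged sketch of the manual equivalent, using the same tools the service wraps:

    lsblk /dev/vda9        # confirm the partition is larger than the filesystem
    resize2fs /dev/vda9    # on-line grow of the mounted ext4 root (see the resize2fs output below)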
Sep 13 00:14:02.591673 update_engine[1567]: I20250913 00:14:02.591591 1567 main.cc:92] Flatcar Update Engine starting Sep 13 00:14:02.592975 update_engine[1567]: I20250913 00:14:02.592934 1567 update_check_scheduler.cc:74] Next update check in 6m45s Sep 13 00:14:02.613258 (ntainerd)[1576]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:14:02.618447 extend-filesystems[1548]: Resized partition /dev/vda9 Sep 13 00:14:02.629650 extend-filesystems[1586]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:14:02.632384 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1249) Sep 13 00:14:02.623299 systemd-logind[1564]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:14:02.623337 systemd-logind[1564]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:14:02.623686 systemd-logind[1564]: New seat seat0. Sep 13 00:14:02.625310 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:14:02.630045 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:14:02.640598 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:14:02.640750 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:14:02.643062 jq[1575]: true Sep 13 00:14:02.643874 tar[1573]: linux-amd64/helm Sep 13 00:14:02.647841 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 13 00:14:02.647164 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:14:02.647289 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:14:02.649913 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:14:02.661606 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:14:02.791998 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:14:02.818052 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:14:02.819526 systemd-networkd[1251]: eth0: Gained IPv6LL Sep 13 00:14:02.835664 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:14:02.837789 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:14:02.841167 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:14:02.844759 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 00:14:02.879832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:14:02.884625 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:14:02.887135 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:14:02.887584 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:14:02.888150 locksmithd[1589]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:14:02.911019 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
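sshd_keygen reports generating RSA, ECDSA and ED25519 host keys, and update_engine schedules its next check in 6m45s. A hedged sketch of the conventional one-shot equivalents (assumed, not read from this host's unit files):

    ssh-keygen -A                 # create any missing host keys of all default types
    update_engine_client -status  # query the Flatcar update engine's current state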
Sep 13 00:14:02.914443 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 00:14:02.914866 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 00:14:02.917124 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:14:02.933256 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:14:02.944552 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:14:02.946797 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:14:02.948433 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:14:02.961204 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:14:03.021358 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 13 00:14:04.328662 extend-filesystems[1586]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:14:04.328662 extend-filesystems[1586]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:14:04.328662 extend-filesystems[1586]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 13 00:14:04.333217 extend-filesystems[1548]: Resized filesystem in /dev/vda9 Sep 13 00:14:04.334471 containerd[1576]: time="2025-09-13T00:14:04.332994212Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:14:04.335357 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:14:04.335836 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:14:04.404914 containerd[1576]: time="2025-09-13T00:14:04.404840680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:14:04.407058 tar[1573]: linux-amd64/LICENSE Sep 13 00:14:04.407629 tar[1573]: linux-amd64/README.md Sep 13 00:14:04.424694 containerd[1576]: time="2025-09-13T00:14:04.424618821Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:14:04.424694 containerd[1576]: time="2025-09-13T00:14:04.424673984Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:14:04.424694 containerd[1576]: time="2025-09-13T00:14:04.424698340Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:14:04.424971 containerd[1576]: time="2025-09-13T00:14:04.424950913Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:14:04.425004 containerd[1576]: time="2025-09-13T00:14:04.424972795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:14:04.425096 containerd[1576]: time="2025-09-13T00:14:04.425075768Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:14:04.425127 containerd[1576]: time="2025-09-13T00:14:04.425096967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Sep 13 00:14:04.425500 containerd[1576]: time="2025-09-13T00:14:04.425471039Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:14:04.425500 containerd[1576]: time="2025-09-13T00:14:04.425499222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:14:04.425564 containerd[1576]: time="2025-09-13T00:14:04.425515683Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:14:04.425564 containerd[1576]: time="2025-09-13T00:14:04.425526503Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:14:04.425670 containerd[1576]: time="2025-09-13T00:14:04.425653160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:14:04.426003 containerd[1576]: time="2025-09-13T00:14:04.425976146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:14:04.426199 containerd[1576]: time="2025-09-13T00:14:04.426173807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:14:04.426199 containerd[1576]: time="2025-09-13T00:14:04.426190528Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:14:04.426526 containerd[1576]: time="2025-09-13T00:14:04.426358874Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:14:04.426526 containerd[1576]: time="2025-09-13T00:14:04.426439996Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:14:04.435090 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:14:04.501985 bash[1606]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:14:04.504794 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:14:04.507628 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 13 00:14:04.511864 containerd[1576]: time="2025-09-13T00:14:04.511795017Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:14:04.511929 containerd[1576]: time="2025-09-13T00:14:04.511871851Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:14:04.511929 containerd[1576]: time="2025-09-13T00:14:04.511900585Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:14:04.511929 containerd[1576]: time="2025-09-13T00:14:04.511918449Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:14:04.511983 containerd[1576]: time="2025-09-13T00:14:04.511932505Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 13 00:14:04.512139 containerd[1576]: time="2025-09-13T00:14:04.512113084Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:14:04.512657 containerd[1576]: time="2025-09-13T00:14:04.512616568Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:14:04.512776 containerd[1576]: time="2025-09-13T00:14:04.512757853Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:14:04.512799 containerd[1576]: time="2025-09-13T00:14:04.512779273Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:14:04.512830 containerd[1576]: time="2025-09-13T00:14:04.512815501Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:14:04.512851 containerd[1576]: time="2025-09-13T00:14:04.512830178Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:14:04.512851 containerd[1576]: time="2025-09-13T00:14:04.512843113Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:14:04.512902 containerd[1576]: time="2025-09-13T00:14:04.512856849Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:14:04.512902 containerd[1576]: time="2025-09-13T00:14:04.512871546Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:14:04.512902 containerd[1576]: time="2025-09-13T00:14:04.512886123Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:14:04.512967 containerd[1576]: time="2025-09-13T00:14:04.512906462Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:14:04.512967 containerd[1576]: time="2025-09-13T00:14:04.512920818Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:14:04.512967 containerd[1576]: time="2025-09-13T00:14:04.512932681Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:14:04.512967 containerd[1576]: time="2025-09-13T00:14:04.512952287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:14:04.512967 containerd[1576]: time="2025-09-13T00:14:04.512965452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:14:04.513066 containerd[1576]: time="2025-09-13T00:14:04.512978126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:14:04.513066 containerd[1576]: time="2025-09-13T00:14:04.512990559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:14:04.513066 containerd[1576]: time="2025-09-13T00:14:04.513002672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:14:04.513066 containerd[1576]: time="2025-09-13T00:14:04.513015516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Sep 13 00:14:04.513066 containerd[1576]: time="2025-09-13T00:14:04.513032217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:14:04.513066 containerd[1576]: time="2025-09-13T00:14:04.513045502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:14:04.513066 containerd[1576]: time="2025-09-13T00:14:04.513059298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:14:04.513209 containerd[1576]: time="2025-09-13T00:14:04.513074186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:14:04.513209 containerd[1576]: time="2025-09-13T00:14:04.513087832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:14:04.513209 containerd[1576]: time="2025-09-13T00:14:04.513099183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:14:04.513209 containerd[1576]: time="2025-09-13T00:14:04.513111155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:14:04.513209 containerd[1576]: time="2025-09-13T00:14:04.513125923Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:14:04.513209 containerd[1576]: time="2025-09-13T00:14:04.513144668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:14:04.513209 containerd[1576]: time="2025-09-13T00:14:04.513156901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:14:04.513209 containerd[1576]: time="2025-09-13T00:14:04.513167381Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:14:04.513519 containerd[1576]: time="2025-09-13T00:14:04.513217415Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:14:04.513519 containerd[1576]: time="2025-09-13T00:14:04.513235589Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:14:04.513519 containerd[1576]: time="2025-09-13T00:14:04.513246750Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:14:04.513519 containerd[1576]: time="2025-09-13T00:14:04.513259504Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:14:04.513519 containerd[1576]: time="2025-09-13T00:14:04.513269052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:14:04.513519 containerd[1576]: time="2025-09-13T00:14:04.513281435Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:14:04.513519 containerd[1576]: time="2025-09-13T00:14:04.513291273Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:14:04.513519 containerd[1576]: time="2025-09-13T00:14:04.513301452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:14:04.513707 containerd[1576]: time="2025-09-13T00:14:04.513620240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:14:04.513707 containerd[1576]: time="2025-09-13T00:14:04.513681355Z" level=info msg="Connect containerd service" Sep 13 00:14:04.513914 containerd[1576]: time="2025-09-13T00:14:04.513724405Z" level=info msg="using legacy CRI server" Sep 13 00:14:04.513914 containerd[1576]: time="2025-09-13T00:14:04.513733432Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:14:04.513914 containerd[1576]: time="2025-09-13T00:14:04.513849450Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:14:04.514460 containerd[1576]: time="2025-09-13T00:14:04.514435058Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 
00:14:04.514690 containerd[1576]: time="2025-09-13T00:14:04.514608654Z" level=info msg="Start subscribing containerd event" Sep 13 00:14:04.514751 containerd[1576]: time="2025-09-13T00:14:04.514718179Z" level=info msg="Start recovering state" Sep 13 00:14:04.514847 containerd[1576]: time="2025-09-13T00:14:04.514827614Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:14:04.514875 containerd[1576]: time="2025-09-13T00:14:04.514840749Z" level=info msg="Start event monitor" Sep 13 00:14:04.514895 containerd[1576]: time="2025-09-13T00:14:04.514870515Z" level=info msg="Start snapshots syncer" Sep 13 00:14:04.514921 containerd[1576]: time="2025-09-13T00:14:04.514892757Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:14:04.514946 containerd[1576]: time="2025-09-13T00:14:04.514894189Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:14:04.514946 containerd[1576]: time="2025-09-13T00:14:04.514929576Z" level=info msg="Start streaming server" Sep 13 00:14:04.515241 containerd[1576]: time="2025-09-13T00:14:04.515072233Z" level=info msg="containerd successfully booted in 1.309855s" Sep 13 00:14:04.515268 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:14:05.219681 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:14:05.307771 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:46168.service - OpenSSH per-connection server daemon (10.0.0.1:46168). Sep 13 00:14:05.369979 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 46168 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:14:05.374141 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:14:05.386342 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:14:05.433112 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:14:05.437432 systemd-logind[1564]: New session 1 of user core. Sep 13 00:14:05.464765 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:14:05.487761 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:14:05.492333 (systemd)[1678]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:14:05.664847 systemd[1678]: Queued start job for default target default.target. Sep 13 00:14:05.665655 systemd[1678]: Created slice app.slice - User Application Slice. Sep 13 00:14:05.665682 systemd[1678]: Reached target paths.target - Paths. Sep 13 00:14:05.665695 systemd[1678]: Reached target timers.target - Timers. Sep 13 00:14:05.673410 systemd[1678]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:14:05.682811 systemd[1678]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:14:05.682900 systemd[1678]: Reached target sockets.target - Sockets. Sep 13 00:14:05.682917 systemd[1678]: Reached target basic.target - Basic System. Sep 13 00:14:05.682967 systemd[1678]: Reached target default.target - Main User Target. Sep 13 00:14:05.683012 systemd[1678]: Startup finished in 180ms. Sep 13 00:14:05.754535 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:14:05.776630 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:14:05.835689 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:46184.service - OpenSSH per-connection server daemon (10.0.0.1:46184). 
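The CRI config dump above pins the values that matter for this boot: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:false, sandbox image registry.k8s.io/pause:3.8, and CNI directories /opt/cni/bin and /etc/cni/net.d. A hedged sketch of the corresponding config.toml fragment, reconstructed from the log rather than copied from the host:

    cat <<'EOF' >/etc/containerd/config.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
    EOF

The "failed to load cni during init" error logged above is expected while /etc/cni/net.d is empty and clears once a network config file appears there.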
Sep 13 00:14:05.882450 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 46184 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:14:05.889648 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:14:05.895608 systemd-logind[1564]: New session 2 of user core. Sep 13 00:14:05.897287 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:14:05.900492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:14:05.904758 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:14:05.906310 systemd[1]: Startup finished in 8.004s (kernel) + 7.338s (userspace) = 15.342s. Sep 13 00:14:05.907006 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:14:06.044697 sshd[1690]: pam_unix(sshd:session): session closed for user core Sep 13 00:14:06.057754 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:46198.service - OpenSSH per-connection server daemon (10.0.0.1:46198). Sep 13 00:14:06.058460 systemd[1]: sshd@1-10.0.0.132:22-10.0.0.1:46184.service: Deactivated successfully. Sep 13 00:14:06.064118 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:14:06.064267 systemd-logind[1564]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:14:06.068441 systemd-logind[1564]: Removed session 2. Sep 13 00:14:06.095020 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 46198 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:14:06.095612 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:14:06.100263 systemd-logind[1564]: New session 3 of user core. Sep 13 00:14:06.107817 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:14:06.165027 sshd[1707]: pam_unix(sshd:session): session closed for user core Sep 13 00:14:06.167223 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:46206.service - OpenSSH per-connection server daemon (10.0.0.1:46206). Sep 13 00:14:06.169025 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:46198.service: Deactivated successfully. Sep 13 00:14:06.172739 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:14:06.174110 systemd-logind[1564]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:14:06.176086 systemd-logind[1564]: Removed session 3. Sep 13 00:14:06.205004 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 46206 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:14:06.207713 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:14:06.214673 systemd-logind[1564]: New session 4 of user core. Sep 13 00:14:06.292974 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:14:06.353651 sshd[1722]: pam_unix(sshd:session): session closed for user core Sep 13 00:14:06.364684 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:46214.service - OpenSSH per-connection server daemon (10.0.0.1:46214). Sep 13 00:14:06.365360 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:46206.service: Deactivated successfully. Sep 13 00:14:06.369504 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:14:06.372137 systemd-logind[1564]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:14:06.373383 systemd-logind[1564]: Removed session 4. 
Sep 13 00:14:06.400065 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 46214 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:14:06.402067 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:14:06.406615 systemd-logind[1564]: New session 5 of user core. Sep 13 00:14:06.420796 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:14:06.482779 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:14:06.483262 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:14:06.503105 sudo[1737]: pam_unix(sudo:session): session closed for user root Sep 13 00:14:06.505682 sshd[1730]: pam_unix(sshd:session): session closed for user core Sep 13 00:14:06.514551 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:46220.service - OpenSSH per-connection server daemon (10.0.0.1:46220). Sep 13 00:14:06.515049 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:46214.service: Deactivated successfully. Sep 13 00:14:06.519020 systemd-logind[1564]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:14:06.520054 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:14:06.521236 systemd-logind[1564]: Removed session 5. Sep 13 00:14:06.546338 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 46220 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:14:06.548367 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:14:06.553014 systemd-logind[1564]: New session 6 of user core. Sep 13 00:14:06.563604 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:14:06.666962 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:14:06.667421 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:14:06.672878 sudo[1747]: pam_unix(sudo:session): session closed for user root Sep 13 00:14:06.681939 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:14:06.682425 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:14:06.701525 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:14:06.705407 auditctl[1750]: No rules Sep 13 00:14:06.705972 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:14:06.706284 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:14:06.709656 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:14:06.756903 augenrules[1769]: No rules Sep 13 00:14:06.759274 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:14:06.762223 sudo[1746]: pam_unix(sudo:session): session closed for user root Sep 13 00:14:06.768508 sshd[1739]: pam_unix(sshd:session): session closed for user core Sep 13 00:14:06.777697 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:46222.service - OpenSSH per-connection server daemon (10.0.0.1:46222). Sep 13 00:14:06.778274 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:46220.service: Deactivated successfully. Sep 13 00:14:06.788234 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:14:06.789598 systemd-logind[1564]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:14:06.791308 systemd-logind[1564]: Removed session 6. 
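The audit-rules restart above follows the usual auditctl/augenrules sequence; a hedged reconstruction ("No rules" on both sides matches the log entries):

    auditctl -D          # flush all loaded rules
    augenrules --load    # compile /etc/audit/rules.d/*.rules and load the result
    auditctl -l          # list the rules that are active afterwards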
Sep 13 00:14:06.923026 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 46222 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:14:06.925436 sshd[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:14:06.930545 systemd-logind[1564]: New session 7 of user core. Sep 13 00:14:06.937341 kubelet[1699]: E0913 00:14:06.937261 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:14:06.939790 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:14:06.941923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:14:06.942290 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:14:07.004237 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:14:07.004706 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:14:07.617548 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:14:07.618151 (dockerd)[1803]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:14:07.928001 dockerd[1803]: time="2025-09-13T00:14:07.927816527Z" level=info msg="Starting up" Sep 13 00:14:10.147179 dockerd[1803]: time="2025-09-13T00:14:10.147078335Z" level=info msg="Loading containers: start." Sep 13 00:14:11.008367 kernel: Initializing XFRM netlink socket Sep 13 00:14:11.124605 systemd-networkd[1251]: docker0: Link UP Sep 13 00:14:11.462308 dockerd[1803]: time="2025-09-13T00:14:11.462143822Z" level=info msg="Loading containers: done." Sep 13 00:14:11.493296 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1819106499-merged.mount: Deactivated successfully. Sep 13 00:14:11.609542 dockerd[1803]: time="2025-09-13T00:14:11.609443011Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:14:11.609803 dockerd[1803]: time="2025-09-13T00:14:11.609649418Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:14:11.609866 dockerd[1803]: time="2025-09-13T00:14:11.609837140Z" level=info msg="Daemon has completed initialization" Sep 13 00:14:12.157950 dockerd[1803]: time="2025-09-13T00:14:12.157811918Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:14:12.158202 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:14:13.480332 containerd[1576]: time="2025-09-13T00:14:13.480244902Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:14:15.376856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819171309.mount: Deactivated successfully. Sep 13 00:14:17.164263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:14:17.183736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:14:17.514745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
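kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist; on a kubeadm-managed node that file is written by kubeadm init/join. A hypothetical minimal stand-in, shown only to illustrate the expected shape of the missing file:

    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    EOF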
Sep 13 00:14:17.522072 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:14:17.802595 kubelet[2024]: E0913 00:14:17.802336 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:14:17.810922 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:14:17.811223 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:14:18.590726 containerd[1576]: time="2025-09-13T00:14:18.590634807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:18.670588 containerd[1576]: time="2025-09-13T00:14:18.670501476Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 13 00:14:18.728009 containerd[1576]: time="2025-09-13T00:14:18.727896492Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:18.782191 containerd[1576]: time="2025-09-13T00:14:18.782111154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:18.783180 containerd[1576]: time="2025-09-13T00:14:18.783118543Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 5.302804441s" Sep 13 00:14:18.783240 containerd[1576]: time="2025-09-13T00:14:18.783178525Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:14:18.784041 containerd[1576]: time="2025-09-13T00:14:18.784003402Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:14:24.247009 containerd[1576]: time="2025-09-13T00:14:24.246765421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:24.349871 containerd[1576]: time="2025-09-13T00:14:24.349765333Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 13 00:14:24.413442 containerd[1576]: time="2025-09-13T00:14:24.413355958Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:24.488067 containerd[1576]: time="2025-09-13T00:14:24.487965207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 
00:14:24.489998 containerd[1576]: time="2025-09-13T00:14:24.489913000Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 5.705865775s" Sep 13 00:14:24.489998 containerd[1576]: time="2025-09-13T00:14:24.489979885Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 00:14:24.490750 containerd[1576]: time="2025-09-13T00:14:24.490591763Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:14:27.910525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:14:27.927985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:14:28.322012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:14:28.332041 (kubelet)[2053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:14:29.063857 containerd[1576]: time="2025-09-13T00:14:29.063544491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:29.066056 containerd[1576]: time="2025-09-13T00:14:29.065690095Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 13 00:14:29.070154 containerd[1576]: time="2025-09-13T00:14:29.069196260Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:29.074391 containerd[1576]: time="2025-09-13T00:14:29.074250128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:29.078476 containerd[1576]: time="2025-09-13T00:14:29.076911179Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 4.586278099s" Sep 13 00:14:29.078476 containerd[1576]: time="2025-09-13T00:14:29.076989736Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:14:29.078476 containerd[1576]: time="2025-09-13T00:14:29.077771161Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:14:29.092716 kubelet[2053]: E0913 00:14:29.092626 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 
00:14:29.100192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:14:29.101119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:14:30.957547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2888023308.mount: Deactivated successfully. Sep 13 00:14:31.734652 containerd[1576]: time="2025-09-13T00:14:31.734554488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:31.736108 containerd[1576]: time="2025-09-13T00:14:31.735968500Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 13 00:14:31.737525 containerd[1576]: time="2025-09-13T00:14:31.737443817Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:31.740029 containerd[1576]: time="2025-09-13T00:14:31.739952522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:31.740650 containerd[1576]: time="2025-09-13T00:14:31.740615866Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.662802976s" Sep 13 00:14:31.740705 containerd[1576]: time="2025-09-13T00:14:31.740653636Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:14:31.741415 containerd[1576]: time="2025-09-13T00:14:31.741282295Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:14:32.293741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2461863800.mount: Deactivated successfully. 
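The image pulls above go through containerd's CRI plugin; a minimal sketch of driving the same pull by hand over the CRI socket (standard crictl usage, assumed available on the host):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/coredns/coredns:v1.11.3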
Sep 13 00:14:35.294700 containerd[1576]: time="2025-09-13T00:14:35.294605571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:35.307095 containerd[1576]: time="2025-09-13T00:14:35.306984971Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 13 00:14:35.311926 containerd[1576]: time="2025-09-13T00:14:35.311871407Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:35.316963 containerd[1576]: time="2025-09-13T00:14:35.316395161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:35.319512 containerd[1576]: time="2025-09-13T00:14:35.319424195Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.578047084s" Sep 13 00:14:35.319575 containerd[1576]: time="2025-09-13T00:14:35.319518661Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:14:35.320267 containerd[1576]: time="2025-09-13T00:14:35.320215979Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:14:36.637185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2679861195.mount: Deactivated successfully. 
Sep 13 00:14:36.799818 containerd[1576]: time="2025-09-13T00:14:36.799697053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:36.804510 containerd[1576]: time="2025-09-13T00:14:36.804429398Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:14:36.807917 containerd[1576]: time="2025-09-13T00:14:36.807856097Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:36.811616 containerd[1576]: time="2025-09-13T00:14:36.811538436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:36.812477 containerd[1576]: time="2025-09-13T00:14:36.812410082Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.492148942s" Sep 13 00:14:36.812477 containerd[1576]: time="2025-09-13T00:14:36.812468875Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:14:36.813454 containerd[1576]: time="2025-09-13T00:14:36.813400504Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:14:37.530014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896603538.mount: Deactivated successfully. Sep 13 00:14:39.157937 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:14:39.199941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:14:39.579634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:14:39.586377 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:14:39.638256 kubelet[2191]: E0913 00:14:39.638180 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:14:39.643892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:14:39.644351 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
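Each kubelet restart (counter 1 -> 2 -> 3) fails with the identical missing-config error, so the unit keeps cycling until the file exists. A sketch of how to watch the loop:

    systemctl status kubelet                 # shows the restart counter and last exit code
    journalctl -u kubelet -n 20 --no-pager   # shows the repeated config.yaml error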
Sep 13 00:14:40.604183 containerd[1576]: time="2025-09-13T00:14:40.604091269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:40.605909 containerd[1576]: time="2025-09-13T00:14:40.605864783Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 13 00:14:40.607891 containerd[1576]: time="2025-09-13T00:14:40.607852727Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:40.613880 containerd[1576]: time="2025-09-13T00:14:40.613798331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:40.615543 containerd[1576]: time="2025-09-13T00:14:40.615421738Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.801951203s" Sep 13 00:14:40.615543 containerd[1576]: time="2025-09-13T00:14:40.615507398Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:14:43.495937 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:14:43.509653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:14:43.545647 systemd[1]: Reloading requested from client PID 2233 ('systemctl') (unit session-7.scope)... Sep 13 00:14:43.545668 systemd[1]: Reloading... Sep 13 00:14:43.642360 zram_generator::config[2275]: No configuration found. Sep 13 00:14:43.848574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:14:43.944854 systemd[1]: Reloading finished in 398 ms. Sep 13 00:14:44.002024 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:14:44.002204 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:14:44.002781 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:14:44.005234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:14:44.210775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:14:44.218565 (kubelet)[2333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:14:44.423765 kubelet[2333]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:14:44.423765 kubelet[2333]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
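The reload above warns that docker.socket still references the legacy /var/run/docker.sock path and rewrites it to /run/docker.sock at runtime. A hedged drop-in that makes the fix permanent (file name hypothetical):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-listen.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload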
Sep 13 00:14:44.423765 kubelet[2333]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:14:44.424394 kubelet[2333]: I0913 00:14:44.423825 2333 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:14:44.909550 kubelet[2333]: I0913 00:14:44.909485 2333 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:14:44.909550 kubelet[2333]: I0913 00:14:44.909520 2333 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:14:44.909794 kubelet[2333]: I0913 00:14:44.909781 2333 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:14:44.937608 kubelet[2333]: E0913 00:14:44.937528 2333 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:14:44.937832 kubelet[2333]: I0913 00:14:44.937788 2333 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:14:44.945560 kubelet[2333]: E0913 00:14:44.945500 2333 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:14:44.945560 kubelet[2333]: I0913 00:14:44.945552 2333 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:14:44.953916 kubelet[2333]: I0913 00:14:44.953831 2333 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:14:44.955410 kubelet[2333]: I0913 00:14:44.955362 2333 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:14:44.955611 kubelet[2333]: I0913 00:14:44.955548 2333 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:14:44.955832 kubelet[2333]: I0913 00:14:44.955597 2333 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:14:44.955994 kubelet[2333]: I0913 00:14:44.955849 2333 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:14:44.955994 kubelet[2333]: I0913 00:14:44.955859 2333 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:14:44.956155 kubelet[2333]: I0913 00:14:44.956110 2333 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:14:44.958800 kubelet[2333]: I0913 00:14:44.958757 2333 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:14:44.958800 kubelet[2333]: I0913 00:14:44.958798 2333 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:14:44.958901 kubelet[2333]: I0913 00:14:44.958863 2333 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:14:44.958939 kubelet[2333]: I0913 00:14:44.958904 2333 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:14:44.961991 kubelet[2333]: I0913 00:14:44.961930 2333 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:14:44.962729 kubelet[2333]: I0913 00:14:44.962660 2333 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:14:44.964861 kubelet[2333]: W0913 00:14:44.964411 2333 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
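The three deprecation warnings at kubelet start all point at the same fix: move the flag values into the file passed to --config. A minimal KubeletConfiguration sketch consistent with the settings visible in this log (cgroupfs driver, cgroups-per-QoS defaulting to root /, and the Flatcar flexvolume path from the probe message above); the runtime endpoint value is an assumption, since the actual socket path never appears in the log:

  # Sketch of a kubelet config file; values marked "assumed" are not from this log.
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock  # assumed; replaces --container-runtime-endpoint
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/  # replaces --volume-plugin-dir
  cgroupDriver: cgroupfs  # matches "CgroupDriver":"cgroupfs" in the node config dump above
  cgroupsPerQOS: true
  cgroupRoot: "/"

--pod-infra-container-image has no config-file equivalent; as the warning says, the sandbox image should also be set on the runtime side, e.g. sandbox_image under plugins."io.containerd.grpc.v1.cri" in the containerd 1.7 config (this node runs containerd v1.7.21 and pulls pause:3.8 later in the log).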
Sep 13 00:14:44.965361 kubelet[2333]: W0913 00:14:44.965252 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 13 00:14:44.965433 kubelet[2333]: E0913 00:14:44.965383 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:14:44.965525 kubelet[2333]: W0913 00:14:44.965481 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 13 00:14:44.965570 kubelet[2333]: E0913 00:14:44.965534 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:14:44.967584 kubelet[2333]: I0913 00:14:44.967558 2333 server.go:1274] "Started kubelet" Sep 13 00:14:44.968863 kubelet[2333]: I0913 00:14:44.968810 2333 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:14:44.970701 kubelet[2333]: I0913 00:14:44.970660 2333 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:14:44.970897 kubelet[2333]: I0913 00:14:44.970830 2333 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:14:45.086361 kubelet[2333]: I0913 00:14:44.971195 2333 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:14:45.086361 kubelet[2333]: I0913 00:14:44.971396 2333 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:14:45.106756 kubelet[2333]: I0913 00:14:45.103953 2333 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:14:45.106756 kubelet[2333]: I0913 00:14:45.104113 2333 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:14:45.108804 kubelet[2333]: I0913 00:14:45.108768 2333 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:14:45.109150 kubelet[2333]: I0913 00:14:45.109089 2333 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:14:45.109377 kubelet[2333]: E0913 00:14:45.109344 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:14:45.114285 kubelet[2333]: I0913 00:14:45.114185 2333 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:14:45.114285 kubelet[2333]: E0913 00:14:45.114213 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms" Sep 13 00:14:45.114587 kubelet[2333]: I0913 00:14:45.114348 2333 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:14:45.114587 kubelet[2333]: E0913 00:14:45.108061 2333 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864af498a34bed0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:14:44.967522 +0000 UTC m=+0.743667249,LastTimestamp:2025-09-13 00:14:44.967522 +0000 UTC m=+0.743667249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:14:45.115225 kubelet[2333]: W0913 00:14:45.115151 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 13 00:14:45.115446 kubelet[2333]: E0913 00:14:45.115406 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:14:45.116936 kubelet[2333]: E0913 00:14:45.116911 2333 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:14:45.117150 kubelet[2333]: I0913 00:14:45.117013 2333 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:14:45.130014 kubelet[2333]: I0913 00:14:45.129932 2333 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:14:45.131535 kubelet[2333]: I0913 00:14:45.131494 2333 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:14:45.131535 kubelet[2333]: I0913 00:14:45.131537 2333 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:14:45.131730 kubelet[2333]: I0913 00:14:45.131710 2333 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:14:45.131840 kubelet[2333]: E0913 00:14:45.131803 2333 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:14:45.140490 kubelet[2333]: W0913 00:14:45.140370 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 13 00:14:45.140744 kubelet[2333]: E0913 00:14:45.140675 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:14:45.156963 kubelet[2333]: I0913 00:14:45.156910 2333 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:14:45.156963 kubelet[2333]: I0913 00:14:45.156935 2333 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:14:45.156963 kubelet[2333]: I0913 00:14:45.156960 2333 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:14:45.210529 kubelet[2333]: E0913 00:14:45.210303 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:14:45.233036 kubelet[2333]: E0913 00:14:45.232911 2333 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:14:45.310705 kubelet[2333]: E0913 00:14:45.310580 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:14:45.315655 kubelet[2333]: E0913 00:14:45.315569 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="400ms" Sep 13 00:14:45.378561 kubelet[2333]: I0913 00:14:45.378467 2333 policy_none.go:49] "None policy: Start" Sep 13 00:14:45.379828 kubelet[2333]: I0913 00:14:45.379780 2333 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:14:45.379828 kubelet[2333]: I0913 00:14:45.379831 2333 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:14:45.411089 kubelet[2333]: E0913 00:14:45.411000 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:14:45.433903 kubelet[2333]: E0913 00:14:45.433776 2333 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:14:45.512030 kubelet[2333]: E0913 00:14:45.511789 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:14:45.609893 kubelet[2333]: I0913 00:14:45.608518 2333 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:14:45.609893 kubelet[2333]: I0913 00:14:45.608876 2333 
eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:14:45.609893 kubelet[2333]: I0913 00:14:45.608899 2333 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:14:45.609893 kubelet[2333]: I0913 00:14:45.609563 2333 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:14:45.610983 kubelet[2333]: E0913 00:14:45.610958 2333 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:14:45.712610 kubelet[2333]: I0913 00:14:45.712540 2333 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:14:45.713028 kubelet[2333]: E0913 00:14:45.712970 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Sep 13 00:14:45.716554 kubelet[2333]: E0913 00:14:45.716522 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms" Sep 13 00:14:45.818766 kubelet[2333]: W0913 00:14:45.818549 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 13 00:14:45.818766 kubelet[2333]: E0913 00:14:45.818620 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:14:45.915695 kubelet[2333]: I0913 00:14:45.915631 2333 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:14:45.916148 kubelet[2333]: E0913 00:14:45.916110 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Sep 13 00:14:45.918493 kubelet[2333]: I0913 00:14:45.918433 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/350f4cca0a9add6933e9e537e89729b4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"350f4cca0a9add6933e9e537e89729b4\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:14:45.918493 kubelet[2333]: I0913 00:14:45.918486 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:14:45.918698 kubelet[2333]: I0913 00:14:45.918520 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 13 00:14:45.918698 kubelet[2333]: I0913 00:14:45.918550 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:14:45.918698 kubelet[2333]: I0913 00:14:45.918572 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/350f4cca0a9add6933e9e537e89729b4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"350f4cca0a9add6933e9e537e89729b4\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:14:45.918698 kubelet[2333]: I0913 00:14:45.918592 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/350f4cca0a9add6933e9e537e89729b4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"350f4cca0a9add6933e9e537e89729b4\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:14:45.918698 kubelet[2333]: I0913 00:14:45.918612 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:14:45.918863 kubelet[2333]: I0913 00:14:45.918633 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:14:45.918863 kubelet[2333]: I0913 00:14:45.918668 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:14:46.141755 kubelet[2333]: E0913 00:14:46.141560 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:14:46.143224 containerd[1576]: time="2025-09-13T00:14:46.142628778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:350f4cca0a9add6933e9e537e89729b4,Namespace:kube-system,Attempt:0,}" Sep 13 00:14:46.144068 kubelet[2333]: E0913 00:14:46.144026 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:14:46.144944 containerd[1576]: time="2025-09-13T00:14:46.144859249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 13 00:14:46.145983 kubelet[2333]: E0913 00:14:46.145961 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:14:46.146561 containerd[1576]: time="2025-09-13T00:14:46.146504188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 13 00:14:46.173090 kubelet[2333]: W0913 00:14:46.172971 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 13 00:14:46.173090 kubelet[2333]: E0913 00:14:46.173093 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:14:46.299341 kubelet[2333]: W0913 00:14:46.299222 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 13 00:14:46.299341 kubelet[2333]: E0913 00:14:46.299332 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:14:46.318468 kubelet[2333]: I0913 00:14:46.318424 2333 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:14:46.319008 kubelet[2333]: E0913 00:14:46.318955 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Sep 13 00:14:46.518248 kubelet[2333]: E0913 00:14:46.518135 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="1.6s" Sep 13 00:14:46.716948 kubelet[2333]: W0913 00:14:46.716798 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Sep 13 00:14:46.716948 kubelet[2333]: E0913 00:14:46.716948 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:14:47.010825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710300341.mount: Deactivated successfully. 
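Every "connect: connection refused" above is the kubelet dialing 10.0.0.132:6443, an apiserver that will only exist once the kubelet itself starts the static pods from /etc/kubernetes/manifests; note the lease controller's retry interval doubling 200ms → 400ms → 800ms → 1.6s while it waits. A few generic commands for watching this bootstrap resolve from the node (the endpoint and pod name are taken from the log; the commands themselves are standard tooling, not something recorded here):

  curl -k https://10.0.0.132:6443/healthz   # refused until the kube-apiserver static pod is serving
  crictl pods --name kube-apiserver         # sandbox created directly by the kubelet, no scheduler involved
  journalctl -u kubelet -f                  # the refusals stop once the apiserver container comes up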
Sep 13 00:14:47.021039 containerd[1576]: time="2025-09-13T00:14:47.020942313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:14:47.022134 containerd[1576]: time="2025-09-13T00:14:47.022013489Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:14:47.023657 containerd[1576]: time="2025-09-13T00:14:47.023490355Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:14:47.025422 containerd[1576]: time="2025-09-13T00:14:47.025373620Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:14:47.026489 containerd[1576]: time="2025-09-13T00:14:47.026396143Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:14:47.027741 containerd[1576]: time="2025-09-13T00:14:47.027664740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:14:47.029062 containerd[1576]: time="2025-09-13T00:14:47.028965942Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:14:47.031442 containerd[1576]: time="2025-09-13T00:14:47.031381887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:14:47.032220 containerd[1576]: time="2025-09-13T00:14:47.032172830Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 887.131729ms" Sep 13 00:14:47.036071 containerd[1576]: time="2025-09-13T00:14:47.035840823Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 889.214174ms" Sep 13 00:14:47.036458 kubelet[2333]: E0913 00:14:47.036414 2333 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:14:47.039866 containerd[1576]: time="2025-09-13T00:14:47.039781686Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 897.018116ms" Sep 13 00:14:47.120861 kubelet[2333]: I0913 00:14:47.120794 2333 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:14:47.121381 kubelet[2333]: E0913 00:14:47.121298 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Sep 13 00:14:47.263704 containerd[1576]: time="2025-09-13T00:14:47.263466222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:14:47.263704 containerd[1576]: time="2025-09-13T00:14:47.263594563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:14:47.263704 containerd[1576]: time="2025-09-13T00:14:47.263626098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:14:47.264471 containerd[1576]: time="2025-09-13T00:14:47.264413124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:14:47.265341 containerd[1576]: time="2025-09-13T00:14:47.264512806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:14:47.265341 containerd[1576]: time="2025-09-13T00:14:47.264710066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:14:47.265341 containerd[1576]: time="2025-09-13T00:14:47.264755364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:14:47.265341 containerd[1576]: time="2025-09-13T00:14:47.265050904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:14:47.271106 containerd[1576]: time="2025-09-13T00:14:47.270978050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:14:47.272849 containerd[1576]: time="2025-09-13T00:14:47.271863015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:14:47.272849 containerd[1576]: time="2025-09-13T00:14:47.271891194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:14:47.272849 containerd[1576]: time="2025-09-13T00:14:47.272012103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:14:47.359233 containerd[1576]: time="2025-09-13T00:14:47.359184708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ff9d18690a2d7d900578f62e1ec49fe20c3a19dde3fa7f659cb91ab715ed6dc\"" Sep 13 00:14:47.359854 containerd[1576]: time="2025-09-13T00:14:47.359825072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf3c39a8230f2756adb9abadab92111eaf9b14f674cc844380146e92a707543a\"" Sep 13 00:14:47.360201 containerd[1576]: time="2025-09-13T00:14:47.360179954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:350f4cca0a9add6933e9e537e89729b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbd4a3bd9d1f8c13323540a93578a5735673ec25da27336633b831bec2614b8f\"" Sep 13 00:14:47.361749 kubelet[2333]: E0913 00:14:47.361707 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:14:47.361854 kubelet[2333]: E0913 00:14:47.361836 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:14:47.361986 kubelet[2333]: E0913 00:14:47.361958 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:14:47.364234 containerd[1576]: time="2025-09-13T00:14:47.364191178Z" level=info msg="CreateContainer within sandbox \"bf3c39a8230f2756adb9abadab92111eaf9b14f674cc844380146e92a707543a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:14:47.364327 containerd[1576]: time="2025-09-13T00:14:47.364197729Z" level=info msg="CreateContainer within sandbox \"3ff9d18690a2d7d900578f62e1ec49fe20c3a19dde3fa7f659cb91ab715ed6dc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:14:47.365150 containerd[1576]: time="2025-09-13T00:14:47.365099924Z" level=info msg="CreateContainer within sandbox \"bbd4a3bd9d1f8c13323540a93578a5735673ec25da27336633b831bec2614b8f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:14:47.395772 containerd[1576]: time="2025-09-13T00:14:47.395692135Z" level=info msg="CreateContainer within sandbox \"bf3c39a8230f2756adb9abadab92111eaf9b14f674cc844380146e92a707543a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f5dada9f1e9aff5e80b232e8b0c5ac658e0367dcc66fc2d713559fb42c15a801\"" Sep 13 00:14:47.396577 containerd[1576]: time="2025-09-13T00:14:47.396546157Z" level=info msg="StartContainer for \"f5dada9f1e9aff5e80b232e8b0c5ac658e0367dcc66fc2d713559fb42c15a801\"" Sep 13 00:14:47.404079 containerd[1576]: time="2025-09-13T00:14:47.403999063Z" level=info msg="CreateContainer within sandbox \"bbd4a3bd9d1f8c13323540a93578a5735673ec25da27336633b831bec2614b8f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2b82c23b2522c8f5944e813979680c961448bb21ca530e295156c874a3f24055\"" Sep 13 00:14:47.404668 containerd[1576]: time="2025-09-13T00:14:47.404517376Z" level=info msg="StartContainer for 
\"2b82c23b2522c8f5944e813979680c961448bb21ca530e295156c874a3f24055\"" Sep 13 00:14:47.405892 containerd[1576]: time="2025-09-13T00:14:47.405845426Z" level=info msg="CreateContainer within sandbox \"3ff9d18690a2d7d900578f62e1ec49fe20c3a19dde3fa7f659cb91ab715ed6dc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"83958e89de5138a1892e410c7a72529ab0fd59c3274515801fe2d8c3fece470f\"" Sep 13 00:14:47.408339 containerd[1576]: time="2025-09-13T00:14:47.406709665Z" level=info msg="StartContainer for \"83958e89de5138a1892e410c7a72529ab0fd59c3274515801fe2d8c3fece470f\"" Sep 13 00:14:47.515471 containerd[1576]: time="2025-09-13T00:14:47.514869837Z" level=info msg="StartContainer for \"2b82c23b2522c8f5944e813979680c961448bb21ca530e295156c874a3f24055\" returns successfully" Sep 13 00:14:47.515471 containerd[1576]: time="2025-09-13T00:14:47.515054445Z" level=info msg="StartContainer for \"f5dada9f1e9aff5e80b232e8b0c5ac658e0367dcc66fc2d713559fb42c15a801\" returns successfully" Sep 13 00:14:47.541904 containerd[1576]: time="2025-09-13T00:14:47.541816954Z" level=info msg="StartContainer for \"83958e89de5138a1892e410c7a72529ab0fd59c3274515801fe2d8c3fece470f\" returns successfully" Sep 13 00:14:47.573376 update_engine[1567]: I20250913 00:14:47.572350 1567 update_attempter.cc:509] Updating boot flags... Sep 13 00:14:47.644339 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2611) Sep 13 00:14:47.726351 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2610) Sep 13 00:14:48.149505 kubelet[2333]: E0913 00:14:48.149386 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:14:48.153143 kubelet[2333]: E0913 00:14:48.153114 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:14:48.157603 kubelet[2333]: E0913 00:14:48.157436 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:14:48.723385 kubelet[2333]: I0913 00:14:48.723343 2333 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:14:49.161383 kubelet[2333]: E0913 00:14:49.159283 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:14:49.507754 kubelet[2333]: E0913 00:14:49.507595 2333 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 00:14:49.587824 kubelet[2333]: I0913 00:14:49.587741 2333 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:14:49.962448 kubelet[2333]: I0913 00:14:49.962379 2333 apiserver.go:52] "Watching apiserver" Sep 13 00:14:50.014795 kubelet[2333]: I0913 00:14:50.014704 2333 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:14:51.724449 systemd[1]: Reloading requested from client PID 2621 ('systemctl') (unit session-7.scope)... Sep 13 00:14:51.724471 systemd[1]: Reloading... Sep 13 00:14:51.804413 zram_generator::config[2660]: No configuration found. 
Sep 13 00:14:51.943490 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:14:52.033768 systemd[1]: Reloading finished in 308 ms. Sep 13 00:14:52.074612 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:14:52.074804 kubelet[2333]: E0913 00:14:52.074543 2333 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.1864af498a34bed0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:14:44.967522 +0000 UTC m=+0.743667249,LastTimestamp:2025-09-13 00:14:44.967522 +0000 UTC m=+0.743667249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:14:52.100464 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:14:52.101064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:14:52.111813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:14:52.324694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:14:52.331703 (kubelet)[2715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:14:52.384073 kubelet[2715]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:14:52.384073 kubelet[2715]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:14:52.384073 kubelet[2715]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:14:52.384598 kubelet[2715]: I0913 00:14:52.384130 2715 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:14:52.398104 kubelet[2715]: I0913 00:14:52.398038 2715 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:14:52.398104 kubelet[2715]: I0913 00:14:52.398089 2715 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:14:52.398466 kubelet[2715]: I0913 00:14:52.398421 2715 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:14:52.399922 kubelet[2715]: I0913 00:14:52.399887 2715 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
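Unlike the first start, this kubelet (PID 2715) finds an already-bootstrapped client certificate and loads it from kubelet-client-current.pem, so the earlier CSR failures against the apiserver do not recur. One way to inspect the rotated pair, using standard openssl against the path from the log (the expected subject is the usual node-client convention, not something printed here):

  openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -dates
  # typically O=system:nodes, CN=system:node:<nodename>; the kubelet replaces this file before expiry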
Sep 13 00:14:52.403025 kubelet[2715]: I0913 00:14:52.402228 2715 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:14:52.406839 kubelet[2715]: E0913 00:14:52.406790 2715 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:14:52.406839 kubelet[2715]: I0913 00:14:52.406831 2715 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:14:52.416372 kubelet[2715]: I0913 00:14:52.416275 2715 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:14:52.417134 kubelet[2715]: I0913 00:14:52.416991 2715 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:14:52.417411 kubelet[2715]: I0913 00:14:52.417224 2715 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:14:52.417528 kubelet[2715]: I0913 00:14:52.417278 2715 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:14:52.417640 kubelet[2715]: I0913 00:14:52.417530 2715 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:14:52.417640 kubelet[2715]: I0913 00:14:52.417543 2715 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:14:52.417640 kubelet[2715]: I0913 00:14:52.417583 2715 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:14:52.417741 kubelet[2715]: I0913 00:14:52.417728 2715 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:14:52.417769 kubelet[2715]: I0913 00:14:52.417747 2715 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:14:52.418363 kubelet[2715]: I0913 00:14:52.417815 
2715 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:14:52.418363 kubelet[2715]: I0913 00:14:52.417835 2715 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:14:52.418584 kubelet[2715]: I0913 00:14:52.418553 2715 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:14:52.419011 kubelet[2715]: I0913 00:14:52.418985 2715 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:14:52.419470 kubelet[2715]: I0913 00:14:52.419450 2715 server.go:1274] "Started kubelet" Sep 13 00:14:52.420559 kubelet[2715]: I0913 00:14:52.420468 2715 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:14:52.423652 kubelet[2715]: I0913 00:14:52.423563 2715 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:14:52.423652 kubelet[2715]: I0913 00:14:52.423638 2715 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:14:52.423797 kubelet[2715]: I0913 00:14:52.423776 2715 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:14:52.424636 kubelet[2715]: I0913 00:14:52.424601 2715 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:14:52.425759 kubelet[2715]: I0913 00:14:52.425653 2715 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:14:52.431382 kubelet[2715]: I0913 00:14:52.430864 2715 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:14:52.431382 kubelet[2715]: I0913 00:14:52.431245 2715 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:14:52.431544 kubelet[2715]: I0913 00:14:52.431402 2715 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:14:52.431544 kubelet[2715]: I0913 00:14:52.431507 2715 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:14:52.432160 kubelet[2715]: I0913 00:14:52.431911 2715 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:14:52.434167 kubelet[2715]: E0913 00:14:52.434129 2715 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:14:52.434839 kubelet[2715]: I0913 00:14:52.434714 2715 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:14:52.450645 kubelet[2715]: I0913 00:14:52.450585 2715 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:14:52.453408 kubelet[2715]: I0913 00:14:52.452800 2715 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:14:52.453408 kubelet[2715]: I0913 00:14:52.452831 2715 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:14:52.453408 kubelet[2715]: I0913 00:14:52.452875 2715 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:14:52.453408 kubelet[2715]: E0913 00:14:52.453242 2715 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:14:52.491819 kubelet[2715]: I0913 00:14:52.491772 2715 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:14:52.491819 kubelet[2715]: I0913 00:14:52.491798 2715 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:14:52.491819 kubelet[2715]: I0913 00:14:52.491823 2715 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:14:52.492065 kubelet[2715]: I0913 00:14:52.492026 2715 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:14:52.492151 kubelet[2715]: I0913 00:14:52.492055 2715 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:14:52.492151 kubelet[2715]: I0913 00:14:52.492076 2715 policy_none.go:49] "None policy: Start" Sep 13 00:14:52.493522 kubelet[2715]: I0913 00:14:52.493367 2715 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:14:52.495424 kubelet[2715]: I0913 00:14:52.495407 2715 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:14:52.495786 kubelet[2715]: I0913 00:14:52.495768 2715 state_mem.go:75] "Updated machine memory state" Sep 13 00:14:52.497978 kubelet[2715]: I0913 00:14:52.497957 2715 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:14:52.498895 kubelet[2715]: I0913 00:14:52.498858 2715 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:14:52.498987 kubelet[2715]: I0913 00:14:52.498905 2715 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:14:52.499362 kubelet[2715]: I0913 00:14:52.499338 2715 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:14:52.563874 kubelet[2715]: E0913 00:14:52.563702 2715 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:14:52.607911 kubelet[2715]: I0913 00:14:52.607719 2715 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:14:52.733072 kubelet[2715]: I0913 00:14:52.733015 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/350f4cca0a9add6933e9e537e89729b4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"350f4cca0a9add6933e9e537e89729b4\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:14:52.733072 kubelet[2715]: I0913 00:14:52.733064 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/350f4cca0a9add6933e9e537e89729b4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"350f4cca0a9add6933e9e537e89729b4\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:14:52.733072 kubelet[2715]: I0913 00:14:52.733092 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:14:52.733307 kubelet[2715]: I0913 00:14:52.733111 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:14:52.733307 kubelet[2715]: I0913 00:14:52.733198 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:14:52.733307 kubelet[2715]: I0913 00:14:52.733265 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/350f4cca0a9add6933e9e537e89729b4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"350f4cca0a9add6933e9e537e89729b4\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:14:52.733307 kubelet[2715]: I0913 00:14:52.733285 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:14:52.733425 kubelet[2715]: I0913 00:14:52.733308 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:14:52.733425 kubelet[2715]: I0913 00:14:52.733364 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:14:52.858486 kubelet[2715]: I0913 00:14:52.858345 2715 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 13 00:14:52.858486 kubelet[2715]: I0913 00:14:52.858441 2715 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:14:52.864664 kubelet[2715]: E0913 00:14:52.864625 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:14:52.864664 kubelet[2715]: E0913 00:14:52.864666 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:14:52.864855 kubelet[2715]: E0913 00:14:52.864625 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:14:53.418769 kubelet[2715]: I0913 00:14:53.418725 2715 apiserver.go:52] "Watching apiserver"
Sep 13 00:14:53.431872 kubelet[2715]: I0913 00:14:53.431812 2715 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 00:14:53.464545 kubelet[2715]: E0913 00:14:53.464357 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:14:53.464545 kubelet[2715]: E0913 00:14:53.464407 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:14:53.492083 kubelet[2715]: E0913 00:14:53.492025 2715 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:14:53.492368 kubelet[2715]: E0913 00:14:53.492255 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:14:53.860393 kubelet[2715]: I0913 00:14:53.860006 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8599700449999998 podStartE2EDuration="1.859970045s" podCreationTimestamp="2025-09-13 00:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:14:53.815074954 +0000 UTC m=+1.478003471" watchObservedRunningTime="2025-09-13 00:14:53.859970045 +0000 UTC m=+1.522898562"
Sep 13 00:14:53.874686 kubelet[2715]: I0913 00:14:53.873780 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8737561390000002 podStartE2EDuration="1.873756139s" podCreationTimestamp="2025-09-13 00:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:14:53.860558547 +0000 UTC m=+1.523487064" watchObservedRunningTime="2025-09-13 00:14:53.873756139 +0000 UTC m=+1.536684656"
Sep 13 00:14:53.878337 kubelet[2715]: I0913 00:14:53.878262 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.877826303 podStartE2EDuration="1.877826303s" podCreationTimestamp="2025-09-13 00:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:14:53.868178523 +0000 UTC m=+1.531107040" watchObservedRunningTime="2025-09-13 00:14:53.877826303 +0000 UTC m=+1.540754820"
Sep 13 00:14:54.466447 kubelet[2715]: E0913 00:14:54.466375 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:14:56.065429 kubelet[2715]: E0913 00:14:56.065350 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:14:57.640892 kubelet[2715]: I0913 00:14:57.640820 2715 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 00:14:57.641664 containerd[1576]: time="2025-09-13T00:14:57.641414820Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 00:14:57.642107 kubelet[2715]: I0913 00:14:57.641740 2715 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 00:14:58.174292 kubelet[2715]: E0913 00:14:58.174239 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:14:58.474225 kubelet[2715]: E0913 00:14:58.474001 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:14:58.574089 kubelet[2715]: I0913 00:14:58.574040 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4f985a67-1173-4ae9-a635-ab796347f81a-kube-proxy\") pod \"kube-proxy-hk85n\" (UID: \"4f985a67-1173-4ae9-a635-ab796347f81a\") " pod="kube-system/kube-proxy-hk85n"
Sep 13 00:14:58.574089 kubelet[2715]: I0913 00:14:58.574081 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f985a67-1173-4ae9-a635-ab796347f81a-xtables-lock\") pod \"kube-proxy-hk85n\" (UID: \"4f985a67-1173-4ae9-a635-ab796347f81a\") " pod="kube-system/kube-proxy-hk85n"
Sep 13 00:14:58.574296 kubelet[2715]: I0913 00:14:58.574117 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f985a67-1173-4ae9-a635-ab796347f81a-lib-modules\") pod \"kube-proxy-hk85n\" (UID: \"4f985a67-1173-4ae9-a635-ab796347f81a\") " pod="kube-system/kube-proxy-hk85n"
Sep 13 00:14:58.574296 kubelet[2715]: I0913 00:14:58.574142 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf2sh\" (UniqueName: \"kubernetes.io/projected/4f985a67-1173-4ae9-a635-ab796347f81a-kube-api-access-xf2sh\") pod \"kube-proxy-hk85n\" (UID: \"4f985a67-1173-4ae9-a635-ab796347f81a\") " pod="kube-system/kube-proxy-hk85n"
Sep 13 00:14:58.681592 kubelet[2715]: E0913 00:14:58.681525 2715 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 13 00:14:58.681592 kubelet[2715]: E0913 00:14:58.681574 2715 projected.go:194] Error preparing data for projected volume kube-api-access-xf2sh for pod kube-system/kube-proxy-hk85n: configmap "kube-root-ca.crt" not found
Sep 13 00:14:58.682199 kubelet[2715]: E0913 00:14:58.681678 2715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f985a67-1173-4ae9-a635-ab796347f81a-kube-api-access-xf2sh podName:4f985a67-1173-4ae9-a635-ab796347f81a nodeName:}" failed. No retries permitted until 2025-09-13 00:14:59.181628879 +0000 UTC m=+6.844557396 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xf2sh" (UniqueName: "kubernetes.io/projected/4f985a67-1173-4ae9-a635-ab796347f81a-kube-api-access-xf2sh") pod "kube-proxy-hk85n" (UID: "4f985a67-1173-4ae9-a635-ab796347f81a") : configmap "kube-root-ca.crt" not found
Sep 13 00:14:58.875776 kubelet[2715]: I0913 00:14:58.875640 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4656z\" (UniqueName: \"kubernetes.io/projected/1c15da79-d841-49f9-a72c-6944c0189f8c-kube-api-access-4656z\") pod \"tigera-operator-58fc44c59b-xq9jc\" (UID: \"1c15da79-d841-49f9-a72c-6944c0189f8c\") " pod="tigera-operator/tigera-operator-58fc44c59b-xq9jc"
Sep 13 00:14:58.875776 kubelet[2715]: I0913 00:14:58.875697 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1c15da79-d841-49f9-a72c-6944c0189f8c-var-lib-calico\") pod \"tigera-operator-58fc44c59b-xq9jc\" (UID: \"1c15da79-d841-49f9-a72c-6944c0189f8c\") " pod="tigera-operator/tigera-operator-58fc44c59b-xq9jc"
Sep 13 00:14:59.172156 containerd[1576]: time="2025-09-13T00:14:59.172102012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-xq9jc,Uid:1c15da79-d841-49f9-a72c-6944c0189f8c,Namespace:tigera-operator,Attempt:0,}"
Sep 13 00:14:59.390860 kubelet[2715]: E0913 00:14:59.390812 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:14:59.391636 containerd[1576]: time="2025-09-13T00:14:59.391237648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hk85n,Uid:4f985a67-1173-4ae9-a635-ab796347f81a,Namespace:kube-system,Attempt:0,}"
Sep 13 00:14:59.482251 containerd[1576]: time="2025-09-13T00:14:59.481988012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:14:59.482251 containerd[1576]: time="2025-09-13T00:14:59.482082062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:14:59.482251 containerd[1576]: time="2025-09-13T00:14:59.482095056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:14:59.482251 containerd[1576]: time="2025-09-13T00:14:59.482226333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:14:59.491643 containerd[1576]: time="2025-09-13T00:14:59.491359632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:14:59.491643 containerd[1576]: time="2025-09-13T00:14:59.491427625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:14:59.491643 containerd[1576]: time="2025-09-13T00:14:59.491442081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:14:59.491643 containerd[1576]: time="2025-09-13T00:14:59.491561527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:14:59.553706 containerd[1576]: time="2025-09-13T00:14:59.553644083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hk85n,Uid:4f985a67-1173-4ae9-a635-ab796347f81a,Namespace:kube-system,Attempt:0,} returns sandbox id \"20541b31746f535f5650a0bc9446899b79dd64193e9d1955e068201f1c6bc0dc\""
Sep 13 00:14:59.555662 kubelet[2715]: E0913 00:14:59.554542 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:14:59.556623 containerd[1576]: time="2025-09-13T00:14:59.556521498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-xq9jc,Uid:1c15da79-d841-49f9-a72c-6944c0189f8c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5cfeabf8731bf6c356fe95e38fbd79ff174a1726ae1d40523c04895e640d3a2f\""
Sep 13 00:14:59.559463 containerd[1576]: time="2025-09-13T00:14:59.559431211Z" level=info msg="CreateContainer within sandbox \"20541b31746f535f5650a0bc9446899b79dd64193e9d1955e068201f1c6bc0dc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:14:59.559511 containerd[1576]: time="2025-09-13T00:14:59.559473277Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 13 00:14:59.581527 containerd[1576]: time="2025-09-13T00:14:59.581445106Z" level=info msg="CreateContainer within sandbox \"20541b31746f535f5650a0bc9446899b79dd64193e9d1955e068201f1c6bc0dc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"72d007878d725c2d9b2e87d177c3c0810bbd11c90d06e89bc8e3f46d2cd72322\""
Sep 13 00:14:59.582296 containerd[1576]: time="2025-09-13T00:14:59.582262141Z" level=info msg="StartContainer for \"72d007878d725c2d9b2e87d177c3c0810bbd11c90d06e89bc8e3f46d2cd72322\""
Sep 13 00:14:59.803515 containerd[1576]: time="2025-09-13T00:14:59.803300142Z" level=info msg="StartContainer for \"72d007878d725c2d9b2e87d177c3c0810bbd11c90d06e89bc8e3f46d2cd72322\" returns successfully"
Sep 13 00:15:00.478857 kubelet[2715]: E0913 00:15:00.478748 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:15:01.096955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752851234.mount: Deactivated successfully.
Sep 13 00:15:01.532479 containerd[1576]: time="2025-09-13T00:15:01.532410209Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:15:01.535683 containerd[1576]: time="2025-09-13T00:15:01.535639086Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 13 00:15:01.575151 containerd[1576]: time="2025-09-13T00:15:01.575079447Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:15:01.577601 containerd[1576]: time="2025-09-13T00:15:01.577557282Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:15:01.578238 containerd[1576]: time="2025-09-13T00:15:01.578193667Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.018677191s"
Sep 13 00:15:01.578344 containerd[1576]: time="2025-09-13T00:15:01.578226366Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 13 00:15:01.580285 containerd[1576]: time="2025-09-13T00:15:01.580255397Z" level=info msg="CreateContainer within sandbox \"5cfeabf8731bf6c356fe95e38fbd79ff174a1726ae1d40523c04895e640d3a2f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 00:15:01.597187 containerd[1576]: time="2025-09-13T00:15:01.597117552Z" level=info msg="CreateContainer within sandbox \"5cfeabf8731bf6c356fe95e38fbd79ff174a1726ae1d40523c04895e640d3a2f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"13c6556114ed1e1295c28813be6656f85aebe22729e56f67c6aab49ad790d819\""
Sep 13 00:15:01.597772 containerd[1576]: time="2025-09-13T00:15:01.597734681Z" level=info msg="StartContainer for \"13c6556114ed1e1295c28813be6656f85aebe22729e56f67c6aab49ad790d819\""
Sep 13 00:15:01.666409 containerd[1576]: time="2025-09-13T00:15:01.666357317Z" level=info msg="StartContainer for \"13c6556114ed1e1295c28813be6656f85aebe22729e56f67c6aab49ad790d819\" returns successfully"
Sep 13 00:15:01.947298 kubelet[2715]: E0913 00:15:01.947235 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:15:01.968203 kubelet[2715]: I0913 00:15:01.968094 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hk85n" podStartSLOduration=3.9680683119999998 podStartE2EDuration="3.968068312s" podCreationTimestamp="2025-09-13 00:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:15:00.516191188 +0000 UTC m=+8.179119705" watchObservedRunningTime="2025-09-13 00:15:01.968068312 +0000 UTC m=+9.630996839"
Sep 13 00:15:02.484463 kubelet[2715]: E0913 00:15:02.484398 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:15:02.538300 kubelet[2715]: I0913 00:15:02.535588 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-xq9jc" podStartSLOduration=2.514773903 podStartE2EDuration="4.535558535s" podCreationTimestamp="2025-09-13 00:14:58 +0000 UTC" firstStartedPulling="2025-09-13 00:14:59.558212291 +0000 UTC m=+7.221140808" lastFinishedPulling="2025-09-13 00:15:01.578996923 +0000 UTC m=+9.241925440" observedRunningTime="2025-09-13 00:15:02.535473942 +0000 UTC m=+10.198402459" watchObservedRunningTime="2025-09-13 00:15:02.535558535 +0000 UTC m=+10.198487052"
Sep 13 00:15:06.071269 kubelet[2715]: E0913 00:15:06.071209 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:15:07.679942 sudo[1784]: pam_unix(sudo:session): session closed for user root
Sep 13 00:15:07.684276 sshd[1775]: pam_unix(sshd:session): session closed for user core
Sep 13 00:15:07.688513 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:46222.service: Deactivated successfully.
Sep 13 00:15:07.699502 systemd-logind[1564]: Session 7 logged out. Waiting for processes to exit.
Sep 13 00:15:07.700616 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 00:15:07.703414 systemd-logind[1564]: Removed session 7.
Sep 13 00:15:10.458721 kubelet[2715]: I0913 00:15:10.458658 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa053b09-ce71-422a-87cd-0c8038cc258a-tigera-ca-bundle\") pod \"calico-typha-57f98f5c4c-pz27p\" (UID: \"fa053b09-ce71-422a-87cd-0c8038cc258a\") " pod="calico-system/calico-typha-57f98f5c4c-pz27p"
Sep 13 00:15:10.458721 kubelet[2715]: I0913 00:15:10.458708 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgz5q\" (UniqueName: \"kubernetes.io/projected/fa053b09-ce71-422a-87cd-0c8038cc258a-kube-api-access-hgz5q\") pod \"calico-typha-57f98f5c4c-pz27p\" (UID: \"fa053b09-ce71-422a-87cd-0c8038cc258a\") " pod="calico-system/calico-typha-57f98f5c4c-pz27p"
Sep 13 00:15:10.458721 kubelet[2715]: I0913 00:15:10.458731 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fa053b09-ce71-422a-87cd-0c8038cc258a-typha-certs\") pod \"calico-typha-57f98f5c4c-pz27p\" (UID: \"fa053b09-ce71-422a-87cd-0c8038cc258a\") " pod="calico-system/calico-typha-57f98f5c4c-pz27p"
Sep 13 00:15:10.724905 kubelet[2715]: E0913 00:15:10.724572 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:15:10.725852 containerd[1576]: time="2025-09-13T00:15:10.725810697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57f98f5c4c-pz27p,Uid:fa053b09-ce71-422a-87cd-0c8038cc258a,Namespace:calico-system,Attempt:0,}"
Sep 13 00:15:10.761988 containerd[1576]: time="2025-09-13T00:15:10.761845436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:15:10.761988 containerd[1576]: time="2025-09-13T00:15:10.761924071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:15:10.761988 containerd[1576]: time="2025-09-13T00:15:10.761944229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:15:10.762518 containerd[1576]: time="2025-09-13T00:15:10.762083375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:15:10.844812 containerd[1576]: time="2025-09-13T00:15:10.844755321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57f98f5c4c-pz27p,Uid:fa053b09-ce71-422a-87cd-0c8038cc258a,Namespace:calico-system,Attempt:0,} returns sandbox id \"0dda4bbdcbb6488145e1251fc5a56096f4e46863181f094f064fd95589245af3\""
Sep 13 00:15:10.845692 kubelet[2715]: E0913 00:15:10.845650 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:15:10.846951 containerd[1576]: time="2025-09-13T00:15:10.846928072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 13 00:15:10.861487 kubelet[2715]: I0913 00:15:10.861362 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0bc41ed7-9404-4fb9-933e-6fc773ea2b18-var-lib-calico\") pod \"calico-node-2zgbl\" (UID: \"0bc41ed7-9404-4fb9-933e-6fc773ea2b18\") " pod="calico-system/calico-node-2zgbl"
Sep 13 00:15:10.861487 kubelet[2715]: I0913 00:15:10.861405 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0bc41ed7-9404-4fb9-933e-6fc773ea2b18-var-run-calico\") pod \"calico-node-2zgbl\" (UID: \"0bc41ed7-9404-4fb9-933e-6fc773ea2b18\") " pod="calico-system/calico-node-2zgbl"
Sep 13 00:15:10.861487 kubelet[2715]: I0913 00:15:10.861423 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bc41ed7-9404-4fb9-933e-6fc773ea2b18-xtables-lock\") pod \"calico-node-2zgbl\" (UID: \"0bc41ed7-9404-4fb9-933e-6fc773ea2b18\") " pod="calico-system/calico-node-2zgbl"
Sep 13 00:15:10.861487 kubelet[2715]: I0913 00:15:10.861473 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0bc41ed7-9404-4fb9-933e-6fc773ea2b18-cni-log-dir\") pod \"calico-node-2zgbl\" (UID: \"0bc41ed7-9404-4fb9-933e-6fc773ea2b18\") " pod="calico-system/calico-node-2zgbl"
Sep 13 00:15:10.861487 kubelet[2715]: I0913 00:15:10.861490 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0bc41ed7-9404-4fb9-933e-6fc773ea2b18-flexvol-driver-host\") pod \"calico-node-2zgbl\" (UID: \"0bc41ed7-9404-4fb9-933e-6fc773ea2b18\") " pod="calico-system/calico-node-2zgbl"
Sep 13 00:15:10.861855 kubelet[2715]: I0913 00:15:10.861511 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bc41ed7-9404-4fb9-933e-6fc773ea2b18-lib-modules\") pod \"calico-node-2zgbl\" (UID: \"0bc41ed7-9404-4fb9-933e-6fc773ea2b18\") " pod="calico-system/calico-node-2zgbl"
Sep 13 00:15:10.861855 kubelet[2715]: I0913 00:15:10.861524 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54c9x\" (UniqueName: \"kubernetes.io/projected/0bc41ed7-9404-4fb9-933e-6fc773ea2b18-kube-api-access-54c9x\") pod \"calico-node-2zgbl\" (UID: \"0bc41ed7-9404-4fb9-933e-6fc773ea2b18\") " pod="calico-system/calico-node-2zgbl"
Sep 13 00:15:10.861855 kubelet[2715]: I0913 00:15:10.861541 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0bc41ed7-9404-4fb9-933e-6fc773ea2b18-cni-bin-dir\") pod \"calico-node-2zgbl\" (UID: \"0bc41ed7-9404-4fb9-933e-6fc773ea2b18\") " pod="calico-system/calico-node-2zgbl"
Sep 13 00:15:10.861855 kubelet[2715]: I0913 00:15:10.861556 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0bc41ed7-9404-4fb9-933e-6fc773ea2b18-cni-net-dir\") pod \"calico-node-2zgbl\" (UID: \"0bc41ed7-9404-4fb9-933e-6fc773ea2b18\") " pod="calico-system/calico-node-2zgbl"
Sep 13 00:15:10.861855 kubelet[2715]: I0913 00:15:10.861573 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0bc41ed7-9404-4fb9-933e-6fc773ea2b18-node-certs\") pod \"calico-node-2zgbl\" (UID: \"0bc41ed7-9404-4fb9-933e-6fc773ea2b18\") " pod="calico-system/calico-node-2zgbl"
Sep 13 00:15:10.862059 kubelet[2715]: I0913 00:15:10.861599 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0bc41ed7-9404-4fb9-933e-6fc773ea2b18-policysync\") pod \"calico-node-2zgbl\" (UID: \"0bc41ed7-9404-4fb9-933e-6fc773ea2b18\") " pod="calico-system/calico-node-2zgbl"
Sep 13 00:15:10.862059 kubelet[2715]: I0913 00:15:10.861615 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bc41ed7-9404-4fb9-933e-6fc773ea2b18-tigera-ca-bundle\") pod \"calico-node-2zgbl\" (UID: \"0bc41ed7-9404-4fb9-933e-6fc773ea2b18\") " pod="calico-system/calico-node-2zgbl"
Sep 13 00:15:10.966024 kubelet[2715]: E0913 00:15:10.965433 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:10.966024 kubelet[2715]: W0913 00:15:10.965466 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:10.966024 kubelet[2715]: E0913 00:15:10.965499 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:10.969049 kubelet[2715]: E0913 00:15:10.968993 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:10.969049 kubelet[2715]: W0913 00:15:10.969027 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:10.969049 kubelet[2715]: E0913 00:15:10.969052 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:10.972931 kubelet[2715]: E0913 00:15:10.972894 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:10.972931 kubelet[2715]: W0913 00:15:10.972915 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:10.972931 kubelet[2715]: E0913 00:15:10.972937 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.094371 kubelet[2715]: E0913 00:15:11.094177 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxtsh" podUID="aadb0960-6652-43de-9a7a-a3600967a9bb"
Sep 13 00:15:11.119789 containerd[1576]: time="2025-09-13T00:15:11.119717205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2zgbl,Uid:0bc41ed7-9404-4fb9-933e-6fc773ea2b18,Namespace:calico-system,Attempt:0,}"
Sep 13 00:15:11.154111 containerd[1576]: time="2025-09-13T00:15:11.153053218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:15:11.154111 containerd[1576]: time="2025-09-13T00:15:11.153831673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:15:11.154111 containerd[1576]: time="2025-09-13T00:15:11.153843915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:15:11.154111 containerd[1576]: time="2025-09-13T00:15:11.153959329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:15:11.155686 kubelet[2715]: E0913 00:15:11.155441 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.155686 kubelet[2715]: W0913 00:15:11.155473 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.155686 kubelet[2715]: E0913 00:15:11.155504 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.155896 kubelet[2715]: E0913 00:15:11.155871 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.155896 kubelet[2715]: W0913 00:15:11.155888 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.155988 kubelet[2715]: E0913 00:15:11.155906 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.156193 kubelet[2715]: E0913 00:15:11.156175 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.156193 kubelet[2715]: W0913 00:15:11.156190 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.156299 kubelet[2715]: E0913 00:15:11.156203 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.156478 kubelet[2715]: E0913 00:15:11.156459 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.156478 kubelet[2715]: W0913 00:15:11.156477 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.156562 kubelet[2715]: E0913 00:15:11.156490 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.156767 kubelet[2715]: E0913 00:15:11.156745 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.156767 kubelet[2715]: W0913 00:15:11.156760 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.156870 kubelet[2715]: E0913 00:15:11.156773 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.157023 kubelet[2715]: E0913 00:15:11.157007 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.157023 kubelet[2715]: W0913 00:15:11.157020 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.157121 kubelet[2715]: E0913 00:15:11.157031 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.157290 kubelet[2715]: E0913 00:15:11.157274 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.157290 kubelet[2715]: W0913 00:15:11.157289 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.157427 kubelet[2715]: E0913 00:15:11.157302 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.157586 kubelet[2715]: E0913 00:15:11.157569 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.157586 kubelet[2715]: W0913 00:15:11.157585 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.157681 kubelet[2715]: E0913 00:15:11.157598 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.158000 kubelet[2715]: E0913 00:15:11.157865 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.158000 kubelet[2715]: W0913 00:15:11.157882 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.158000 kubelet[2715]: E0913 00:15:11.157896 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.158222 kubelet[2715]: E0913 00:15:11.158205 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.158309 kubelet[2715]: W0913 00:15:11.158292 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.158541 kubelet[2715]: E0913 00:15:11.158419 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.158838 kubelet[2715]: E0913 00:15:11.158693 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.158838 kubelet[2715]: W0913 00:15:11.158708 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.158838 kubelet[2715]: E0913 00:15:11.158722 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.159036 kubelet[2715]: E0913 00:15:11.159020 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.159135 kubelet[2715]: W0913 00:15:11.159117 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.159222 kubelet[2715]: E0913 00:15:11.159205 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.159716 kubelet[2715]: E0913 00:15:11.159582 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.159716 kubelet[2715]: W0913 00:15:11.159597 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.159716 kubelet[2715]: E0913 00:15:11.159610 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.159924 kubelet[2715]: E0913 00:15:11.159910 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.159991 kubelet[2715]: W0913 00:15:11.159978 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.160161 kubelet[2715]: E0913 00:15:11.160044 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.160457 kubelet[2715]: E0913 00:15:11.160299 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.160457 kubelet[2715]: W0913 00:15:11.160334 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.160457 kubelet[2715]: E0913 00:15:11.160348 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.160660 kubelet[2715]: E0913 00:15:11.160646 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.160726 kubelet[2715]: W0913 00:15:11.160714 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.160782 kubelet[2715]: E0913 00:15:11.160772 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.161053 kubelet[2715]: E0913 00:15:11.161041 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.161214 kubelet[2715]: W0913 00:15:11.161118 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.161214 kubelet[2715]: E0913 00:15:11.161133 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.161403 kubelet[2715]: E0913 00:15:11.161387 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.161481 kubelet[2715]: W0913 00:15:11.161467 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.161549 kubelet[2715]: E0913 00:15:11.161536 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.161866 kubelet[2715]: E0913 00:15:11.161851 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.162046 kubelet[2715]: W0913 00:15:11.161926 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.162046 kubelet[2715]: E0913 00:15:11.161944 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.162234 kubelet[2715]: E0913 00:15:11.162219 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.162307 kubelet[2715]: W0913 00:15:11.162292 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.162414 kubelet[2715]: E0913 00:15:11.162399 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.164536 kubelet[2715]: E0913 00:15:11.164515 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.164795 kubelet[2715]: W0913 00:15:11.164631 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.164795 kubelet[2715]: E0913 00:15:11.164651 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.164795 kubelet[2715]: I0913 00:15:11.164684 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aadb0960-6652-43de-9a7a-a3600967a9bb-kubelet-dir\") pod \"csi-node-driver-lxtsh\" (UID: \"aadb0960-6652-43de-9a7a-a3600967a9bb\") " pod="calico-system/csi-node-driver-lxtsh"
Sep 13 00:15:11.165049 kubelet[2715]: E0913 00:15:11.165014 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.165049 kubelet[2715]: W0913 00:15:11.165031 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.165379 kubelet[2715]: E0913 00:15:11.165172 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.165379 kubelet[2715]: I0913 00:15:11.165196 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aadb0960-6652-43de-9a7a-a3600967a9bb-socket-dir\") pod \"csi-node-driver-lxtsh\" (UID: \"aadb0960-6652-43de-9a7a-a3600967a9bb\") " pod="calico-system/csi-node-driver-lxtsh"
Sep 13 00:15:11.165582 kubelet[2715]: E0913 00:15:11.165565 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.165671 kubelet[2715]: W0913 00:15:11.165641 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.165871 kubelet[2715]: E0913 00:15:11.165842 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.166141 kubelet[2715]: E0913 00:15:11.165981 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.166141 kubelet[2715]: W0913 00:15:11.165993 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.166141 kubelet[2715]: E0913 00:15:11.166012 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.166372 kubelet[2715]: E0913 00:15:11.166350 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.166447 kubelet[2715]: W0913 00:15:11.166431 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.166547 kubelet[2715]: E0913 00:15:11.166531 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.166785 kubelet[2715]: I0913 00:15:11.166759 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76t9c\" (UniqueName: \"kubernetes.io/projected/aadb0960-6652-43de-9a7a-a3600967a9bb-kube-api-access-76t9c\") pod \"csi-node-driver-lxtsh\" (UID: \"aadb0960-6652-43de-9a7a-a3600967a9bb\") " pod="calico-system/csi-node-driver-lxtsh"
Sep 13 00:15:11.168353 kubelet[2715]: E0913 00:15:11.167608 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.168353 kubelet[2715]: W0913 00:15:11.167646 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.168353 kubelet[2715]: E0913 00:15:11.167685 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.168353 kubelet[2715]: E0913 00:15:11.168039 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.168353 kubelet[2715]: W0913 00:15:11.168050 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.168353 kubelet[2715]: E0913 00:15:11.168073 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.168579 kubelet[2715]: E0913 00:15:11.168369 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.168579 kubelet[2715]: W0913 00:15:11.168382 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.168579 kubelet[2715]: E0913 00:15:11.168398 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.170377 kubelet[2715]: E0913 00:15:11.168769 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.170377 kubelet[2715]: W0913 00:15:11.168786 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.170377 kubelet[2715]: E0913 00:15:11.168798 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.170377 kubelet[2715]: E0913 00:15:11.169165 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.170377 kubelet[2715]: W0913 00:15:11.169182 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.170377 kubelet[2715]: E0913 00:15:11.169194 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.170377 kubelet[2715]: I0913 00:15:11.169273 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aadb0960-6652-43de-9a7a-a3600967a9bb-registration-dir\") pod \"csi-node-driver-lxtsh\" (UID: \"aadb0960-6652-43de-9a7a-a3600967a9bb\") " pod="calico-system/csi-node-driver-lxtsh"
Sep 13 00:15:11.171558 kubelet[2715]: E0913 00:15:11.171523 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.171558 kubelet[2715]: W0913 00:15:11.171550 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.171664 kubelet[2715]: E0913 00:15:11.171571 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.172651 kubelet[2715]: E0913 00:15:11.171819 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.172651 kubelet[2715]: W0913 00:15:11.171837 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.172651 kubelet[2715]: E0913 00:15:11.171993 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.174541 kubelet[2715]: E0913 00:15:11.173985 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.174541 kubelet[2715]: W0913 00:15:11.174005 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.174541 kubelet[2715]: E0913 00:15:11.174022 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.174973 kubelet[2715]: I0913 00:15:11.174934 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/aadb0960-6652-43de-9a7a-a3600967a9bb-varrun\") pod \"csi-node-driver-lxtsh\" (UID: \"aadb0960-6652-43de-9a7a-a3600967a9bb\") " pod="calico-system/csi-node-driver-lxtsh"
Sep 13 00:15:11.175948 kubelet[2715]: E0913 00:15:11.175493 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.175948 kubelet[2715]: W0913 00:15:11.175799 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.175948 kubelet[2715]: E0913 00:15:11.175816 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.176602 kubelet[2715]: E0913 00:15:11.176572 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.176602 kubelet[2715]: W0913 00:15:11.176589 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.176602 kubelet[2715]: E0913 00:15:11.176601 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.208560 containerd[1576]: time="2025-09-13T00:15:11.208500024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2zgbl,Uid:0bc41ed7-9404-4fb9-933e-6fc773ea2b18,Namespace:calico-system,Attempt:0,} returns sandbox id \"93ec7d90b884d9747b07e72ef2661c32a0e0bfdb7baae17824d7208b22823db0\""
Sep 13 00:15:11.277457 kubelet[2715]: E0913 00:15:11.277398 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.277457 kubelet[2715]: W0913 00:15:11.277424 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.277457 kubelet[2715]: E0913 00:15:11.277452 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.277860 kubelet[2715]: E0913 00:15:11.277829 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.277906 kubelet[2715]: W0913 00:15:11.277858 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.277906 kubelet[2715]: E0913 00:15:11.277893 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.278216 kubelet[2715]: E0913 00:15:11.278196 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.278216 kubelet[2715]: W0913 00:15:11.278212 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.278339 kubelet[2715]: E0913 00:15:11.278234 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.278577 kubelet[2715]: E0913 00:15:11.278551 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.278577 kubelet[2715]: W0913 00:15:11.278567 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.278577 kubelet[2715]: E0913 00:15:11.278580 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.278868 kubelet[2715]: E0913 00:15:11.278837 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.278868 kubelet[2715]: W0913 00:15:11.278849 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.278868 kubelet[2715]: E0913 00:15:11.278860 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.279123 kubelet[2715]: E0913 00:15:11.279100 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.279123 kubelet[2715]: W0913 00:15:11.279119 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.279207 kubelet[2715]: E0913 00:15:11.279143 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.279502 kubelet[2715]: E0913 00:15:11.279483 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.279502 kubelet[2715]: W0913 00:15:11.279497 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.279613 kubelet[2715]: E0913 00:15:11.279530 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.279768 kubelet[2715]: E0913 00:15:11.279751 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.279768 kubelet[2715]: W0913 00:15:11.279765 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.279887 kubelet[2715]: E0913 00:15:11.279806 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.279990 kubelet[2715]: E0913 00:15:11.279973 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.279990 kubelet[2715]: W0913 00:15:11.279984 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.280071 kubelet[2715]: E0913 00:15:11.280020 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.280204 kubelet[2715]: E0913 00:15:11.280184 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.280204 kubelet[2715]: W0913 00:15:11.280195 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.280269 kubelet[2715]: E0913 00:15:11.280222 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.280412 kubelet[2715]: E0913 00:15:11.280393 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.280412 kubelet[2715]: W0913 00:15:11.280403 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.280488 kubelet[2715]: E0913 00:15:11.280417 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.280619 kubelet[2715]: E0913 00:15:11.280598 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.280619 kubelet[2715]: W0913 00:15:11.280612 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.280684 kubelet[2715]: E0913 00:15:11.280626 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.280889 kubelet[2715]: E0913 00:15:11.280869 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.280889 kubelet[2715]: W0913 00:15:11.280880 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.280955 kubelet[2715]: E0913 00:15:11.280892 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.281073 kubelet[2715]: E0913 00:15:11.281056 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.281073 kubelet[2715]: W0913 00:15:11.281067 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.281137 kubelet[2715]: E0913 00:15:11.281079 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.281325 kubelet[2715]: E0913 00:15:11.281295 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.281387 kubelet[2715]: W0913 00:15:11.281309 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.281387 kubelet[2715]: E0913 00:15:11.281364 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.281524 kubelet[2715]: E0913 00:15:11.281507 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.281524 kubelet[2715]: W0913 00:15:11.281519 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.281602 kubelet[2715]: E0913 00:15:11.281550 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.281712 kubelet[2715]: E0913 00:15:11.281694 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.281712 kubelet[2715]: W0913 00:15:11.281708 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.281790 kubelet[2715]: E0913 00:15:11.281743 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.281925 kubelet[2715]: E0913 00:15:11.281907 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.281925 kubelet[2715]: W0913 00:15:11.281921 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.281988 kubelet[2715]: E0913 00:15:11.281938 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.282136 kubelet[2715]: E0913 00:15:11.282120 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.282136 kubelet[2715]: W0913 00:15:11.282131 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.282223 kubelet[2715]: E0913 00:15:11.282145 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.282446 kubelet[2715]: E0913 00:15:11.282423 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.282446 kubelet[2715]: W0913 00:15:11.282437 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.282535 kubelet[2715]: E0913 00:15:11.282456 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.282726 kubelet[2715]: E0913 00:15:11.282703 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.282726 kubelet[2715]: W0913 00:15:11.282715 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.282806 kubelet[2715]: E0913 00:15:11.282733 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:11.282973 kubelet[2715]: E0913 00:15:11.282952 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:11.282973 kubelet[2715]: W0913 00:15:11.282965 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:11.283041 kubelet[2715]: E0913 00:15:11.282983 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:15:11.283323 kubelet[2715]: E0913 00:15:11.283275 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:11.283323 kubelet[2715]: W0913 00:15:11.283289 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:11.283323 kubelet[2715]: E0913 00:15:11.283307 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:11.283570 kubelet[2715]: E0913 00:15:11.283544 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:11.283570 kubelet[2715]: W0913 00:15:11.283559 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:11.283570 kubelet[2715]: E0913 00:15:11.283569 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:11.289154 kubelet[2715]: E0913 00:15:11.289084 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:11.289154 kubelet[2715]: W0913 00:15:11.289121 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:11.289154 kubelet[2715]: E0913 00:15:11.289140 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:11.296035 kubelet[2715]: E0913 00:15:11.295979 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:11.296035 kubelet[2715]: W0913 00:15:11.296024 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:11.296035 kubelet[2715]: E0913 00:15:11.296048 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:12.453507 kubelet[2715]: E0913 00:15:12.453427 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxtsh" podUID="aadb0960-6652-43de-9a7a-a3600967a9bb" Sep 13 00:15:13.150211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480434330.mount: Deactivated successfully. 
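The burst above is kubelet's FlexVolume plugin probe. On each scan of /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it execs every driver binary with the argument init and parses stdout as JSON; Calico's nodeagent~uds/uds driver has not been installed yet (the flexvol-driver container that ships it only runs at 00:15:16 below), so the exec fails, stdout is empty, and unmarshalling "" yields "unexpected end of JSON input". The csi-node-driver pod error is the same bootstrap gap from the other side: Calico has not yet written a CNI config, so the runtime still reports NetworkReady=false. Below is a minimal sketch of the contract the probe expects; it is a hypothetical stand-in, not Calico's actual uds binary.

// flexvol.go: hypothetical FlexVolume driver stub. kubelet execs the binary
// with "init" and parses its stdout as JSON; an empty stdout is exactly what
// produces the "unexpected end of JSON input" errors above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any call the driver does not implement must still answer in JSON.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}

Once a binary like this exists at the probed path, init returns valid JSON and the probe errors stop.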
Sep 13 00:15:14.421809 containerd[1576]: time="2025-09-13T00:15:14.421714405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:15:14.422663 containerd[1576]: time="2025-09-13T00:15:14.422608358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 13 00:15:14.424329 containerd[1576]: time="2025-09-13T00:15:14.424279798Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:15:14.428079 containerd[1576]: time="2025-09-13T00:15:14.428023549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:15:14.428867 containerd[1576]: time="2025-09-13T00:15:14.428818650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.581704975s"
Sep 13 00:15:14.428867 containerd[1576]: time="2025-09-13T00:15:14.428869895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 13 00:15:14.430180 containerd[1576]: time="2025-09-13T00:15:14.430111841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:15:14.453817 kubelet[2715]: E0913 00:15:14.453733 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxtsh" podUID="aadb0960-6652-43de-9a7a-a3600967a9bb"
Sep 13 00:15:14.454808 containerd[1576]: time="2025-09-13T00:15:14.454745994Z" level=info msg="CreateContainer within sandbox \"0dda4bbdcbb6488145e1251fc5a56096f4e46863181f094f064fd95589245af3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 00:15:14.477330 containerd[1576]: time="2025-09-13T00:15:14.477257814Z" level=info msg="CreateContainer within sandbox \"0dda4bbdcbb6488145e1251fc5a56096f4e46863181f094f064fd95589245af3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7e1e009d8e1eb3ac70e2f2a5faf85635887773ae389f74b84dc2754b35585907\""
Sep 13 00:15:14.481363 containerd[1576]: time="2025-09-13T00:15:14.481281823Z" level=info msg="StartContainer for \"7e1e009d8e1eb3ac70e2f2a5faf85635887773ae389f74b84dc2754b35585907\""
Sep 13 00:15:14.646297 containerd[1576]: time="2025-09-13T00:15:14.646227435Z" level=info msg="StartContainer for \"7e1e009d8e1eb3ac70e2f2a5faf85635887773ae389f74b84dc2754b35585907\" returns successfully"
Sep 13 00:15:15.525273 kubelet[2715]: E0913 00:15:15.525202 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:15:15.547174 kubelet[2715]: I0913 00:15:15.546679 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57f98f5c4c-pz27p" podStartSLOduration=1.963332361 podStartE2EDuration="5.546655914s" podCreationTimestamp="2025-09-13 00:15:10 +0000 UTC" firstStartedPulling="2025-09-13 00:15:10.846537884 +0000 UTC m=+18.509466401" lastFinishedPulling="2025-09-13 00:15:14.429861437 +0000 UTC m=+22.092789954" observedRunningTime="2025-09-13 00:15:15.546341322 +0000 UTC m=+23.209269840" watchObservedRunningTime="2025-09-13 00:15:15.546655914 +0000 UTC m=+23.209584432"
[The three-line FlexVolume probe failure from 00:15:11 repeats, timestamps aside, about thirty more times between 00:15:15.594 and 00:15:15.620.]
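The pod_startup_latency_tracker record above is internally consistent: podStartE2EDuration spans podCreationTimestamp to watchObservedRunningTime, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A sketch reproducing the arithmetic, with names taken from the log line rather than from kubelet's source:

// latency.go: recompute the kubelet startup-latency figures logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-09-13 00:15:10 +0000 UTC")
	firstPull, _ := time.Parse(layout, "2025-09-13 00:15:10.846537884 +0000 UTC")
	lastPull, _ := time.Parse(layout, "2025-09-13 00:15:14.429861437 +0000 UTC")
	running, _ := time.Parse(layout, "2025-09-13 00:15:15.546655914 +0000 UTC")

	e2e := running.Sub(created)     // podStartE2EDuration: 5.546655914s
	pull := lastPull.Sub(firstPull) // image-pull window: 3.583323553s
	slo := e2e - pull               // podStartSLOduration: 1.963332361s
	fmt.Println(e2e, pull, slo)
}

5.546655914s minus 3.583323553s is exactly the logged podStartSLOduration of 1.963332361s, and the 3.583s pull window agrees, to within a couple of milliseconds, with containerd's own "Pulled image ... in 3.581704975s" earlier in this span.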
Error: unexpected end of JSON input" Sep 13 00:15:16.455691 kubelet[2715]: E0913 00:15:16.455626 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxtsh" podUID="aadb0960-6652-43de-9a7a-a3600967a9bb" Sep 13 00:15:16.527538 kubelet[2715]: E0913 00:15:16.527498 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:15:16.571552 containerd[1576]: time="2025-09-13T00:15:16.571467357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:16.573078 containerd[1576]: time="2025-09-13T00:15:16.573003593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:15:16.574418 containerd[1576]: time="2025-09-13T00:15:16.574350567Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:16.579356 containerd[1576]: time="2025-09-13T00:15:16.577972575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:16.580108 containerd[1576]: time="2025-09-13T00:15:16.580043901Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.14989392s" Sep 13 00:15:16.580108 containerd[1576]: time="2025-09-13T00:15:16.580093343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:15:16.582538 containerd[1576]: time="2025-09-13T00:15:16.582498388Z" level=info msg="CreateContainer within sandbox \"93ec7d90b884d9747b07e72ef2661c32a0e0bfdb7baae17824d7208b22823db0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:15:16.605760 kubelet[2715]: E0913 00:15:16.605702 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.605760 kubelet[2715]: W0913 00:15:16.605735 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.605760 kubelet[2715]: E0913 00:15:16.605763 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:15:16.606068 kubelet[2715]: E0913 00:15:16.606051 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.606068 kubelet[2715]: W0913 00:15:16.606063 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.606068 kubelet[2715]: E0913 00:15:16.606072 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.606451 kubelet[2715]: E0913 00:15:16.606433 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.606451 kubelet[2715]: W0913 00:15:16.606447 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.606451 kubelet[2715]: E0913 00:15:16.606458 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.606711 kubelet[2715]: E0913 00:15:16.606688 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.606711 kubelet[2715]: W0913 00:15:16.606700 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.606711 kubelet[2715]: E0913 00:15:16.606710 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.606947 kubelet[2715]: E0913 00:15:16.606932 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.606947 kubelet[2715]: W0913 00:15:16.606944 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.607006 kubelet[2715]: E0913 00:15:16.606952 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.607150 kubelet[2715]: E0913 00:15:16.607136 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.607150 kubelet[2715]: W0913 00:15:16.607147 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.607200 kubelet[2715]: E0913 00:15:16.607156 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:15:16.607369 kubelet[2715]: E0913 00:15:16.607353 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.607369 kubelet[2715]: W0913 00:15:16.607367 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.607427 kubelet[2715]: E0913 00:15:16.607378 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.607623 kubelet[2715]: E0913 00:15:16.607608 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.607623 kubelet[2715]: W0913 00:15:16.607621 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.607670 kubelet[2715]: E0913 00:15:16.607631 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.607918 kubelet[2715]: E0913 00:15:16.607891 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.607918 kubelet[2715]: W0913 00:15:16.607908 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.607974 kubelet[2715]: E0913 00:15:16.607921 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.608150 kubelet[2715]: E0913 00:15:16.608128 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.608150 kubelet[2715]: W0913 00:15:16.608140 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.608201 kubelet[2715]: E0913 00:15:16.608150 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.608378 kubelet[2715]: E0913 00:15:16.608359 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.608378 kubelet[2715]: W0913 00:15:16.608372 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.608378 kubelet[2715]: E0913 00:15:16.608382 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:15:16.608621 kubelet[2715]: E0913 00:15:16.608601 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.608621 kubelet[2715]: W0913 00:15:16.608612 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.608621 kubelet[2715]: E0913 00:15:16.608622 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.608900 kubelet[2715]: E0913 00:15:16.608879 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.608900 kubelet[2715]: W0913 00:15:16.608893 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.608985 kubelet[2715]: E0913 00:15:16.608906 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.609163 kubelet[2715]: E0913 00:15:16.609144 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.609163 kubelet[2715]: W0913 00:15:16.609161 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.609256 kubelet[2715]: E0913 00:15:16.609172 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.609602 kubelet[2715]: E0913 00:15:16.609563 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.611277 kubelet[2715]: W0913 00:15:16.609601 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.611277 kubelet[2715]: E0913 00:15:16.609637 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.620129 kubelet[2715]: E0913 00:15:16.620086 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.620129 kubelet[2715]: W0913 00:15:16.620117 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.620349 kubelet[2715]: E0913 00:15:16.620147 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:15:16.620641 kubelet[2715]: E0913 00:15:16.620587 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.620641 kubelet[2715]: W0913 00:15:16.620607 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.620641 kubelet[2715]: E0913 00:15:16.620629 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.621081 kubelet[2715]: E0913 00:15:16.621055 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.621148 kubelet[2715]: W0913 00:15:16.621082 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.621148 kubelet[2715]: E0913 00:15:16.621107 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.621488 kubelet[2715]: E0913 00:15:16.621469 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.621488 kubelet[2715]: W0913 00:15:16.621486 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.621592 kubelet[2715]: E0913 00:15:16.621507 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.621853 kubelet[2715]: E0913 00:15:16.621834 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.621853 kubelet[2715]: W0913 00:15:16.621851 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.621927 kubelet[2715]: E0913 00:15:16.621869 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:15:16.622150 kubelet[2715]: E0913 00:15:16.622132 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:15:16.622150 kubelet[2715]: W0913 00:15:16.622147 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:15:16.622221 kubelet[2715]: E0913 00:15:16.622181 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 13 00:15:16.622456 kubelet[2715]: E0913 00:15:16.622436 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:16.622456 kubelet[2715]: W0913 00:15:16.622454 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:16.622614 kubelet[2715]: E0913 00:15:16.622563 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:16.628372 kubelet[2715]: E0913 00:15:16.628354 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:15:16.628372 kubelet[2715]: W0913 00:15:16.628369 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:15:16.628447 kubelet[2715]: E0913 00:15:16.628382 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:15:16.775132 containerd[1576]: time="2025-09-13T00:15:16.774948638Z" level=info msg="CreateContainer within sandbox \"93ec7d90b884d9747b07e72ef2661c32a0e0bfdb7baae17824d7208b22823db0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1682e555c91a605e539f3b98c90b05cc3c0270779c578baf6a33dd2663b48f23\""
Sep 13 00:15:16.775611 containerd[1576]: time="2025-09-13T00:15:16.775584116Z" level=info msg="StartContainer for \"1682e555c91a605e539f3b98c90b05cc3c0270779c578baf6a33dd2663b48f23\""
Sep 13 00:15:16.870646 containerd[1576]: time="2025-09-13T00:15:16.870579147Z" level=info msg="StartContainer for \"1682e555c91a605e539f3b98c90b05cc3c0270779c578baf6a33dd2663b48f23\" returns successfully"
Sep 13 00:15:16.919763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1682e555c91a605e539f3b98c90b05cc3c0270779c578baf6a33dd2663b48f23-rootfs.mount: Deactivated successfully.
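The probe failures above, which recur every few milliseconds, come from the kubelet's dynamic plugin prober shelling out to a FlexVolume driver binary (/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds) that does not exist yet; with no stdout to parse, driver-call.go:262 surfaces the stock encoding/json error. A minimal Go sketch of that failure mode, in which the driverStatus shape is illustrative rather than the kubelet's exact type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus loosely mirrors the JSON a FlexVolume driver must print in
    // response to "init"; the field set is illustrative, not the kubelet's type.
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // The uds binary is missing, so the driver call yields output: "".
        // Unmarshalling an empty document fails with exactly the message
        // logged at driver-call.go:262.
        var st driverStatus
        if err := json.Unmarshal([]byte(""), &st); err != nil {
            fmt.Println(err) // unexpected end of JSON input
        }

        // A present, well-behaved driver would have printed something like:
        out, _ := json.Marshal(driverStatus{
            Status:       "Success",
            Capabilities: map[string]bool{"attach": false},
        })
        fmt.Println(string(out)) // {"status":"Success","capabilities":{"attach":false}}
    }

The flexvol-driver container started immediately afterwards is the Calico init container that installs that uds binary, which is consistent with the probe errors not recurring later in the log.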
Sep 13 00:15:16.944016 containerd[1576]: time="2025-09-13T00:15:16.941739067Z" level=info msg="shim disconnected" id=1682e555c91a605e539f3b98c90b05cc3c0270779c578baf6a33dd2663b48f23 namespace=k8s.io Sep 13 00:15:16.944016 containerd[1576]: time="2025-09-13T00:15:16.944007087Z" level=warning msg="cleaning up after shim disconnected" id=1682e555c91a605e539f3b98c90b05cc3c0270779c578baf6a33dd2663b48f23 namespace=k8s.io Sep 13 00:15:16.944016 containerd[1576]: time="2025-09-13T00:15:16.944025963Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:15:17.543536 kubelet[2715]: E0913 00:15:17.543482 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:15:17.549578 containerd[1576]: time="2025-09-13T00:15:17.549524016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:15:18.455365 kubelet[2715]: E0913 00:15:18.455246 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxtsh" podUID="aadb0960-6652-43de-9a7a-a3600967a9bb" Sep 13 00:15:20.453364 kubelet[2715]: E0913 00:15:20.453278 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxtsh" podUID="aadb0960-6652-43de-9a7a-a3600967a9bb" Sep 13 00:15:21.997922 containerd[1576]: time="2025-09-13T00:15:21.997867337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:21.998804 containerd[1576]: time="2025-09-13T00:15:21.998695176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:15:22.000174 containerd[1576]: time="2025-09-13T00:15:22.000141384Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:22.002906 containerd[1576]: time="2025-09-13T00:15:22.002557605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:22.003582 containerd[1576]: time="2025-09-13T00:15:22.003542147Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.453955244s" Sep 13 00:15:22.003649 containerd[1576]: time="2025-09-13T00:15:22.003590406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:15:22.011294 containerd[1576]: time="2025-09-13T00:15:22.011231766Z" level=info msg="CreateContainer within sandbox \"93ec7d90b884d9747b07e72ef2661c32a0e0bfdb7baae17824d7208b22823db0\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:15:22.029652 containerd[1576]: time="2025-09-13T00:15:22.029597296Z" level=info msg="CreateContainer within sandbox \"93ec7d90b884d9747b07e72ef2661c32a0e0bfdb7baae17824d7208b22823db0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"84644e84e3e73315530ef1289b869455b73aecb7c4cb6c7c2abffaeb979dcf57\"" Sep 13 00:15:22.030195 containerd[1576]: time="2025-09-13T00:15:22.030164220Z" level=info msg="StartContainer for \"84644e84e3e73315530ef1289b869455b73aecb7c4cb6c7c2abffaeb979dcf57\"" Sep 13 00:15:22.064249 systemd[1]: run-containerd-runc-k8s.io-84644e84e3e73315530ef1289b869455b73aecb7c4cb6c7c2abffaeb979dcf57-runc.DJjAm9.mount: Deactivated successfully. Sep 13 00:15:22.406412 containerd[1576]: time="2025-09-13T00:15:22.405541257Z" level=info msg="StartContainer for \"84644e84e3e73315530ef1289b869455b73aecb7c4cb6c7c2abffaeb979dcf57\" returns successfully" Sep 13 00:15:22.454545 kubelet[2715]: E0913 00:15:22.454451 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lxtsh" podUID="aadb0960-6652-43de-9a7a-a3600967a9bb" Sep 13 00:15:23.861890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84644e84e3e73315530ef1289b869455b73aecb7c4cb6c7c2abffaeb979dcf57-rootfs.mount: Deactivated successfully. Sep 13 00:15:23.911188 containerd[1576]: time="2025-09-13T00:15:23.911097920Z" level=info msg="shim disconnected" id=84644e84e3e73315530ef1289b869455b73aecb7c4cb6c7c2abffaeb979dcf57 namespace=k8s.io Sep 13 00:15:23.911188 containerd[1576]: time="2025-09-13T00:15:23.911175184Z" level=warning msg="cleaning up after shim disconnected" id=84644e84e3e73315530ef1289b869455b73aecb7c4cb6c7c2abffaeb979dcf57 namespace=k8s.io Sep 13 00:15:23.911188 containerd[1576]: time="2025-09-13T00:15:23.911191734Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:15:23.953693 kubelet[2715]: I0913 00:15:23.952822 2715 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:15:24.077703 kubelet[2715]: I0913 00:15:24.077608 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10029b49-ba84-4580-85a1-aa34fcb74230-config-volume\") pod \"coredns-7c65d6cfc9-rbsdl\" (UID: \"10029b49-ba84-4580-85a1-aa34fcb74230\") " pod="kube-system/coredns-7c65d6cfc9-rbsdl" Sep 13 00:15:24.077703 kubelet[2715]: I0913 00:15:24.077679 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtct7\" (UniqueName: \"kubernetes.io/projected/1571c160-5cc3-4360-89ff-652e3fc95a50-kube-api-access-wtct7\") pod \"calico-apiserver-d5556b5db-vwqmg\" (UID: \"1571c160-5cc3-4360-89ff-652e3fc95a50\") " pod="calico-apiserver/calico-apiserver-d5556b5db-vwqmg" Sep 13 00:15:24.077703 kubelet[2715]: I0913 00:15:24.077705 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e0877890-9528-461c-9cba-50fe493297ef-calico-apiserver-certs\") pod \"calico-apiserver-d5556b5db-ljn92\" (UID: \"e0877890-9528-461c-9cba-50fe493297ef\") " pod="calico-apiserver/calico-apiserver-d5556b5db-ljn92" Sep 13 00:15:24.077703 kubelet[2715]: I0913 00:15:24.077723 2715 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bktlv\" (UniqueName: \"kubernetes.io/projected/e0877890-9528-461c-9cba-50fe493297ef-kube-api-access-bktlv\") pod \"calico-apiserver-d5556b5db-ljn92\" (UID: \"e0877890-9528-461c-9cba-50fe493297ef\") " pod="calico-apiserver/calico-apiserver-d5556b5db-ljn92" Sep 13 00:15:24.078025 kubelet[2715]: I0913 00:15:24.077746 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvl4l\" (UniqueName: \"kubernetes.io/projected/feb52a36-15cd-4f86-a703-48f9b874433f-kube-api-access-cvl4l\") pod \"coredns-7c65d6cfc9-jvqvw\" (UID: \"feb52a36-15cd-4f86-a703-48f9b874433f\") " pod="kube-system/coredns-7c65d6cfc9-jvqvw" Sep 13 00:15:24.078025 kubelet[2715]: I0913 00:15:24.077766 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssh5d\" (UniqueName: \"kubernetes.io/projected/59de3468-bd8b-4b91-bcdd-b230839ec151-kube-api-access-ssh5d\") pod \"calico-kube-controllers-699fc4d87c-lvdzr\" (UID: \"59de3468-bd8b-4b91-bcdd-b230839ec151\") " pod="calico-system/calico-kube-controllers-699fc4d87c-lvdzr" Sep 13 00:15:24.078025 kubelet[2715]: I0913 00:15:24.077788 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1571c160-5cc3-4360-89ff-652e3fc95a50-calico-apiserver-certs\") pod \"calico-apiserver-d5556b5db-vwqmg\" (UID: \"1571c160-5cc3-4360-89ff-652e3fc95a50\") " pod="calico-apiserver/calico-apiserver-d5556b5db-vwqmg" Sep 13 00:15:24.078025 kubelet[2715]: I0913 00:15:24.077808 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7af872b-9925-42a7-9b22-c2f24c9b6443-goldmane-ca-bundle\") pod \"goldmane-7988f88666-zxh96\" (UID: \"f7af872b-9925-42a7-9b22-c2f24c9b6443\") " pod="calico-system/goldmane-7988f88666-zxh96" Sep 13 00:15:24.078025 kubelet[2715]: I0913 00:15:24.077831 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/256d1bd6-e1a5-4733-8ae9-387ed13a8847-whisker-backend-key-pair\") pod \"whisker-77ff67458c-fc2pm\" (UID: \"256d1bd6-e1a5-4733-8ae9-387ed13a8847\") " pod="calico-system/whisker-77ff67458c-fc2pm" Sep 13 00:15:24.078189 kubelet[2715]: I0913 00:15:24.077850 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/256d1bd6-e1a5-4733-8ae9-387ed13a8847-whisker-ca-bundle\") pod \"whisker-77ff67458c-fc2pm\" (UID: \"256d1bd6-e1a5-4733-8ae9-387ed13a8847\") " pod="calico-system/whisker-77ff67458c-fc2pm" Sep 13 00:15:24.078189 kubelet[2715]: I0913 00:15:24.077873 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59de3468-bd8b-4b91-bcdd-b230839ec151-tigera-ca-bundle\") pod \"calico-kube-controllers-699fc4d87c-lvdzr\" (UID: \"59de3468-bd8b-4b91-bcdd-b230839ec151\") " pod="calico-system/calico-kube-controllers-699fc4d87c-lvdzr" Sep 13 00:15:24.078189 kubelet[2715]: I0913 00:15:24.077895 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/f7af872b-9925-42a7-9b22-c2f24c9b6443-goldmane-key-pair\") pod \"goldmane-7988f88666-zxh96\" (UID: \"f7af872b-9925-42a7-9b22-c2f24c9b6443\") " pod="calico-system/goldmane-7988f88666-zxh96" Sep 13 00:15:24.078189 kubelet[2715]: I0913 00:15:24.077912 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/feb52a36-15cd-4f86-a703-48f9b874433f-config-volume\") pod \"coredns-7c65d6cfc9-jvqvw\" (UID: \"feb52a36-15cd-4f86-a703-48f9b874433f\") " pod="kube-system/coredns-7c65d6cfc9-jvqvw" Sep 13 00:15:24.078189 kubelet[2715]: I0913 00:15:24.077929 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7af872b-9925-42a7-9b22-c2f24c9b6443-config\") pod \"goldmane-7988f88666-zxh96\" (UID: \"f7af872b-9925-42a7-9b22-c2f24c9b6443\") " pod="calico-system/goldmane-7988f88666-zxh96" Sep 13 00:15:24.078377 kubelet[2715]: I0913 00:15:24.077946 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwkx7\" (UniqueName: \"kubernetes.io/projected/f7af872b-9925-42a7-9b22-c2f24c9b6443-kube-api-access-qwkx7\") pod \"goldmane-7988f88666-zxh96\" (UID: \"f7af872b-9925-42a7-9b22-c2f24c9b6443\") " pod="calico-system/goldmane-7988f88666-zxh96" Sep 13 00:15:24.078377 kubelet[2715]: I0913 00:15:24.077964 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgt6c\" (UniqueName: \"kubernetes.io/projected/256d1bd6-e1a5-4733-8ae9-387ed13a8847-kube-api-access-mgt6c\") pod \"whisker-77ff67458c-fc2pm\" (UID: \"256d1bd6-e1a5-4733-8ae9-387ed13a8847\") " pod="calico-system/whisker-77ff67458c-fc2pm" Sep 13 00:15:24.078377 kubelet[2715]: I0913 00:15:24.077984 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwt54\" (UniqueName: \"kubernetes.io/projected/10029b49-ba84-4580-85a1-aa34fcb74230-kube-api-access-wwt54\") pod \"coredns-7c65d6cfc9-rbsdl\" (UID: \"10029b49-ba84-4580-85a1-aa34fcb74230\") " pod="kube-system/coredns-7c65d6cfc9-rbsdl" Sep 13 00:15:24.301925 containerd[1576]: time="2025-09-13T00:15:24.301681040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699fc4d87c-lvdzr,Uid:59de3468-bd8b-4b91-bcdd-b230839ec151,Namespace:calico-system,Attempt:0,}" Sep 13 00:15:24.304309 kubelet[2715]: E0913 00:15:24.304123 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:15:24.305263 containerd[1576]: time="2025-09-13T00:15:24.305213104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jvqvw,Uid:feb52a36-15cd-4f86-a703-48f9b874433f,Namespace:kube-system,Attempt:0,}" Sep 13 00:15:24.305344 containerd[1576]: time="2025-09-13T00:15:24.305266424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77ff67458c-fc2pm,Uid:256d1bd6-e1a5-4733-8ae9-387ed13a8847,Namespace:calico-system,Attempt:0,}" Sep 13 00:15:24.305972 containerd[1576]: time="2025-09-13T00:15:24.305594223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5556b5db-vwqmg,Uid:1571c160-5cc3-4360-89ff-652e3fc95a50,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:15:24.305972 containerd[1576]: time="2025-09-13T00:15:24.305219576Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5556b5db-ljn92,Uid:e0877890-9528-461c-9cba-50fe493297ef,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:15:24.306418 kubelet[2715]: E0913 00:15:24.306365 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:15:24.306664 containerd[1576]: time="2025-09-13T00:15:24.306593674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zxh96,Uid:f7af872b-9925-42a7-9b22-c2f24c9b6443,Namespace:calico-system,Attempt:0,}" Sep 13 00:15:24.307067 containerd[1576]: time="2025-09-13T00:15:24.306996444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rbsdl,Uid:10029b49-ba84-4580-85a1-aa34fcb74230,Namespace:kube-system,Attempt:0,}" Sep 13 00:15:24.488857 containerd[1576]: time="2025-09-13T00:15:24.488409872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxtsh,Uid:aadb0960-6652-43de-9a7a-a3600967a9bb,Namespace:calico-system,Attempt:0,}" Sep 13 00:15:24.578170 containerd[1576]: time="2025-09-13T00:15:24.577812339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:15:24.601405 containerd[1576]: time="2025-09-13T00:15:24.601283389Z" level=error msg="Failed to destroy network for sandbox \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.604380 containerd[1576]: time="2025-09-13T00:15:24.601860483Z" level=error msg="encountered an error cleaning up failed sandbox \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.604380 containerd[1576]: time="2025-09-13T00:15:24.601922298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jvqvw,Uid:feb52a36-15cd-4f86-a703-48f9b874433f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.610361 kubelet[2715]: E0913 00:15:24.610275 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.610610 kubelet[2715]: E0913 00:15:24.610421 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-jvqvw" Sep 13 00:15:24.610610 kubelet[2715]: E0913 00:15:24.610450 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-jvqvw" Sep 13 00:15:24.610610 kubelet[2715]: E0913 00:15:24.610513 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-jvqvw_kube-system(feb52a36-15cd-4f86-a703-48f9b874433f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-jvqvw_kube-system(feb52a36-15cd-4f86-a703-48f9b874433f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-jvqvw" podUID="feb52a36-15cd-4f86-a703-48f9b874433f" Sep 13 00:15:24.625926 containerd[1576]: time="2025-09-13T00:15:24.625869845Z" level=error msg="Failed to destroy network for sandbox \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.626792 containerd[1576]: time="2025-09-13T00:15:24.626766574Z" level=error msg="encountered an error cleaning up failed sandbox \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.626963 containerd[1576]: time="2025-09-13T00:15:24.626923706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5556b5db-ljn92,Uid:e0877890-9528-461c-9cba-50fe493297ef,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.628613 kubelet[2715]: E0913 00:15:24.627515 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.628613 kubelet[2715]: E0913 00:15:24.627652 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d5556b5db-ljn92" Sep 13 00:15:24.628613 kubelet[2715]: E0913 00:15:24.627678 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d5556b5db-ljn92" Sep 13 00:15:24.628744 kubelet[2715]: E0913 00:15:24.627725 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d5556b5db-ljn92_calico-apiserver(e0877890-9528-461c-9cba-50fe493297ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d5556b5db-ljn92_calico-apiserver(e0877890-9528-461c-9cba-50fe493297ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5556b5db-ljn92" podUID="e0877890-9528-461c-9cba-50fe493297ef" Sep 13 00:15:24.636494 containerd[1576]: time="2025-09-13T00:15:24.636390749Z" level=error msg="Failed to destroy network for sandbox \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.637601 containerd[1576]: time="2025-09-13T00:15:24.637553783Z" level=error msg="encountered an error cleaning up failed sandbox \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.637671 containerd[1576]: time="2025-09-13T00:15:24.637639392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5556b5db-vwqmg,Uid:1571c160-5cc3-4360-89ff-652e3fc95a50,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.638043 kubelet[2715]: E0913 00:15:24.637975 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.638518 kubelet[2715]: E0913 00:15:24.638057 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d5556b5db-vwqmg" Sep 13 00:15:24.638518 kubelet[2715]: E0913 00:15:24.638083 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d5556b5db-vwqmg" Sep 13 00:15:24.638518 kubelet[2715]: E0913 00:15:24.638130 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d5556b5db-vwqmg_calico-apiserver(1571c160-5cc3-4360-89ff-652e3fc95a50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d5556b5db-vwqmg_calico-apiserver(1571c160-5cc3-4360-89ff-652e3fc95a50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5556b5db-vwqmg" podUID="1571c160-5cc3-4360-89ff-652e3fc95a50" Sep 13 00:15:24.648394 containerd[1576]: time="2025-09-13T00:15:24.648339410Z" level=error msg="Failed to destroy network for sandbox \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.650897 containerd[1576]: time="2025-09-13T00:15:24.650830226Z" level=error msg="encountered an error cleaning up failed sandbox \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.651046 containerd[1576]: time="2025-09-13T00:15:24.650905476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699fc4d87c-lvdzr,Uid:59de3468-bd8b-4b91-bcdd-b230839ec151,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.651190 kubelet[2715]: E0913 00:15:24.651150 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.651262 kubelet[2715]: E0913 00:15:24.651220 2715 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-699fc4d87c-lvdzr" Sep 13 00:15:24.651262 kubelet[2715]: E0913 00:15:24.651242 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-699fc4d87c-lvdzr" Sep 13 00:15:24.651435 kubelet[2715]: E0913 00:15:24.651285 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-699fc4d87c-lvdzr_calico-system(59de3468-bd8b-4b91-bcdd-b230839ec151)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-699fc4d87c-lvdzr_calico-system(59de3468-bd8b-4b91-bcdd-b230839ec151)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-699fc4d87c-lvdzr" podUID="59de3468-bd8b-4b91-bcdd-b230839ec151" Sep 13 00:15:24.652156 containerd[1576]: time="2025-09-13T00:15:24.651904025Z" level=error msg="Failed to destroy network for sandbox \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.653852 containerd[1576]: time="2025-09-13T00:15:24.653817347Z" level=error msg="encountered an error cleaning up failed sandbox \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.653971 containerd[1576]: time="2025-09-13T00:15:24.653947770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77ff67458c-fc2pm,Uid:256d1bd6-e1a5-4733-8ae9-387ed13a8847,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.654297 kubelet[2715]: E0913 00:15:24.654248 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.654397 kubelet[2715]: E0913 00:15:24.654340 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77ff67458c-fc2pm" Sep 13 00:15:24.654397 kubelet[2715]: E0913 00:15:24.654365 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77ff67458c-fc2pm" Sep 13 00:15:24.654564 kubelet[2715]: E0913 00:15:24.654473 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77ff67458c-fc2pm_calico-system(256d1bd6-e1a5-4733-8ae9-387ed13a8847)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77ff67458c-fc2pm_calico-system(256d1bd6-e1a5-4733-8ae9-387ed13a8847)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77ff67458c-fc2pm" podUID="256d1bd6-e1a5-4733-8ae9-387ed13a8847" Sep 13 00:15:24.657962 containerd[1576]: time="2025-09-13T00:15:24.657920364Z" level=error msg="Failed to destroy network for sandbox \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.658618 containerd[1576]: time="2025-09-13T00:15:24.658591804Z" level=error msg="encountered an error cleaning up failed sandbox \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.658786 containerd[1576]: time="2025-09-13T00:15:24.658730793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rbsdl,Uid:10029b49-ba84-4580-85a1-aa34fcb74230,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.659472 kubelet[2715]: E0913 00:15:24.659400 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.659549 kubelet[2715]: E0913 00:15:24.659498 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rbsdl" Sep 13 00:15:24.659549 kubelet[2715]: E0913 00:15:24.659524 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rbsdl" Sep 13 00:15:24.659622 kubelet[2715]: E0913 00:15:24.659574 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rbsdl_kube-system(10029b49-ba84-4580-85a1-aa34fcb74230)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rbsdl_kube-system(10029b49-ba84-4580-85a1-aa34fcb74230)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rbsdl" podUID="10029b49-ba84-4580-85a1-aa34fcb74230" Sep 13 00:15:24.663999 containerd[1576]: time="2025-09-13T00:15:24.663937194Z" level=error msg="Failed to destroy network for sandbox \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.664447 containerd[1576]: time="2025-09-13T00:15:24.664395367Z" level=error msg="encountered an error cleaning up failed sandbox \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.664517 containerd[1576]: time="2025-09-13T00:15:24.664457963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zxh96,Uid:f7af872b-9925-42a7-9b22-c2f24c9b6443,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.665117 kubelet[2715]: E0913 00:15:24.664706 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.665117 kubelet[2715]: E0913 00:15:24.664785 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-zxh96" Sep 13 00:15:24.665117 kubelet[2715]: E0913 00:15:24.664816 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-zxh96" Sep 13 00:15:24.666437 kubelet[2715]: E0913 00:15:24.664872 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-zxh96_calico-system(f7af872b-9925-42a7-9b22-c2f24c9b6443)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-zxh96_calico-system(f7af872b-9925-42a7-9b22-c2f24c9b6443)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-zxh96" podUID="f7af872b-9925-42a7-9b22-c2f24c9b6443" Sep 13 00:15:24.675898 containerd[1576]: time="2025-09-13T00:15:24.675836093Z" level=error msg="Failed to destroy network for sandbox \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.676417 containerd[1576]: time="2025-09-13T00:15:24.676372462Z" level=error msg="encountered an error cleaning up failed sandbox \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.676467 containerd[1576]: time="2025-09-13T00:15:24.676442502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxtsh,Uid:aadb0960-6652-43de-9a7a-a3600967a9bb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.676710 kubelet[2715]: E0913 00:15:24.676646 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:24.676710 kubelet[2715]: E0913 00:15:24.676715 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxtsh" Sep 13 00:15:24.676980 kubelet[2715]: E0913 00:15:24.676738 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lxtsh" Sep 13 00:15:24.676980 kubelet[2715]: E0913 00:15:24.676781 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lxtsh_calico-system(aadb0960-6652-43de-9a7a-a3600967a9bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lxtsh_calico-system(aadb0960-6652-43de-9a7a-a3600967a9bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lxtsh" podUID="aadb0960-6652-43de-9a7a-a3600967a9bb" Sep 13 00:15:25.579076 kubelet[2715]: I0913 00:15:25.579029 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Sep 13 00:15:25.580393 kubelet[2715]: I0913 00:15:25.580352 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Sep 13 00:15:25.615008 kubelet[2715]: I0913 00:15:25.614936 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Sep 13 00:15:25.619066 kubelet[2715]: I0913 00:15:25.618522 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Sep 13 00:15:25.619534 containerd[1576]: time="2025-09-13T00:15:25.619487060Z" level=info msg="StopPodSandbox for \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\"" Sep 13 00:15:25.620300 containerd[1576]: time="2025-09-13T00:15:25.620192594Z" level=info msg="StopPodSandbox for \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\"" Sep 13 00:15:25.621060 containerd[1576]: time="2025-09-13T00:15:25.620997092Z" level=info msg="StopPodSandbox for \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\"" Sep 13 00:15:25.622678 containerd[1576]: 
time="2025-09-13T00:15:25.622646885Z" level=info msg="StopPodSandbox for \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\"" Sep 13 00:15:25.624251 containerd[1576]: time="2025-09-13T00:15:25.624214985Z" level=info msg="Ensure that sandbox 173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616 in task-service has been cleanup successfully" Sep 13 00:15:25.624428 containerd[1576]: time="2025-09-13T00:15:25.624354284Z" level=info msg="Ensure that sandbox c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae in task-service has been cleanup successfully" Sep 13 00:15:25.626250 containerd[1576]: time="2025-09-13T00:15:25.624750071Z" level=info msg="Ensure that sandbox 581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7 in task-service has been cleanup successfully" Sep 13 00:15:25.626250 containerd[1576]: time="2025-09-13T00:15:25.625396295Z" level=info msg="Ensure that sandbox 4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7 in task-service has been cleanup successfully" Sep 13 00:15:25.630214 kubelet[2715]: I0913 00:15:25.630165 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Sep 13 00:15:25.632672 containerd[1576]: time="2025-09-13T00:15:25.632637921Z" level=info msg="StopPodSandbox for \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\"" Sep 13 00:15:25.633481 containerd[1576]: time="2025-09-13T00:15:25.633457397Z" level=info msg="Ensure that sandbox 5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619 in task-service has been cleanup successfully" Sep 13 00:15:25.634760 kubelet[2715]: I0913 00:15:25.634734 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Sep 13 00:15:25.636265 containerd[1576]: time="2025-09-13T00:15:25.636238606Z" level=info msg="StopPodSandbox for \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\"" Sep 13 00:15:25.637897 kubelet[2715]: I0913 00:15:25.637879 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Sep 13 00:15:25.638611 containerd[1576]: time="2025-09-13T00:15:25.638576189Z" level=info msg="Ensure that sandbox 6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929 in task-service has been cleanup successfully" Sep 13 00:15:25.639120 containerd[1576]: time="2025-09-13T00:15:25.639098211Z" level=info msg="StopPodSandbox for \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\"" Sep 13 00:15:25.639810 containerd[1576]: time="2025-09-13T00:15:25.639776495Z" level=info msg="Ensure that sandbox 89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7 in task-service has been cleanup successfully" Sep 13 00:15:25.653158 kubelet[2715]: I0913 00:15:25.653113 2715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Sep 13 00:15:25.656921 containerd[1576]: time="2025-09-13T00:15:25.656389713Z" level=info msg="StopPodSandbox for \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\"" Sep 13 00:15:25.658172 containerd[1576]: time="2025-09-13T00:15:25.658136937Z" level=info msg="Ensure that sandbox 86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f in task-service has been cleanup successfully" 
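Every sandbox operation in this window, ADD (RunPodSandbox) above and, just below, DELETE (StopPodSandbox), fails on the same stat of /var/lib/calico/nodename: the Calico CNI plugin reads that file to learn its node identity, and calico/node creates it only once it is running with /var/lib/calico/ mounted. A rough sketch of that gate, where the path and hint text come straight from the error string and the control flow is an assumption:

    package main

    import (
        "fmt"
        "os"
    )

    // nodenameFile is the path named in every failure above. calico/node
    // writes it after startup; until then any CNI ADD or DEL bails out early.
    const nodenameFile = "/var/lib/calico/nodename"

    func main() {
        // Before calico/node is up, the stat fails with ENOENT, producing the
        // identical error for pod creation and pod teardown alike.
        if _, err := os.Stat(nodenameFile); err != nil {
            fmt.Printf("%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
            return
        }
        name, err := os.ReadFile(nodenameFile)
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        fmt.Printf("CNI would proceed with node name %q\n", string(name))
    }

This is why the kubelet keeps requeueing these pods ("Error syncing pod, skipping"): the same operations are expected to start succeeding as soon as the calico-node pod finishes starting and writes the file.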
Sep 13 00:15:25.696422 containerd[1576]: time="2025-09-13T00:15:25.696352998Z" level=error msg="StopPodSandbox for \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\" failed" error="failed to destroy network for sandbox \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:25.696993 kubelet[2715]: E0913 00:15:25.696758 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Sep 13 00:15:25.696993 kubelet[2715]: E0913 00:15:25.696815 2715 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae"} Sep 13 00:15:25.696993 kubelet[2715]: E0913 00:15:25.696874 2715 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10029b49-ba84-4580-85a1-aa34fcb74230\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:15:25.696993 kubelet[2715]: E0913 00:15:25.696896 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10029b49-ba84-4580-85a1-aa34fcb74230\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rbsdl" podUID="10029b49-ba84-4580-85a1-aa34fcb74230" Sep 13 00:15:25.698906 containerd[1576]: time="2025-09-13T00:15:25.698795226Z" level=error msg="StopPodSandbox for \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\" failed" error="failed to destroy network for sandbox \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:25.699164 kubelet[2715]: E0913 00:15:25.698991 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Sep 13 00:15:25.699164 kubelet[2715]: E0913 00:15:25.699064 2715 kuberuntime_manager.go:1479] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7"} Sep 13 00:15:25.699164 kubelet[2715]: E0913 00:15:25.699090 2715 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7af872b-9925-42a7-9b22-c2f24c9b6443\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:15:25.699164 kubelet[2715]: E0913 00:15:25.699121 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7af872b-9925-42a7-9b22-c2f24c9b6443\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-zxh96" podUID="f7af872b-9925-42a7-9b22-c2f24c9b6443" Sep 13 00:15:25.700528 containerd[1576]: time="2025-09-13T00:15:25.700467069Z" level=error msg="StopPodSandbox for \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\" failed" error="failed to destroy network for sandbox \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:25.701533 kubelet[2715]: E0913 00:15:25.700738 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Sep 13 00:15:25.701533 kubelet[2715]: E0913 00:15:25.700804 2715 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7"} Sep 13 00:15:25.701533 kubelet[2715]: E0913 00:15:25.700864 2715 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"feb52a36-15cd-4f86-a703-48f9b874433f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:15:25.701533 kubelet[2715]: E0913 00:15:25.700898 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"feb52a36-15cd-4f86-a703-48f9b874433f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-jvqvw" podUID="feb52a36-15cd-4f86-a703-48f9b874433f" Sep 13 00:15:25.714492 containerd[1576]: time="2025-09-13T00:15:25.714281045Z" level=error msg="StopPodSandbox for \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\" failed" error="failed to destroy network for sandbox \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:25.715630 kubelet[2715]: E0913 00:15:25.715556 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Sep 13 00:15:25.715630 kubelet[2715]: E0913 00:15:25.715628 2715 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616"} Sep 13 00:15:25.715846 kubelet[2715]: E0913 00:15:25.715666 2715 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1571c160-5cc3-4360-89ff-652e3fc95a50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:15:25.715846 kubelet[2715]: E0913 00:15:25.715691 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1571c160-5cc3-4360-89ff-652e3fc95a50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5556b5db-vwqmg" podUID="1571c160-5cc3-4360-89ff-652e3fc95a50" Sep 13 00:15:25.716271 containerd[1576]: time="2025-09-13T00:15:25.716241636Z" level=error msg="StopPodSandbox for \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\" failed" error="failed to destroy network for sandbox \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:25.716627 kubelet[2715]: E0913 00:15:25.716575 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Sep 13 00:15:25.716688 kubelet[2715]: E0913 00:15:25.716648 2715 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929"} Sep 13 00:15:25.716729 kubelet[2715]: E0913 00:15:25.716687 2715 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"59de3468-bd8b-4b91-bcdd-b230839ec151\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:15:25.716729 kubelet[2715]: E0913 00:15:25.716713 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"59de3468-bd8b-4b91-bcdd-b230839ec151\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-699fc4d87c-lvdzr" podUID="59de3468-bd8b-4b91-bcdd-b230839ec151" Sep 13 00:15:25.716998 containerd[1576]: time="2025-09-13T00:15:25.716954013Z" level=error msg="StopPodSandbox for \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\" failed" error="failed to destroy network for sandbox \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:25.717398 kubelet[2715]: E0913 00:15:25.717277 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Sep 13 00:15:25.717398 kubelet[2715]: E0913 00:15:25.717307 2715 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619"} Sep 13 00:15:25.717398 kubelet[2715]: E0913 00:15:25.717347 2715 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0877890-9528-461c-9cba-50fe493297ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:15:25.717398 kubelet[2715]: E0913 00:15:25.717372 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"e0877890-9528-461c-9cba-50fe493297ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d5556b5db-ljn92" podUID="e0877890-9528-461c-9cba-50fe493297ef" Sep 13 00:15:25.720280 containerd[1576]: time="2025-09-13T00:15:25.720215977Z" level=error msg="StopPodSandbox for \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\" failed" error="failed to destroy network for sandbox \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:25.720481 kubelet[2715]: E0913 00:15:25.720441 2715 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Sep 13 00:15:25.720532 kubelet[2715]: E0913 00:15:25.720490 2715 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f"} Sep 13 00:15:25.720590 kubelet[2715]: E0913 00:15:25.720570 2715 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aadb0960-6652-43de-9a7a-a3600967a9bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:15:25.720643 kubelet[2715]: E0913 00:15:25.720594 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aadb0960-6652-43de-9a7a-a3600967a9bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lxtsh" podUID="aadb0960-6652-43de-9a7a-a3600967a9bb" Sep 13 00:15:25.727106 containerd[1576]: time="2025-09-13T00:15:25.727059412Z" level=error msg="StopPodSandbox for \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\" failed" error="failed to destroy network for sandbox \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:15:25.727330 kubelet[2715]: E0913 00:15:25.727276 2715 log.go:32] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Sep 13 00:15:25.727400 kubelet[2715]: E0913 00:15:25.727336 2715 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7"} Sep 13 00:15:25.727400 kubelet[2715]: E0913 00:15:25.727381 2715 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"256d1bd6-e1a5-4733-8ae9-387ed13a8847\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:15:25.727549 kubelet[2715]: E0913 00:15:25.727402 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"256d1bd6-e1a5-4733-8ae9-387ed13a8847\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77ff67458c-fc2pm" podUID="256d1bd6-e1a5-4733-8ae9-387ed13a8847" Sep 13 00:15:29.674696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount280447380.mount: Deactivated successfully. 
Sep 13 00:15:33.336181 containerd[1576]: time="2025-09-13T00:15:33.335372805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:33.373260 containerd[1576]: time="2025-09-13T00:15:33.373113908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:15:33.535957 containerd[1576]: time="2025-09-13T00:15:33.535877671Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:33.564996 containerd[1576]: time="2025-09-13T00:15:33.564905652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:33.565781 containerd[1576]: time="2025-09-13T00:15:33.565741724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.98787815s" Sep 13 00:15:33.565781 containerd[1576]: time="2025-09-13T00:15:33.565776689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:15:33.577048 containerd[1576]: time="2025-09-13T00:15:33.576923667Z" level=info msg="CreateContainer within sandbox \"93ec7d90b884d9747b07e72ef2661c32a0e0bfdb7baae17824d7208b22823db0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:15:33.908112 containerd[1576]: time="2025-09-13T00:15:33.907949729Z" level=info msg="CreateContainer within sandbox \"93ec7d90b884d9747b07e72ef2661c32a0e0bfdb7baae17824d7208b22823db0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"192e5397d38f384506eb0a82baedea7c914308844c7a29b526202c7f932dfea0\"" Sep 13 00:15:33.908828 containerd[1576]: time="2025-09-13T00:15:33.908778588Z" level=info msg="StartContainer for \"192e5397d38f384506eb0a82baedea7c914308844c7a29b526202c7f932dfea0\"" Sep 13 00:15:34.211474 containerd[1576]: time="2025-09-13T00:15:34.206011029Z" level=info msg="StartContainer for \"192e5397d38f384506eb0a82baedea7c914308844c7a29b526202c7f932dfea0\" returns successfully" Sep 13 00:15:34.250229 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:15:34.254829 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 13 00:15:34.375984 containerd[1576]: time="2025-09-13T00:15:34.375929926Z" level=info msg="StopPodSandbox for \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\"" Sep 13 00:15:34.600059 systemd[1]: Started sshd@7-10.0.0.132:22-10.0.0.1:56536.service - OpenSSH per-connection server daemon (10.0.0.1:56536). Sep 13 00:15:34.653393 containerd[1576]: 2025-09-13 00:15:34.507 [INFO][4030] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Sep 13 00:15:34.653393 containerd[1576]: 2025-09-13 00:15:34.509 [INFO][4030] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns.
ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" iface="eth0" netns="/var/run/netns/cni-eee34529-6133-2d03-c145-04f512f104ce" Sep 13 00:15:34.653393 containerd[1576]: 2025-09-13 00:15:34.510 [INFO][4030] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" iface="eth0" netns="/var/run/netns/cni-eee34529-6133-2d03-c145-04f512f104ce" Sep 13 00:15:34.653393 containerd[1576]: 2025-09-13 00:15:34.515 [INFO][4030] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" iface="eth0" netns="/var/run/netns/cni-eee34529-6133-2d03-c145-04f512f104ce" Sep 13 00:15:34.653393 containerd[1576]: 2025-09-13 00:15:34.517 [INFO][4030] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Sep 13 00:15:34.653393 containerd[1576]: 2025-09-13 00:15:34.517 [INFO][4030] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Sep 13 00:15:34.653393 containerd[1576]: 2025-09-13 00:15:34.632 [INFO][4040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" HandleID="k8s-pod-network.89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Workload="localhost-k8s-whisker--77ff67458c--fc2pm-eth0" Sep 13 00:15:34.653393 containerd[1576]: 2025-09-13 00:15:34.633 [INFO][4040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:34.653393 containerd[1576]: 2025-09-13 00:15:34.633 [INFO][4040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:34.653393 containerd[1576]: 2025-09-13 00:15:34.641 [WARNING][4040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" HandleID="k8s-pod-network.89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Workload="localhost-k8s-whisker--77ff67458c--fc2pm-eth0" Sep 13 00:15:34.653393 containerd[1576]: 2025-09-13 00:15:34.641 [INFO][4040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" HandleID="k8s-pod-network.89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Workload="localhost-k8s-whisker--77ff67458c--fc2pm-eth0" Sep 13 00:15:34.653393 containerd[1576]: 2025-09-13 00:15:34.643 [INFO][4040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:34.653393 containerd[1576]: 2025-09-13 00:15:34.647 [INFO][4030] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Sep 13 00:15:34.654641 containerd[1576]: time="2025-09-13T00:15:34.654585008Z" level=info msg="TearDown network for sandbox \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\" successfully" Sep 13 00:15:34.654641 containerd[1576]: time="2025-09-13T00:15:34.654635864Z" level=info msg="StopPodSandbox for \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\" returns successfully" Sep 13 00:15:34.659961 systemd[1]: run-netns-cni\x2deee34529\x2d6133\x2d2d03\x2dc145\x2d04f512f104ce.mount: Deactivated successfully. 
Sep 13 00:15:34.683064 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 56536 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:15:34.685851 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:34.692278 systemd-logind[1564]: New session 8 of user core. Sep 13 00:15:34.706886 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:15:34.707794 kubelet[2715]: I0913 00:15:34.707420 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2zgbl" podStartSLOduration=2.354994123 podStartE2EDuration="24.707394931s" podCreationTimestamp="2025-09-13 00:15:10 +0000 UTC" firstStartedPulling="2025-09-13 00:15:11.21412973 +0000 UTC m=+18.877058247" lastFinishedPulling="2025-09-13 00:15:33.566530538 +0000 UTC m=+41.229459055" observedRunningTime="2025-09-13 00:15:34.707033135 +0000 UTC m=+42.369961682" watchObservedRunningTime="2025-09-13 00:15:34.707394931 +0000 UTC m=+42.370323448" Sep 13 00:15:34.751243 kubelet[2715]: I0913 00:15:34.751155 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/256d1bd6-e1a5-4733-8ae9-387ed13a8847-whisker-backend-key-pair\") pod \"256d1bd6-e1a5-4733-8ae9-387ed13a8847\" (UID: \"256d1bd6-e1a5-4733-8ae9-387ed13a8847\") " Sep 13 00:15:34.751243 kubelet[2715]: I0913 00:15:34.751219 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/256d1bd6-e1a5-4733-8ae9-387ed13a8847-whisker-ca-bundle\") pod \"256d1bd6-e1a5-4733-8ae9-387ed13a8847\" (UID: \"256d1bd6-e1a5-4733-8ae9-387ed13a8847\") " Sep 13 00:15:34.751243 kubelet[2715]: I0913 00:15:34.751248 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgt6c\" (UniqueName: \"kubernetes.io/projected/256d1bd6-e1a5-4733-8ae9-387ed13a8847-kube-api-access-mgt6c\") pod \"256d1bd6-e1a5-4733-8ae9-387ed13a8847\" (UID: \"256d1bd6-e1a5-4733-8ae9-387ed13a8847\") " Sep 13 00:15:34.751974 kubelet[2715]: I0913 00:15:34.751852 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/256d1bd6-e1a5-4733-8ae9-387ed13a8847-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "256d1bd6-e1a5-4733-8ae9-387ed13a8847" (UID: "256d1bd6-e1a5-4733-8ae9-387ed13a8847"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:15:34.755886 kubelet[2715]: I0913 00:15:34.755829 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/256d1bd6-e1a5-4733-8ae9-387ed13a8847-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "256d1bd6-e1a5-4733-8ae9-387ed13a8847" (UID: "256d1bd6-e1a5-4733-8ae9-387ed13a8847"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:15:34.756803 kubelet[2715]: I0913 00:15:34.756730 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/256d1bd6-e1a5-4733-8ae9-387ed13a8847-kube-api-access-mgt6c" (OuterVolumeSpecName: "kube-api-access-mgt6c") pod "256d1bd6-e1a5-4733-8ae9-387ed13a8847" (UID: "256d1bd6-e1a5-4733-8ae9-387ed13a8847"). InnerVolumeSpecName "kube-api-access-mgt6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:15:34.758547 systemd[1]: var-lib-kubelet-pods-256d1bd6\x2de1a5\x2d4733\x2d8ae9\x2d387ed13a8847-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmgt6c.mount: Deactivated successfully. Sep 13 00:15:34.758823 systemd[1]: var-lib-kubelet-pods-256d1bd6\x2de1a5\x2d4733\x2d8ae9\x2d387ed13a8847-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:15:34.852748 kubelet[2715]: I0913 00:15:34.852585 2715 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/256d1bd6-e1a5-4733-8ae9-387ed13a8847-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:15:34.852748 kubelet[2715]: I0913 00:15:34.852630 2715 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/256d1bd6-e1a5-4733-8ae9-387ed13a8847-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:15:34.852748 kubelet[2715]: I0913 00:15:34.852640 2715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgt6c\" (UniqueName: \"kubernetes.io/projected/256d1bd6-e1a5-4733-8ae9-387ed13a8847-kube-api-access-mgt6c\") on node \"localhost\" DevicePath \"\"" Sep 13 00:15:34.871482 sshd[4046]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:34.877384 systemd[1]: sshd@7-10.0.0.132:22-10.0.0.1:56536.service: Deactivated successfully. Sep 13 00:15:34.880825 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:15:34.880998 systemd-logind[1564]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:15:34.882995 systemd-logind[1564]: Removed session 8. Sep 13 00:15:35.156430 kubelet[2715]: I0913 00:15:35.156177 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26b7f96a-7479-46cf-ae66-c2eff9c9eb3a-whisker-ca-bundle\") pod \"whisker-9794c5674-qhgq9\" (UID: \"26b7f96a-7479-46cf-ae66-c2eff9c9eb3a\") " pod="calico-system/whisker-9794c5674-qhgq9" Sep 13 00:15:35.156587 kubelet[2715]: I0913 00:15:35.156469 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/26b7f96a-7479-46cf-ae66-c2eff9c9eb3a-whisker-backend-key-pair\") pod \"whisker-9794c5674-qhgq9\" (UID: \"26b7f96a-7479-46cf-ae66-c2eff9c9eb3a\") " pod="calico-system/whisker-9794c5674-qhgq9" Sep 13 00:15:35.156587 kubelet[2715]: I0913 00:15:35.156499 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzksh\" (UniqueName: \"kubernetes.io/projected/26b7f96a-7479-46cf-ae66-c2eff9c9eb3a-kube-api-access-wzksh\") pod \"whisker-9794c5674-qhgq9\" (UID: \"26b7f96a-7479-46cf-ae66-c2eff9c9eb3a\") " pod="calico-system/whisker-9794c5674-qhgq9" Sep 13 00:15:35.370516 containerd[1576]: time="2025-09-13T00:15:35.370442768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9794c5674-qhgq9,Uid:26b7f96a-7479-46cf-ae66-c2eff9c9eb3a,Namespace:calico-system,Attempt:0,}" Sep 13 00:15:35.763854 systemd-networkd[1251]: cali64c2f8c7666: Link UP Sep 13 00:15:35.767678 systemd-networkd[1251]: cali64c2f8c7666: Gained carrier Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.638 [INFO][4081] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:15:35.795842 
containerd[1576]: 2025-09-13 00:15:35.650 [INFO][4081] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--9794c5674--qhgq9-eth0 whisker-9794c5674- calico-system 26b7f96a-7479-46cf-ae66-c2eff9c9eb3a 934 0 2025-09-13 00:15:35 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9794c5674 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-9794c5674-qhgq9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali64c2f8c7666 [] [] }} ContainerID="29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" Namespace="calico-system" Pod="whisker-9794c5674-qhgq9" WorkloadEndpoint="localhost-k8s-whisker--9794c5674--qhgq9-" Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.650 [INFO][4081] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" Namespace="calico-system" Pod="whisker-9794c5674-qhgq9" WorkloadEndpoint="localhost-k8s-whisker--9794c5674--qhgq9-eth0" Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.684 [INFO][4095] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" HandleID="k8s-pod-network.29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" Workload="localhost-k8s-whisker--9794c5674--qhgq9-eth0" Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.685 [INFO][4095] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" HandleID="k8s-pod-network.29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" Workload="localhost-k8s-whisker--9794c5674--qhgq9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003af220), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-9794c5674-qhgq9", "timestamp":"2025-09-13 00:15:35.684013107 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.685 [INFO][4095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.685 [INFO][4095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.685 [INFO][4095] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.694 [INFO][4095] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" host="localhost" Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.704 [INFO][4095] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.713 [INFO][4095] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.718 [INFO][4095] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.722 [INFO][4095] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.722 [INFO][4095] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" host="localhost" Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.726 [INFO][4095] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.735 [INFO][4095] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" host="localhost" Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.745 [INFO][4095] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" host="localhost" Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.745 [INFO][4095] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" host="localhost" Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.745 [INFO][4095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:15:35.795842 containerd[1576]: 2025-09-13 00:15:35.745 [INFO][4095] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" HandleID="k8s-pod-network.29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" Workload="localhost-k8s-whisker--9794c5674--qhgq9-eth0" Sep 13 00:15:35.797172 containerd[1576]: 2025-09-13 00:15:35.750 [INFO][4081] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" Namespace="calico-system" Pod="whisker-9794c5674-qhgq9" WorkloadEndpoint="localhost-k8s-whisker--9794c5674--qhgq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--9794c5674--qhgq9-eth0", GenerateName:"whisker-9794c5674-", Namespace:"calico-system", SelfLink:"", UID:"26b7f96a-7479-46cf-ae66-c2eff9c9eb3a", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9794c5674", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-9794c5674-qhgq9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali64c2f8c7666", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:35.797172 containerd[1576]: 2025-09-13 00:15:35.750 [INFO][4081] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" Namespace="calico-system" Pod="whisker-9794c5674-qhgq9" WorkloadEndpoint="localhost-k8s-whisker--9794c5674--qhgq9-eth0" Sep 13 00:15:35.797172 containerd[1576]: 2025-09-13 00:15:35.751 [INFO][4081] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64c2f8c7666 ContainerID="29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" Namespace="calico-system" Pod="whisker-9794c5674-qhgq9" WorkloadEndpoint="localhost-k8s-whisker--9794c5674--qhgq9-eth0" Sep 13 00:15:35.797172 containerd[1576]: 2025-09-13 00:15:35.768 [INFO][4081] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" Namespace="calico-system" Pod="whisker-9794c5674-qhgq9" WorkloadEndpoint="localhost-k8s-whisker--9794c5674--qhgq9-eth0" Sep 13 00:15:35.797172 containerd[1576]: 2025-09-13 00:15:35.769 [INFO][4081] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--9794c5674--qhgq9-eth0", GenerateName:"whisker-9794c5674-", Namespace:"calico-system", SelfLink:"", UID:"26b7f96a-7479-46cf-ae66-c2eff9c9eb3a", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9794c5674", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b", Pod:"whisker-9794c5674-qhgq9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali64c2f8c7666", MAC:"de:e3:ab:b6:9c:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:35.797172 containerd[1576]: 2025-09-13 00:15:35.789 [INFO][4081] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b" Namespace="calico-system" Pod="whisker-9794c5674-qhgq9" WorkloadEndpoint="localhost-k8s-whisker--9794c5674--qhgq9-eth0" Sep 13 00:15:35.858889 containerd[1576]: time="2025-09-13T00:15:35.858680114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:15:35.859077 containerd[1576]: time="2025-09-13T00:15:35.858917538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:15:35.859077 containerd[1576]: time="2025-09-13T00:15:35.858963263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:35.859422 containerd[1576]: time="2025-09-13T00:15:35.859364162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:35.919094 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:15:35.984070 containerd[1576]: time="2025-09-13T00:15:35.984005005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9794c5674-qhgq9,Uid:26b7f96a-7479-46cf-ae66-c2eff9c9eb3a,Namespace:calico-system,Attempt:0,} returns sandbox id \"29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b\"" Sep 13 00:15:35.992092 containerd[1576]: time="2025-09-13T00:15:35.992039087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:15:36.105355 kernel: bpftool[4283]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:15:36.392730 systemd-networkd[1251]: vxlan.calico: Link UP Sep 13 00:15:36.392742 systemd-networkd[1251]: vxlan.calico: Gained carrier Sep 13 00:15:36.454543 containerd[1576]: time="2025-09-13T00:15:36.454492510Z" level=info msg="StopPodSandbox for \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\"" Sep 13 00:15:36.455446 containerd[1576]: time="2025-09-13T00:15:36.455415404Z" level=info msg="StopPodSandbox for \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\"" Sep 13 00:15:36.456080 containerd[1576]: time="2025-09-13T00:15:36.455424431Z" level=info msg="StopPodSandbox for \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\"" Sep 13 00:15:36.457358 kubelet[2715]: I0913 00:15:36.457284 2715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="256d1bd6-e1a5-4733-8ae9-387ed13a8847" path="/var/lib/kubelet/pods/256d1bd6-e1a5-4733-8ae9-387ed13a8847/volumes" Sep 13 00:15:36.458271 containerd[1576]: time="2025-09-13T00:15:36.455632701Z" level=info msg="StopPodSandbox for \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\"" Sep 13 00:15:36.637567 containerd[1576]: 2025-09-13 00:15:36.540 [INFO][4371] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Sep 13 00:15:36.637567 containerd[1576]: 2025-09-13 00:15:36.541 [INFO][4371] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" iface="eth0" netns="/var/run/netns/cni-17c9df31-d69e-e00c-5101-fdb96f739a17" Sep 13 00:15:36.637567 containerd[1576]: 2025-09-13 00:15:36.541 [INFO][4371] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" iface="eth0" netns="/var/run/netns/cni-17c9df31-d69e-e00c-5101-fdb96f739a17" Sep 13 00:15:36.637567 containerd[1576]: 2025-09-13 00:15:36.541 [INFO][4371] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" iface="eth0" netns="/var/run/netns/cni-17c9df31-d69e-e00c-5101-fdb96f739a17" Sep 13 00:15:36.637567 containerd[1576]: 2025-09-13 00:15:36.542 [INFO][4371] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Sep 13 00:15:36.637567 containerd[1576]: 2025-09-13 00:15:36.542 [INFO][4371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Sep 13 00:15:36.637567 containerd[1576]: 2025-09-13 00:15:36.608 [INFO][4392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" HandleID="k8s-pod-network.173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Workload="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:36.637567 containerd[1576]: 2025-09-13 00:15:36.609 [INFO][4392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:36.637567 containerd[1576]: 2025-09-13 00:15:36.609 [INFO][4392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:36.637567 containerd[1576]: 2025-09-13 00:15:36.617 [WARNING][4392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" HandleID="k8s-pod-network.173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Workload="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:36.637567 containerd[1576]: 2025-09-13 00:15:36.617 [INFO][4392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" HandleID="k8s-pod-network.173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Workload="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:36.637567 containerd[1576]: 2025-09-13 00:15:36.619 [INFO][4392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:36.637567 containerd[1576]: 2025-09-13 00:15:36.627 [INFO][4371] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Sep 13 00:15:36.639995 containerd[1576]: time="2025-09-13T00:15:36.638588894Z" level=info msg="TearDown network for sandbox \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\" successfully" Sep 13 00:15:36.639995 containerd[1576]: time="2025-09-13T00:15:36.638628909Z" level=info msg="StopPodSandbox for \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\" returns successfully" Sep 13 00:15:36.639995 containerd[1576]: time="2025-09-13T00:15:36.639540533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5556b5db-vwqmg,Uid:1571c160-5cc3-4360-89ff-652e3fc95a50,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:15:36.639995 containerd[1576]: 2025-09-13 00:15:36.554 [INFO][4352] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Sep 13 00:15:36.639995 containerd[1576]: 2025-09-13 00:15:36.555 [INFO][4352] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" iface="eth0" netns="/var/run/netns/cni-c89e59d1-76c6-d0bb-9bae-0ede4b3d7ca7" Sep 13 00:15:36.639995 containerd[1576]: 2025-09-13 00:15:36.555 [INFO][4352] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" iface="eth0" netns="/var/run/netns/cni-c89e59d1-76c6-d0bb-9bae-0ede4b3d7ca7" Sep 13 00:15:36.639995 containerd[1576]: 2025-09-13 00:15:36.556 [INFO][4352] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" iface="eth0" netns="/var/run/netns/cni-c89e59d1-76c6-d0bb-9bae-0ede4b3d7ca7" Sep 13 00:15:36.639995 containerd[1576]: 2025-09-13 00:15:36.556 [INFO][4352] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Sep 13 00:15:36.639995 containerd[1576]: 2025-09-13 00:15:36.556 [INFO][4352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Sep 13 00:15:36.639995 containerd[1576]: 2025-09-13 00:15:36.615 [INFO][4400] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" HandleID="k8s-pod-network.c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Workload="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:36.639995 containerd[1576]: 2025-09-13 00:15:36.615 [INFO][4400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:36.639995 containerd[1576]: 2025-09-13 00:15:36.619 [INFO][4400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:36.639995 containerd[1576]: 2025-09-13 00:15:36.629 [WARNING][4400] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" HandleID="k8s-pod-network.c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Workload="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:36.639995 containerd[1576]: 2025-09-13 00:15:36.629 [INFO][4400] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" HandleID="k8s-pod-network.c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Workload="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:36.639995 containerd[1576]: 2025-09-13 00:15:36.630 [INFO][4400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:36.639995 containerd[1576]: 2025-09-13 00:15:36.636 [INFO][4352] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Sep 13 00:15:36.640645 containerd[1576]: time="2025-09-13T00:15:36.640177784Z" level=info msg="TearDown network for sandbox \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\" successfully" Sep 13 00:15:36.640645 containerd[1576]: time="2025-09-13T00:15:36.640205566Z" level=info msg="StopPodSandbox for \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\" returns successfully" Sep 13 00:15:36.641137 kubelet[2715]: E0913 00:15:36.641105 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:15:36.641746 containerd[1576]: time="2025-09-13T00:15:36.641505565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rbsdl,Uid:10029b49-ba84-4580-85a1-aa34fcb74230,Namespace:kube-system,Attempt:1,}" Sep 13 00:15:36.642058 systemd[1]: run-netns-cni\x2d17c9df31\x2dd69e\x2de00c\x2d5101\x2dfdb96f739a17.mount: Deactivated successfully. Sep 13 00:15:36.648727 systemd[1]: run-netns-cni\x2dc89e59d1\x2d76c6\x2dd0bb\x2d9bae\x2d0ede4b3d7ca7.mount: Deactivated successfully. Sep 13 00:15:36.684075 containerd[1576]: 2025-09-13 00:15:36.573 [INFO][4360] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Sep 13 00:15:36.684075 containerd[1576]: 2025-09-13 00:15:36.573 [INFO][4360] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" iface="eth0" netns="/var/run/netns/cni-851b7f7b-2ad3-b9e1-c7bf-d2282e95f7ca" Sep 13 00:15:36.684075 containerd[1576]: 2025-09-13 00:15:36.573 [INFO][4360] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" iface="eth0" netns="/var/run/netns/cni-851b7f7b-2ad3-b9e1-c7bf-d2282e95f7ca" Sep 13 00:15:36.684075 containerd[1576]: 2025-09-13 00:15:36.574 [INFO][4360] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" iface="eth0" netns="/var/run/netns/cni-851b7f7b-2ad3-b9e1-c7bf-d2282e95f7ca" Sep 13 00:15:36.684075 containerd[1576]: 2025-09-13 00:15:36.574 [INFO][4360] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Sep 13 00:15:36.684075 containerd[1576]: 2025-09-13 00:15:36.574 [INFO][4360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Sep 13 00:15:36.684075 containerd[1576]: 2025-09-13 00:15:36.621 [INFO][4411] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" HandleID="k8s-pod-network.581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Workload="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:36.684075 containerd[1576]: 2025-09-13 00:15:36.621 [INFO][4411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:36.684075 containerd[1576]: 2025-09-13 00:15:36.630 [INFO][4411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:15:36.684075 containerd[1576]: 2025-09-13 00:15:36.643 [WARNING][4411] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" HandleID="k8s-pod-network.581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Workload="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:36.684075 containerd[1576]: 2025-09-13 00:15:36.644 [INFO][4411] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" HandleID="k8s-pod-network.581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Workload="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:36.684075 containerd[1576]: 2025-09-13 00:15:36.675 [INFO][4411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:36.684075 containerd[1576]: 2025-09-13 00:15:36.679 [INFO][4360] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Sep 13 00:15:36.687867 containerd[1576]: time="2025-09-13T00:15:36.686474209Z" level=info msg="TearDown network for sandbox \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\" successfully" Sep 13 00:15:36.687867 containerd[1576]: time="2025-09-13T00:15:36.686501590Z" level=info msg="StopPodSandbox for \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\" returns successfully" Sep 13 00:15:36.687867 containerd[1576]: time="2025-09-13T00:15:36.687713104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zxh96,Uid:f7af872b-9925-42a7-9b22-c2f24c9b6443,Namespace:calico-system,Attempt:1,}" Sep 13 00:15:36.694297 systemd[1]: run-netns-cni\x2d851b7f7b\x2d2ad3\x2db9e1\x2dc7bf\x2dd2282e95f7ca.mount: Deactivated successfully. Sep 13 00:15:36.702162 containerd[1576]: 2025-09-13 00:15:36.588 [INFO][4372] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Sep 13 00:15:36.702162 containerd[1576]: 2025-09-13 00:15:36.592 [INFO][4372] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" iface="eth0" netns="/var/run/netns/cni-eb6d9363-3762-f05f-3195-0975ab41754b" Sep 13 00:15:36.702162 containerd[1576]: 2025-09-13 00:15:36.593 [INFO][4372] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" iface="eth0" netns="/var/run/netns/cni-eb6d9363-3762-f05f-3195-0975ab41754b" Sep 13 00:15:36.702162 containerd[1576]: 2025-09-13 00:15:36.593 [INFO][4372] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" iface="eth0" netns="/var/run/netns/cni-eb6d9363-3762-f05f-3195-0975ab41754b" Sep 13 00:15:36.702162 containerd[1576]: 2025-09-13 00:15:36.593 [INFO][4372] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Sep 13 00:15:36.702162 containerd[1576]: 2025-09-13 00:15:36.593 [INFO][4372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Sep 13 00:15:36.702162 containerd[1576]: 2025-09-13 00:15:36.656 [INFO][4417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" HandleID="k8s-pod-network.4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Workload="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:36.702162 containerd[1576]: 2025-09-13 00:15:36.656 [INFO][4417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:36.702162 containerd[1576]: 2025-09-13 00:15:36.675 [INFO][4417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:36.702162 containerd[1576]: 2025-09-13 00:15:36.686 [WARNING][4417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" HandleID="k8s-pod-network.4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Workload="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:36.702162 containerd[1576]: 2025-09-13 00:15:36.686 [INFO][4417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" HandleID="k8s-pod-network.4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Workload="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:36.702162 containerd[1576]: 2025-09-13 00:15:36.690 [INFO][4417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:36.702162 containerd[1576]: 2025-09-13 00:15:36.697 [INFO][4372] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Sep 13 00:15:36.705498 containerd[1576]: time="2025-09-13T00:15:36.705440516Z" level=info msg="TearDown network for sandbox \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\" successfully" Sep 13 00:15:36.705498 containerd[1576]: time="2025-09-13T00:15:36.705493395Z" level=info msg="StopPodSandbox for \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\" returns successfully" Sep 13 00:15:36.706523 kubelet[2715]: E0913 00:15:36.706249 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:15:36.706477 systemd[1]: run-netns-cni\x2deb6d9363\x2d3762\x2df05f\x2d3195\x2d0975ab41754b.mount: Deactivated successfully. 
Sep 13 00:15:36.708163 containerd[1576]: time="2025-09-13T00:15:36.708126846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jvqvw,Uid:feb52a36-15cd-4f86-a703-48f9b874433f,Namespace:kube-system,Attempt:1,}" Sep 13 00:15:36.946657 systemd-networkd[1251]: cali125b13fbcff: Link UP Sep 13 00:15:36.948083 systemd-networkd[1251]: cali125b13fbcff: Gained carrier Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.834 [INFO][4445] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0 calico-apiserver-d5556b5db- calico-apiserver 1571c160-5cc3-4360-89ff-652e3fc95a50 953 0 2025-09-13 00:15:08 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d5556b5db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d5556b5db-vwqmg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali125b13fbcff [] [] }} ContainerID="14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-vwqmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-" Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.837 [INFO][4445] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-vwqmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.888 [INFO][4516] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" HandleID="k8s-pod-network.14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" Workload="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.888 [INFO][4516] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" HandleID="k8s-pod-network.14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" Workload="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ee00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d5556b5db-vwqmg", "timestamp":"2025-09-13 00:15:36.886404446 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.888 [INFO][4516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.888 [INFO][4516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.888 [INFO][4516] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.896 [INFO][4516] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" host="localhost" Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.902 [INFO][4516] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.910 [INFO][4516] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.912 [INFO][4516] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.916 [INFO][4516] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.916 [INFO][4516] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" host="localhost" Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.920 [INFO][4516] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12 Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.926 [INFO][4516] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" host="localhost" Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.937 [INFO][4516] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" host="localhost" Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.937 [INFO][4516] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" host="localhost" Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.937 [INFO][4516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
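Every affinity lookup above resolves to the block 192.168.88.128/26. A /26 spans 64 addresses (.128 through .191), so all of the per-pod /32s handed out in this section (.130 to .134) fall inside the node's one affine block. A quick containment check with only the standard library:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	for _, s := range []string{"192.168.88.130", "192.168.88.134", "192.168.88.192"} {
		fmt.Println(s, "in block:", block.Contains(netip.MustParseAddr(s)))
	}
	// Output:
	// 192.168.88.130 in block: true
	// 192.168.88.134 in block: true
	// 192.168.88.192 in block: false  (first address of the next /26)
}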
Sep 13 00:15:36.970142 containerd[1576]: 2025-09-13 00:15:36.937 [INFO][4516] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" HandleID="k8s-pod-network.14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" Workload="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:36.971094 containerd[1576]: 2025-09-13 00:15:36.941 [INFO][4445] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-vwqmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0", GenerateName:"calico-apiserver-d5556b5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"1571c160-5cc3-4360-89ff-652e3fc95a50", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5556b5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d5556b5db-vwqmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali125b13fbcff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:36.971094 containerd[1576]: 2025-09-13 00:15:36.942 [INFO][4445] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-vwqmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:36.971094 containerd[1576]: 2025-09-13 00:15:36.942 [INFO][4445] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali125b13fbcff ContainerID="14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-vwqmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:36.971094 containerd[1576]: 2025-09-13 00:15:36.947 [INFO][4445] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-vwqmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:36.971094 containerd[1576]: 2025-09-13 00:15:36.948 [INFO][4445] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-vwqmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0", GenerateName:"calico-apiserver-d5556b5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"1571c160-5cc3-4360-89ff-652e3fc95a50", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5556b5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12", Pod:"calico-apiserver-d5556b5db-vwqmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali125b13fbcff", MAC:"76:a0:de:37:69:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:36.971094 containerd[1576]: 2025-09-13 00:15:36.963 [INFO][4445] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-vwqmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:37.000427 containerd[1576]: time="2025-09-13T00:15:37.000270725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:15:37.000849 containerd[1576]: time="2025-09-13T00:15:37.000705618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:15:37.000849 containerd[1576]: time="2025-09-13T00:15:37.000785738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:37.001290 containerd[1576]: time="2025-09-13T00:15:37.001177000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:37.049242 systemd-networkd[1251]: calia84207255cf: Link UP Sep 13 00:15:37.050506 systemd-networkd[1251]: calia84207255cf: Gained carrier Sep 13 00:15:37.052282 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:36.845 [INFO][4460] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0 coredns-7c65d6cfc9- kube-system 10029b49-ba84-4580-85a1-aa34fcb74230 954 0 2025-09-13 00:14:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-rbsdl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia84207255cf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rbsdl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rbsdl-" Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:36.846 [INFO][4460] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rbsdl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:36.903 [INFO][4528] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" HandleID="k8s-pod-network.d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" Workload="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:36.903 [INFO][4528] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" HandleID="k8s-pod-network.d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" Workload="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367850), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-rbsdl", "timestamp":"2025-09-13 00:15:36.90299879 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:36.903 [INFO][4528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:36.937 [INFO][4528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
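Note the timestamps at the end of the record above: request [4528] logged "About to acquire host-wide IPAM lock" at 00:15:36.903 but "Acquired" only at .937, the instant [4516] released it. Concurrent CNI ADDs on one node serialize their allocations from the shared /26. A minimal sketch of that serialization, with an in-process mutex standing in for Calico's datastore-backed lock (hostIPAM and its fields are illustrative, not Calico types):

package main

import (
	"fmt"
	"sync"
)

type hostIPAM struct {
	mu   sync.Mutex // one allocation in flight per host at a time
	next int        // next free ordinal in the /26 block (64 slots)
}

func (h *hostIPAM) assign() int {
	h.mu.Lock()
	defer h.mu.Unlock()
	ord := h.next
	h.next++
	return ord
}

func main() {
	h := &hostIPAM{next: 2} // ordinals 0 and 1 already taken in this block
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() { // three concurrent CNI ADDs, as in the log
			defer wg.Done()
			fmt.Printf("assigned 192.168.88.%d/26\n", 128+h.assign())
		}()
	}
	wg.Wait()
}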
Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:36.937 [INFO][4528] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:36.998 [INFO][4528] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" host="localhost" Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:37.004 [INFO][4528] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:37.011 [INFO][4528] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:37.013 [INFO][4528] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:37.016 [INFO][4528] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:37.016 [INFO][4528] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" host="localhost" Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:37.020 [INFO][4528] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58 Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:37.030 [INFO][4528] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" host="localhost" Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:37.038 [INFO][4528] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" host="localhost" Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:37.038 [INFO][4528] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" host="localhost" Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:37.038 [INFO][4528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
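The claims come out sequentially (.130, then .131 above, then .132 and .133 below) because, under the host-wide lock, the allocator takes the lowest free ordinal in the affine block and maps it to an address. A sketch of that ordinal-to-address arithmetic; nthAddr is a hypothetical helper:

package main

import (
	"fmt"
	"net/netip"
)

// nthAddr returns the nth address inside prefix (n=0 is the network address).
func nthAddr(p netip.Prefix, n int) netip.Addr {
	a := p.Addr()
	for i := 0; i < n; i++ {
		a = a.Next()
	}
	return a
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	for _, ord := range []int{2, 3, 4} {
		fmt.Println(nthAddr(block, ord)) // 192.168.88.130, .131, .132
	}
}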
Sep 13 00:15:37.074456 containerd[1576]: 2025-09-13 00:15:37.038 [INFO][4528] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" HandleID="k8s-pod-network.d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" Workload="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:37.075333 containerd[1576]: 2025-09-13 00:15:37.043 [INFO][4460] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rbsdl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"10029b49-ba84-4580-85a1-aa34fcb74230", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-rbsdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia84207255cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:37.075333 containerd[1576]: 2025-09-13 00:15:37.043 [INFO][4460] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rbsdl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:37.075333 containerd[1576]: 2025-09-13 00:15:37.043 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia84207255cf ContainerID="d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rbsdl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:37.075333 containerd[1576]: 2025-09-13 00:15:37.051 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rbsdl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:37.075333 
containerd[1576]: 2025-09-13 00:15:37.052 [INFO][4460] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rbsdl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"10029b49-ba84-4580-85a1-aa34fcb74230", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58", Pod:"coredns-7c65d6cfc9-rbsdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia84207255cf", MAC:"6e:97:82:eb:49:27", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:37.075333 containerd[1576]: 2025-09-13 00:15:37.069 [INFO][4460] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rbsdl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:37.106554 containerd[1576]: time="2025-09-13T00:15:37.106389206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5556b5db-vwqmg,Uid:1571c160-5cc3-4360-89ff-652e3fc95a50,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12\"" Sep 13 00:15:37.122642 containerd[1576]: time="2025-09-13T00:15:37.122511234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:15:37.122642 containerd[1576]: time="2025-09-13T00:15:37.122600091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:15:37.122642 containerd[1576]: time="2025-09-13T00:15:37.122616841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:37.122970 containerd[1576]: time="2025-09-13T00:15:37.122759709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:37.156881 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:15:37.163454 systemd-networkd[1251]: caliba1177c4dce: Link UP Sep 13 00:15:37.166627 systemd-networkd[1251]: caliba1177c4dce: Gained carrier Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:36.869 [INFO][4476] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--zxh96-eth0 goldmane-7988f88666- calico-system f7af872b-9925-42a7-9b22-c2f24c9b6443 955 0 2025-09-13 00:15:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-zxh96 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliba1177c4dce [] [] }} ContainerID="782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" Namespace="calico-system" Pod="goldmane-7988f88666-zxh96" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--zxh96-" Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:36.871 [INFO][4476] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" Namespace="calico-system" Pod="goldmane-7988f88666-zxh96" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:36.919 [INFO][4537] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" HandleID="k8s-pod-network.782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" Workload="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:36.919 [INFO][4537] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" HandleID="k8s-pod-network.782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" Workload="localhost-k8s-goldmane--7988f88666--zxh96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-zxh96", "timestamp":"2025-09-13 00:15:36.919200771 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:36.919 [INFO][4537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:37.038 [INFO][4537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
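Each "Link UP ... Gained carrier" pair above (cali125b13fbcff, calia84207255cf, caliba1177c4dce) is systemd-networkd reacting to rtnetlink events for the freshly created host-side veth. A sketch of watching that same event stream, assuming the vishvananda/netlink package; this is a long-running watcher, so it prints updates until interrupted:

package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

func main() {
	updates := make(chan netlink.LinkUpdate)
	done := make(chan struct{})
	if err := netlink.LinkSubscribe(updates, done); err != nil {
		panic(err)
	}
	defer close(done)

	for u := range updates {
		attrs := u.Link.Attrs()
		// e.g. "caliba1177c4dce: up|broadcast|multicast" once carrier is gained
		fmt.Printf("%s: %v\n", attrs.Name, attrs.Flags)
	}
}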
Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:37.039 [INFO][4537] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:37.105 [INFO][4537] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" host="localhost" Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:37.114 [INFO][4537] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:37.122 [INFO][4537] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:37.125 [INFO][4537] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:37.128 [INFO][4537] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:37.128 [INFO][4537] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" host="localhost" Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:37.130 [INFO][4537] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:37.137 [INFO][4537] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" host="localhost" Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:37.145 [INFO][4537] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" host="localhost" Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:37.146 [INFO][4537] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" host="localhost" Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:37.146 [INFO][4537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
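The CoreDNS endpoint dumps above render WorkloadEndpointPort values in Go hex: Port:0x35 is 53 (the dns and dns-tcp ports) and 0x23c1 is 9153, CoreDNS's Prometheus metrics port. A two-line sanity check of the conversions:

package main

import "fmt"

func main() {
	// The %#x column mirrors how the endpoint dumps above render the ports.
	for _, p := range []struct {
		name string
		port uint16
	}{{"dns", 0x35}, {"dns-tcp", 0x35}, {"metrics", 0x23c1}} {
		fmt.Printf("%-8s %#6x = %d\n", p.name, p.port, p.port)
	}
}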
Sep 13 00:15:37.192344 containerd[1576]: 2025-09-13 00:15:37.146 [INFO][4537] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" HandleID="k8s-pod-network.782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" Workload="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:37.193653 containerd[1576]: 2025-09-13 00:15:37.153 [INFO][4476] cni-plugin/k8s.go 418: Populated endpoint ContainerID="782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" Namespace="calico-system" Pod="goldmane-7988f88666-zxh96" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--zxh96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--zxh96-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f7af872b-9925-42a7-9b22-c2f24c9b6443", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-zxh96", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba1177c4dce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:37.193653 containerd[1576]: 2025-09-13 00:15:37.154 [INFO][4476] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" Namespace="calico-system" Pod="goldmane-7988f88666-zxh96" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:37.193653 containerd[1576]: 2025-09-13 00:15:37.154 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba1177c4dce ContainerID="782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" Namespace="calico-system" Pod="goldmane-7988f88666-zxh96" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:37.193653 containerd[1576]: 2025-09-13 00:15:37.169 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" Namespace="calico-system" Pod="goldmane-7988f88666-zxh96" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:37.193653 containerd[1576]: 2025-09-13 00:15:37.170 [INFO][4476] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" Namespace="calico-system" Pod="goldmane-7988f88666-zxh96" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--zxh96-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--zxh96-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f7af872b-9925-42a7-9b22-c2f24c9b6443", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c", Pod:"goldmane-7988f88666-zxh96", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba1177c4dce", MAC:"06:6d:44:62:00:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:37.193653 containerd[1576]: 2025-09-13 00:15:37.184 [INFO][4476] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c" Namespace="calico-system" Pod="goldmane-7988f88666-zxh96" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:37.197112 containerd[1576]: time="2025-09-13T00:15:37.196972237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rbsdl,Uid:10029b49-ba84-4580-85a1-aa34fcb74230,Namespace:kube-system,Attempt:1,} returns sandbox id \"d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58\"" Sep 13 00:15:37.198471 kubelet[2715]: E0913 00:15:37.198393 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:15:37.201564 containerd[1576]: time="2025-09-13T00:15:37.201431453Z" level=info msg="CreateContainer within sandbox \"d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:15:37.238582 containerd[1576]: time="2025-09-13T00:15:37.233524746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:15:37.238582 containerd[1576]: time="2025-09-13T00:15:37.233610847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:15:37.238582 containerd[1576]: time="2025-09-13T00:15:37.233626576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:37.238582 containerd[1576]: time="2025-09-13T00:15:37.233771547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:37.247080 containerd[1576]: time="2025-09-13T00:15:37.246998763Z" level=info msg="CreateContainer within sandbox \"d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45050ed012613ee63cbafed77d69a2071d87ed2c746ec28e4011cf823186e637\"" Sep 13 00:15:37.249176 containerd[1576]: time="2025-09-13T00:15:37.249084693Z" level=info msg="StartContainer for \"45050ed012613ee63cbafed77d69a2071d87ed2c746ec28e4011cf823186e637\"" Sep 13 00:15:37.265664 systemd-networkd[1251]: calia6b2801c739: Link UP Sep 13 00:15:37.266608 systemd-networkd[1251]: calia6b2801c739: Gained carrier Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:36.870 [INFO][4497] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0 coredns-7c65d6cfc9- kube-system feb52a36-15cd-4f86-a703-48f9b874433f 956 0 2025-09-13 00:14:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-jvqvw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia6b2801c739 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jvqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jvqvw-" Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:36.870 [INFO][4497] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jvqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:36.932 [INFO][4538] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" HandleID="k8s-pod-network.bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" Workload="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:36.932 [INFO][4538] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" HandleID="k8s-pod-network.bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" Workload="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00013fc20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-jvqvw", "timestamp":"2025-09-13 00:15:36.932273071 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:36.932 [INFO][4538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:37.146 [INFO][4538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
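The "CreateContainer within sandbox ... returns container id" / "StartContainer ... returns successfully" pair above is the CRI flow the kubelet drives against containerd. A sketch of the same two calls made directly against the CRI socket, assuming the k8s.io/cri-api client types; the image reference is a placeholder and the real kubelet also supplies the full container and sandbox configs:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx := context.Background()
	resp, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		// Sandbox id from the RunPodSandbox record above.
		PodSandboxId: "d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58",
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "coredns", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "coredns/coredns"}, // placeholder ref
		},
	})
	if err != nil {
		panic(err)
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: resp.ContainerId})
	fmt.Println("started:", resp.ContainerId, err)
}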
Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:37.146 [INFO][4538] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:37.200 [INFO][4538] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" host="localhost" Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:37.215 [INFO][4538] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:37.223 [INFO][4538] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:37.227 [INFO][4538] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:37.231 [INFO][4538] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:37.231 [INFO][4538] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" host="localhost" Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:37.235 [INFO][4538] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889 Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:37.243 [INFO][4538] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" host="localhost" Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:37.254 [INFO][4538] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" host="localhost" Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:37.254 [INFO][4538] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" host="localhost" Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:37.254 [INFO][4538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:15:37.287778 containerd[1576]: 2025-09-13 00:15:37.254 [INFO][4538] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" HandleID="k8s-pod-network.bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" Workload="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:37.288528 containerd[1576]: 2025-09-13 00:15:37.259 [INFO][4497] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jvqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"feb52a36-15cd-4f86-a703-48f9b874433f", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-jvqvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6b2801c739", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:37.288528 containerd[1576]: 2025-09-13 00:15:37.260 [INFO][4497] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jvqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:37.288528 containerd[1576]: 2025-09-13 00:15:37.260 [INFO][4497] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6b2801c739 ContainerID="bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jvqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:37.288528 containerd[1576]: 2025-09-13 00:15:37.267 [INFO][4497] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jvqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:37.288528 
containerd[1576]: 2025-09-13 00:15:37.268 [INFO][4497] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jvqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"feb52a36-15cd-4f86-a703-48f9b874433f", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889", Pod:"coredns-7c65d6cfc9-jvqvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6b2801c739", MAC:"4e:a9:b6:50:e4:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:37.288528 containerd[1576]: 2025-09-13 00:15:37.282 [INFO][4497] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jvqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:37.298606 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:15:37.348364 systemd-networkd[1251]: cali64c2f8c7666: Gained IPv6LL Sep 13 00:15:37.354414 containerd[1576]: time="2025-09-13T00:15:37.354367903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zxh96,Uid:f7af872b-9925-42a7-9b22-c2f24c9b6443,Namespace:calico-system,Attempt:1,} returns sandbox id \"782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c\"" Sep 13 00:15:37.366489 containerd[1576]: time="2025-09-13T00:15:37.365712791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:15:37.366489 containerd[1576]: time="2025-09-13T00:15:37.365904147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:15:37.366489 containerd[1576]: time="2025-09-13T00:15:37.365941608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:37.366489 containerd[1576]: time="2025-09-13T00:15:37.366061272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:37.384656 containerd[1576]: time="2025-09-13T00:15:37.384582354Z" level=info msg="StartContainer for \"45050ed012613ee63cbafed77d69a2071d87ed2c746ec28e4011cf823186e637\" returns successfully" Sep 13 00:15:37.401643 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:15:37.438688 containerd[1576]: time="2025-09-13T00:15:37.438619307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jvqvw,Uid:feb52a36-15cd-4f86-a703-48f9b874433f,Namespace:kube-system,Attempt:1,} returns sandbox id \"bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889\"" Sep 13 00:15:37.439809 kubelet[2715]: E0913 00:15:37.439763 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:15:37.442541 containerd[1576]: time="2025-09-13T00:15:37.442456430Z" level=info msg="CreateContainer within sandbox \"bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:15:37.454666 containerd[1576]: time="2025-09-13T00:15:37.454515693Z" level=info msg="StopPodSandbox for \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\"" Sep 13 00:15:37.492902 containerd[1576]: time="2025-09-13T00:15:37.492648085Z" level=info msg="CreateContainer within sandbox \"bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"102ad454ad0ed029c654758c63c43806b41ec6bad75bab7dc13d2d3129fb9e4a\"" Sep 13 00:15:37.496959 containerd[1576]: time="2025-09-13T00:15:37.496826777Z" level=info msg="StartContainer for \"102ad454ad0ed029c654758c63c43806b41ec6bad75bab7dc13d2d3129fb9e4a\"" Sep 13 00:15:37.598690 containerd[1576]: 2025-09-13 00:15:37.537 [INFO][4802] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Sep 13 00:15:37.598690 containerd[1576]: 2025-09-13 00:15:37.537 [INFO][4802] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" iface="eth0" netns="/var/run/netns/cni-e9f0b942-2470-8197-c915-8ace9568eac4" Sep 13 00:15:37.598690 containerd[1576]: 2025-09-13 00:15:37.537 [INFO][4802] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" iface="eth0" netns="/var/run/netns/cni-e9f0b942-2470-8197-c915-8ace9568eac4" Sep 13 00:15:37.598690 containerd[1576]: 2025-09-13 00:15:37.537 [INFO][4802] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" iface="eth0" netns="/var/run/netns/cni-e9f0b942-2470-8197-c915-8ace9568eac4" Sep 13 00:15:37.598690 containerd[1576]: 2025-09-13 00:15:37.537 [INFO][4802] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Sep 13 00:15:37.598690 containerd[1576]: 2025-09-13 00:15:37.537 [INFO][4802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Sep 13 00:15:37.598690 containerd[1576]: 2025-09-13 00:15:37.573 [INFO][4828] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" HandleID="k8s-pod-network.5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Workload="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:37.598690 containerd[1576]: 2025-09-13 00:15:37.573 [INFO][4828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:37.598690 containerd[1576]: 2025-09-13 00:15:37.573 [INFO][4828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:37.598690 containerd[1576]: 2025-09-13 00:15:37.583 [WARNING][4828] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" HandleID="k8s-pod-network.5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Workload="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:37.598690 containerd[1576]: 2025-09-13 00:15:37.583 [INFO][4828] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" HandleID="k8s-pod-network.5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Workload="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:37.598690 containerd[1576]: 2025-09-13 00:15:37.588 [INFO][4828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:37.598690 containerd[1576]: 2025-09-13 00:15:37.593 [INFO][4802] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Sep 13 00:15:37.600759 containerd[1576]: time="2025-09-13T00:15:37.599288993Z" level=info msg="TearDown network for sandbox \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\" successfully" Sep 13 00:15:37.600759 containerd[1576]: time="2025-09-13T00:15:37.599353734Z" level=info msg="StopPodSandbox for \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\" returns successfully" Sep 13 00:15:37.600759 containerd[1576]: time="2025-09-13T00:15:37.600152036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5556b5db-ljn92,Uid:e0877890-9528-461c-9cba-50fe493297ef,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:15:37.611576 containerd[1576]: time="2025-09-13T00:15:37.611490983Z" level=info msg="StartContainer for \"102ad454ad0ed029c654758c63c43806b41ec6bad75bab7dc13d2d3129fb9e4a\" returns successfully" Sep 13 00:15:37.659805 systemd[1]: run-netns-cni\x2de9f0b942\x2d2470\x2d8197\x2dc915\x2d8ace9568eac4.mount: Deactivated successfully. 
Sep 13 00:15:37.705398 kubelet[2715]: E0913 00:15:37.704675 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:15:37.715049 kubelet[2715]: E0913 00:15:37.715012 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:15:37.795895 kubelet[2715]: I0913 00:15:37.794999 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rbsdl" podStartSLOduration=39.794974836 podStartE2EDuration="39.794974836s" podCreationTimestamp="2025-09-13 00:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:15:37.754837145 +0000 UTC m=+45.417765662" watchObservedRunningTime="2025-09-13 00:15:37.794974836 +0000 UTC m=+45.457903353" Sep 13 00:15:37.888629 systemd-networkd[1251]: cali6ee9aa01d69: Link UP Sep 13 00:15:37.889254 systemd-networkd[1251]: cali6ee9aa01d69: Gained carrier Sep 13 00:15:37.902266 kubelet[2715]: I0913 00:15:37.901386 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jvqvw" podStartSLOduration=39.901357431 podStartE2EDuration="39.901357431s" podCreationTimestamp="2025-09-13 00:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:15:37.796459171 +0000 UTC m=+45.459387708" watchObservedRunningTime="2025-09-13 00:15:37.901357431 +0000 UTC m=+45.564285949" Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.761 [INFO][4857] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0 calico-apiserver-d5556b5db- calico-apiserver e0877890-9528-461c-9cba-50fe493297ef 986 0 2025-09-13 00:15:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d5556b5db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d5556b5db-ljn92 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6ee9aa01d69 [] [] }} ContainerID="47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-ljn92" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--ljn92-" Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.763 [INFO][4857] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-ljn92" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.827 [INFO][4881] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" HandleID="k8s-pod-network.47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" Workload="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.827 
[INFO][4881] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" HandleID="k8s-pod-network.47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" Workload="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ee50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d5556b5db-ljn92", "timestamp":"2025-09-13 00:15:37.827398167 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.827 [INFO][4881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.827 [INFO][4881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.827 [INFO][4881] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.836 [INFO][4881] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" host="localhost" Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.846 [INFO][4881] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.854 [INFO][4881] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.857 [INFO][4881] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.860 [INFO][4881] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.860 [INFO][4881] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" host="localhost" Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.863 [INFO][4881] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6 Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.870 [INFO][4881] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" host="localhost" Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.879 [INFO][4881] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" host="localhost" Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.879 [INFO][4881] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" host="localhost" Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.879 [INFO][4881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
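Records [4828]/[4881] above show Calico's IPAM assignment path end to end: take the host-wide lock, confirm this host's affinity to block 192.168.88.128/26, load the block, claim the first free address (which lands on 192.168.88.134), and make the claim durable by writing the block back (the ipam.go 1243 record). A sketch of the ordinal scan at the heart of the "Attempting to assign 1 addresses from block" step, assuming a plain map as the allocation bitmap rather than Calico's real block structure; the pre-allocated ordinals are illustrative, chosen so the result matches the logged address:

package main

import (
	"fmt"
	"net"
)

// nextFree scans a /26 block (64 addresses) for the first unallocated
// ordinal. allocated stands in for the real block's allocation bitmap.
func nextFree(base net.IP, allocated map[int]bool) (net.IP, bool) {
	for ord := 0; ord < 64; ord++ {
		if allocated[ord] {
			continue
		}
		ip := make(net.IP, len(base))
		copy(ip, base)
		ip[len(ip)-1] += byte(ord) // safe within a /26: ordinals 0..63
		return ip, true
	}
	return nil, false // block exhausted; real IPAM would try another block
}

func main() {
	base := net.ParseIP("192.168.88.128").To4()
	// Assume .128-.133 are held by the endpoints set up earlier in the log.
	allocated := map[int]bool{0: true, 1: true, 2: true, 3: true, 4: true, 5: true}
	if ip, ok := nextFree(base, allocated); ok {
		fmt.Println(ip) // 192.168.88.134, matching the claimed address above
	}
}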
Sep 13 00:15:37.905431 containerd[1576]: 2025-09-13 00:15:37.879 [INFO][4881] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" HandleID="k8s-pod-network.47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" Workload="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:37.906049 containerd[1576]: 2025-09-13 00:15:37.883 [INFO][4857] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-ljn92" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0", GenerateName:"calico-apiserver-d5556b5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0877890-9528-461c-9cba-50fe493297ef", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5556b5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d5556b5db-ljn92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ee9aa01d69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:37.906049 containerd[1576]: 2025-09-13 00:15:37.883 [INFO][4857] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-ljn92" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:37.906049 containerd[1576]: 2025-09-13 00:15:37.883 [INFO][4857] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ee9aa01d69 ContainerID="47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-ljn92" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:37.906049 containerd[1576]: 2025-09-13 00:15:37.888 [INFO][4857] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-ljn92" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:37.906049 containerd[1576]: 2025-09-13 00:15:37.889 [INFO][4857] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-ljn92" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0", GenerateName:"calico-apiserver-d5556b5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0877890-9528-461c-9cba-50fe493297ef", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5556b5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6", Pod:"calico-apiserver-d5556b5db-ljn92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ee9aa01d69", MAC:"e2:a8:c1:56:82:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:37.906049 containerd[1576]: 2025-09-13 00:15:37.900 [INFO][4857] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6" Namespace="calico-apiserver" Pod="calico-apiserver-d5556b5db-ljn92" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:37.938438 containerd[1576]: time="2025-09-13T00:15:37.938250156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:15:37.938438 containerd[1576]: time="2025-09-13T00:15:37.938404775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:15:37.938668 containerd[1576]: time="2025-09-13T00:15:37.938473714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:37.938668 containerd[1576]: time="2025-09-13T00:15:37.938631188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:37.975562 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:15:38.016656 containerd[1576]: time="2025-09-13T00:15:38.016591890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5556b5db-ljn92,Uid:e0877890-9528-461c-9cba-50fe493297ef,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6\"" Sep 13 00:15:38.072617 containerd[1576]: time="2025-09-13T00:15:38.072528245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:38.073605 containerd[1576]: time="2025-09-13T00:15:38.073509108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:15:38.075420 containerd[1576]: time="2025-09-13T00:15:38.075367213Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:38.080069 containerd[1576]: time="2025-09-13T00:15:38.079985448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:38.082678 containerd[1576]: time="2025-09-13T00:15:38.082631064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.090544028s" Sep 13 00:15:38.082775 containerd[1576]: time="2025-09-13T00:15:38.082680626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:15:38.083762 containerd[1576]: time="2025-09-13T00:15:38.083721413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:15:38.086311 containerd[1576]: time="2025-09-13T00:15:38.086260890Z" level=info msg="CreateContainer within sandbox \"29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:15:38.105188 containerd[1576]: time="2025-09-13T00:15:38.105134209Z" level=info msg="CreateContainer within sandbox \"29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7f3ff4c6fd21c162dc548873cfac69167a05f75e6638372520b85acdfa8bf659\"" Sep 13 00:15:38.106106 containerd[1576]: time="2025-09-13T00:15:38.106057364Z" level=info msg="StartContainer for \"7f3ff4c6fd21c162dc548873cfac69167a05f75e6638372520b85acdfa8bf659\"" Sep 13 00:15:38.179683 systemd-networkd[1251]: vxlan.calico: Gained IPv6LL Sep 13 00:15:38.201456 containerd[1576]: time="2025-09-13T00:15:38.201407031Z" level=info msg="StartContainer for \"7f3ff4c6fd21c162dc548873cfac69167a05f75e6638372520b85acdfa8bf659\" returns successfully" Sep 13 00:15:38.306602 systemd-networkd[1251]: cali125b13fbcff: Gained IPv6LL Sep 13 00:15:38.626495 systemd-networkd[1251]: caliba1177c4dce: Gained IPv6LL 
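The whisker pull above completes "in 2.090544028s"; that figure is simply the pull's finish instant minus its start instant. The same arithmetic in Go, parsing RFC 3339 timestamps; the start instant below is back-computed from the logged finish time and duration, since containerd logs only the difference:

package main

import (
	"fmt"
	"time"
)

func main() {
	start, _ := time.Parse(time.RFC3339Nano, "2025-09-13T00:15:35.992136598Z")
	finish, _ := time.Parse(time.RFC3339Nano, "2025-09-13T00:15:38.082680626Z")
	fmt.Println(finish.Sub(start)) // 2.090544028s, as in the Pulled record
}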
Sep 13 00:15:38.627551 systemd-networkd[1251]: calia84207255cf: Gained IPv6LL Sep 13 00:15:38.720799 kubelet[2715]: E0913 00:15:38.720381 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:15:38.720799 kubelet[2715]: E0913 00:15:38.720492 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:15:39.138490 systemd-networkd[1251]: calia6b2801c739: Gained IPv6LL Sep 13 00:15:39.454891 containerd[1576]: time="2025-09-13T00:15:39.454800853Z" level=info msg="StopPodSandbox for \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\"" Sep 13 00:15:39.560584 containerd[1576]: 2025-09-13 00:15:39.508 [INFO][5001] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Sep 13 00:15:39.560584 containerd[1576]: 2025-09-13 00:15:39.510 [INFO][5001] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" iface="eth0" netns="/var/run/netns/cni-b9bf18a4-32ef-f77a-96ff-aab3b84c583e" Sep 13 00:15:39.560584 containerd[1576]: 2025-09-13 00:15:39.511 [INFO][5001] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" iface="eth0" netns="/var/run/netns/cni-b9bf18a4-32ef-f77a-96ff-aab3b84c583e" Sep 13 00:15:39.560584 containerd[1576]: 2025-09-13 00:15:39.511 [INFO][5001] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" iface="eth0" netns="/var/run/netns/cni-b9bf18a4-32ef-f77a-96ff-aab3b84c583e" Sep 13 00:15:39.560584 containerd[1576]: 2025-09-13 00:15:39.511 [INFO][5001] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Sep 13 00:15:39.560584 containerd[1576]: 2025-09-13 00:15:39.511 [INFO][5001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Sep 13 00:15:39.560584 containerd[1576]: 2025-09-13 00:15:39.543 [INFO][5010] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" HandleID="k8s-pod-network.86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Workload="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:39.560584 containerd[1576]: 2025-09-13 00:15:39.543 [INFO][5010] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:39.560584 containerd[1576]: 2025-09-13 00:15:39.543 [INFO][5010] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:39.560584 containerd[1576]: 2025-09-13 00:15:39.550 [WARNING][5010] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" HandleID="k8s-pod-network.86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Workload="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:39.560584 containerd[1576]: 2025-09-13 00:15:39.550 [INFO][5010] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" HandleID="k8s-pod-network.86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Workload="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:39.560584 containerd[1576]: 2025-09-13 00:15:39.553 [INFO][5010] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:39.560584 containerd[1576]: 2025-09-13 00:15:39.556 [INFO][5001] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Sep 13 00:15:39.561238 containerd[1576]: time="2025-09-13T00:15:39.560807950Z" level=info msg="TearDown network for sandbox \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\" successfully" Sep 13 00:15:39.561238 containerd[1576]: time="2025-09-13T00:15:39.560844881Z" level=info msg="StopPodSandbox for \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\" returns successfully" Sep 13 00:15:39.563656 containerd[1576]: time="2025-09-13T00:15:39.563612373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxtsh,Uid:aadb0960-6652-43de-9a7a-a3600967a9bb,Namespace:calico-system,Attempt:1,}" Sep 13 00:15:39.565301 systemd[1]: run-netns-cni\x2db9bf18a4\x2d32ef\x2df77a\x2d96ff\x2daab3b84c583e.mount: Deactivated successfully. Sep 13 00:15:39.650607 systemd-networkd[1251]: cali6ee9aa01d69: Gained IPv6LL Sep 13 00:15:39.723458 kubelet[2715]: E0913 00:15:39.723304 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:15:39.723984 kubelet[2715]: E0913 00:15:39.723523 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:15:39.879624 systemd[1]: Started sshd@8-10.0.0.132:22-10.0.0.1:56546.service - OpenSSH per-connection server daemon (10.0.0.1:56546). Sep 13 00:15:40.321992 sshd[5018]: Accepted publickey for core from 10.0.0.1 port 56546 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:15:40.324291 sshd[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:40.329660 systemd-logind[1564]: New session 9 of user core. Sep 13 00:15:40.337665 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:15:40.418441 systemd-resolved[1472]: Under memory pressure, flushing caches. Sep 13 00:15:40.418511 systemd-resolved[1472]: Flushed all caches. Sep 13 00:15:40.420350 systemd-journald[1156]: Under memory pressure, flushing caches. 
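The recurring kubelet dns.go:153 warnings above are not Calico or CoreDNS failures: the node's resolv.conf carries more nameservers than Kubernetes will propagate into a pod, so kubelet keeps the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) and drops the rest. Upstream Kubernetes caps a pod's resolv.conf at three nameservers; a sketch of the truncation, with that limit restated as a local constant and a hypothetical fourth server standing in for whatever was omitted on this node:

package main

import "fmt"

// maxDNSNameservers mirrors Kubernetes' limit of three nameservers per pod
// resolv.conf; the name is a local stand-in for the upstream constant.
const maxDNSNameservers = 3

// applyNameserverLimit returns the servers kubelet would apply and whether
// any were omitted (the condition that triggers the dns.go warning).
func applyNameserverLimit(servers []string) ([]string, bool) {
	if len(servers) <= maxDNSNameservers {
		return servers, false
	}
	return servers[:maxDNSNameservers], true
}

func main() {
	kept, truncated := applyNameserverLimit(
		[]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}) // 9.9.9.9 is illustrative
	fmt.Println(kept, truncated) // [1.1.1.1 1.0.0.1 8.8.8.8] true
}

Trimming the node's resolv.conf to three entries (or overriding via a pod dnsConfig) silences the warning.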
Sep 13 00:15:40.454638 containerd[1576]: time="2025-09-13T00:15:40.454581201Z" level=info msg="StopPodSandbox for \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\"" Sep 13 00:15:41.678508 containerd[1576]: 2025-09-13 00:15:40.954 [INFO][5043] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Sep 13 00:15:41.678508 containerd[1576]: 2025-09-13 00:15:40.955 [INFO][5043] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" iface="eth0" netns="/var/run/netns/cni-1814c0b1-16a3-7dff-2b14-43ac2c949738" Sep 13 00:15:41.678508 containerd[1576]: 2025-09-13 00:15:40.955 [INFO][5043] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" iface="eth0" netns="/var/run/netns/cni-1814c0b1-16a3-7dff-2b14-43ac2c949738" Sep 13 00:15:41.678508 containerd[1576]: 2025-09-13 00:15:40.955 [INFO][5043] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" iface="eth0" netns="/var/run/netns/cni-1814c0b1-16a3-7dff-2b14-43ac2c949738" Sep 13 00:15:41.678508 containerd[1576]: 2025-09-13 00:15:40.955 [INFO][5043] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Sep 13 00:15:41.678508 containerd[1576]: 2025-09-13 00:15:40.955 [INFO][5043] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Sep 13 00:15:41.678508 containerd[1576]: 2025-09-13 00:15:40.976 [INFO][5053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" HandleID="k8s-pod-network.6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Workload="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:41.678508 containerd[1576]: 2025-09-13 00:15:40.977 [INFO][5053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:41.678508 containerd[1576]: 2025-09-13 00:15:40.977 [INFO][5053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:41.678508 containerd[1576]: 2025-09-13 00:15:41.043 [WARNING][5053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" HandleID="k8s-pod-network.6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Workload="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:41.678508 containerd[1576]: 2025-09-13 00:15:41.044 [INFO][5053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" HandleID="k8s-pod-network.6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Workload="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:41.678508 containerd[1576]: 2025-09-13 00:15:41.672 [INFO][5053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:41.678508 containerd[1576]: 2025-09-13 00:15:41.675 [INFO][5043] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Sep 13 00:15:41.680469 containerd[1576]: time="2025-09-13T00:15:41.678685626Z" level=info msg="TearDown network for sandbox \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\" successfully" Sep 13 00:15:41.680469 containerd[1576]: time="2025-09-13T00:15:41.678711625Z" level=info msg="StopPodSandbox for \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\" returns successfully" Sep 13 00:15:41.680469 containerd[1576]: time="2025-09-13T00:15:41.679578078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699fc4d87c-lvdzr,Uid:59de3468-bd8b-4b91-bcdd-b230839ec151,Namespace:calico-system,Attempt:1,}" Sep 13 00:15:41.682068 systemd[1]: run-netns-cni\x2d1814c0b1\x2d16a3\x2d7dff\x2d2b14\x2d43ac2c949738.mount: Deactivated successfully. Sep 13 00:15:42.335194 sshd[5018]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:42.340424 systemd[1]: sshd@8-10.0.0.132:22-10.0.0.1:56546.service: Deactivated successfully. Sep 13 00:15:42.343508 systemd-logind[1564]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:15:42.343524 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:15:42.345193 systemd-logind[1564]: Removed session 9. Sep 13 00:15:42.466460 systemd-resolved[1472]: Under memory pressure, flushing caches. Sep 13 00:15:42.472603 systemd-journald[1156]: Under memory pressure, flushing caches. Sep 13 00:15:42.466470 systemd-resolved[1472]: Flushed all caches. Sep 13 00:15:44.111029 containerd[1576]: time="2025-09-13T00:15:44.110928881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:44.114140 containerd[1576]: time="2025-09-13T00:15:44.113816383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:15:44.116591 containerd[1576]: time="2025-09-13T00:15:44.116546901Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:44.120380 containerd[1576]: time="2025-09-13T00:15:44.120283538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:44.121708 containerd[1576]: time="2025-09-13T00:15:44.121539087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 6.037766928s" Sep 13 00:15:44.121708 containerd[1576]: time="2025-09-13T00:15:44.121592489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:15:44.124565 containerd[1576]: time="2025-09-13T00:15:44.124515840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:15:44.126458 containerd[1576]: time="2025-09-13T00:15:44.126414276Z" level=info msg="CreateContainer within sandbox 
\"14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:15:44.158562 containerd[1576]: time="2025-09-13T00:15:44.158417474Z" level=info msg="CreateContainer within sandbox \"14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ef48b8840544c84f4e23af3948007ae2b057b4d3a5533652ce3788445983bcdb\"" Sep 13 00:15:44.161399 containerd[1576]: time="2025-09-13T00:15:44.161351736Z" level=info msg="StartContainer for \"ef48b8840544c84f4e23af3948007ae2b057b4d3a5533652ce3788445983bcdb\"" Sep 13 00:15:44.385589 containerd[1576]: time="2025-09-13T00:15:44.383283578Z" level=info msg="StartContainer for \"ef48b8840544c84f4e23af3948007ae2b057b4d3a5533652ce3788445983bcdb\" returns successfully" Sep 13 00:15:44.395549 systemd-networkd[1251]: cali2e0db35177c: Link UP Sep 13 00:15:44.395844 systemd-networkd[1251]: cali2e0db35177c: Gained carrier Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.126 [INFO][5092] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0 calico-kube-controllers-699fc4d87c- calico-system 59de3468-bd8b-4b91-bcdd-b230839ec151 1038 0 2025-09-13 00:15:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:699fc4d87c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-699fc4d87c-lvdzr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2e0db35177c [] [] }} ContainerID="253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" Namespace="calico-system" Pod="calico-kube-controllers-699fc4d87c-lvdzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-" Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.127 [INFO][5092] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" Namespace="calico-system" Pod="calico-kube-controllers-699fc4d87c-lvdzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.178 [INFO][5112] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" HandleID="k8s-pod-network.253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" Workload="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.179 [INFO][5112] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" HandleID="k8s-pod-network.253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" Workload="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011d910), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-699fc4d87c-lvdzr", "timestamp":"2025-09-13 00:15:44.178617249 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.179 [INFO][5112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.179 [INFO][5112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.179 [INFO][5112] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.189 [INFO][5112] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" host="localhost" Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.196 [INFO][5112] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.206 [INFO][5112] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.211 [INFO][5112] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.215 [INFO][5112] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.215 [INFO][5112] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" host="localhost" Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.218 [INFO][5112] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028 Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.226 [INFO][5112] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" host="localhost" Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.384 [INFO][5112] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" host="localhost" Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.384 [INFO][5112] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" host="localhost" Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.385 [INFO][5112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
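This second walk of the IPAM path claims 192.168.88.135 for calico-kube-controllers from the same /26. The ipam.go 1243 step ("Writing block in order to claim IPs") is where the claim becomes durable: the updated block is written back conditionally, and if another writer committed first the write fails on a revision conflict and the allocator rereads and retries. A toy compare-and-swap sketch of that pattern, with an in-memory store and an integer revision standing in for Calico's datastore and its ResourceVersion:

package main

import (
	"errors"
	"fmt"
	"sync"
)

// block is a trimmed stand-in for a Calico IPAM block: a revision counter
// plus an allocation set, updated only via compare-and-swap.
type block struct {
	rev       int
	allocated map[int]bool
}

type store struct {
	mu  sync.Mutex
	blk block
}

var errConflict = errors.New("resource version conflict")

// casWrite commits an updated block only if the caller saw the latest
// revision, mirroring "Writing block in order to claim IPs".
func (s *store) casWrite(sawRev int, upd block) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if sawRev != s.blk.rev {
		return errConflict // another host claimed first; reread and retry
	}
	upd.rev = sawRev + 1
	s.blk = upd
	return nil
}

func main() {
	s := &store{blk: block{allocated: map[int]bool{}}}
	for {
		cur := s.blk // in reality: a fresh read from the datastore
		upd := block{allocated: map[int]bool{}}
		for ord := range cur.allocated {
			upd.allocated[ord] = true
		}
		upd.allocated[7] = true // ordinal 7 of the /26 is .135, as claimed above
		if s.casWrite(cur.rev, upd) == nil {
			break
		}
	}
	fmt.Println("claimed ordinal 7 (.135)")
}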
Sep 13 00:15:44.651368 containerd[1576]: 2025-09-13 00:15:44.385 [INFO][5112] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" HandleID="k8s-pod-network.253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" Workload="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:44.653069 containerd[1576]: 2025-09-13 00:15:44.390 [INFO][5092] cni-plugin/k8s.go 418: Populated endpoint ContainerID="253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" Namespace="calico-system" Pod="calico-kube-controllers-699fc4d87c-lvdzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0", GenerateName:"calico-kube-controllers-699fc4d87c-", Namespace:"calico-system", SelfLink:"", UID:"59de3468-bd8b-4b91-bcdd-b230839ec151", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699fc4d87c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-699fc4d87c-lvdzr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e0db35177c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:44.653069 containerd[1576]: 2025-09-13 00:15:44.390 [INFO][5092] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" Namespace="calico-system" Pod="calico-kube-controllers-699fc4d87c-lvdzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:44.653069 containerd[1576]: 2025-09-13 00:15:44.390 [INFO][5092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e0db35177c ContainerID="253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" Namespace="calico-system" Pod="calico-kube-controllers-699fc4d87c-lvdzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:44.653069 containerd[1576]: 2025-09-13 00:15:44.396 [INFO][5092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" Namespace="calico-system" Pod="calico-kube-controllers-699fc4d87c-lvdzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:44.653069 containerd[1576]: 2025-09-13 00:15:44.396 [INFO][5092] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" Namespace="calico-system" Pod="calico-kube-controllers-699fc4d87c-lvdzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0", GenerateName:"calico-kube-controllers-699fc4d87c-", Namespace:"calico-system", SelfLink:"", UID:"59de3468-bd8b-4b91-bcdd-b230839ec151", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699fc4d87c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028", Pod:"calico-kube-controllers-699fc4d87c-lvdzr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e0db35177c", MAC:"9e:8a:6b:45:46:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:44.653069 containerd[1576]: 2025-09-13 00:15:44.642 [INFO][5092] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028" Namespace="calico-system" Pod="calico-kube-controllers-699fc4d87c-lvdzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:44.711240 containerd[1576]: time="2025-09-13T00:15:44.711086268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:15:44.711240 containerd[1576]: time="2025-09-13T00:15:44.711188916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:15:44.711580 containerd[1576]: time="2025-09-13T00:15:44.711501148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:44.712201 containerd[1576]: time="2025-09-13T00:15:44.711674361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:44.754538 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:15:44.787648 containerd[1576]: time="2025-09-13T00:15:44.787602208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699fc4d87c-lvdzr,Uid:59de3468-bd8b-4b91-bcdd-b230839ec151,Namespace:calico-system,Attempt:1,} returns sandbox id \"253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028\"" Sep 13 00:15:45.134438 systemd-networkd[1251]: cali99b7c424d0f: Link UP Sep 13 00:15:45.136781 systemd-networkd[1251]: cali99b7c424d0f: Gained carrier Sep 13 00:15:45.152443 kubelet[2715]: I0913 00:15:45.152105 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d5556b5db-vwqmg" podStartSLOduration=30.137267085 podStartE2EDuration="37.152075609s" podCreationTimestamp="2025-09-13 00:15:08 +0000 UTC" firstStartedPulling="2025-09-13 00:15:37.108179292 +0000 UTC m=+44.771107809" lastFinishedPulling="2025-09-13 00:15:44.122987816 +0000 UTC m=+51.785916333" observedRunningTime="2025-09-13 00:15:45.04767646 +0000 UTC m=+52.710604987" watchObservedRunningTime="2025-09-13 00:15:45.152075609 +0000 UTC m=+52.815004126" Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:44.139 [INFO][5078] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lxtsh-eth0 csi-node-driver- calico-system aadb0960-6652-43de-9a7a-a3600967a9bb 1028 0 2025-09-13 00:15:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lxtsh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali99b7c424d0f [] [] }} ContainerID="7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" Namespace="calico-system" Pod="csi-node-driver-lxtsh" WorkloadEndpoint="localhost-k8s-csi--node--driver--lxtsh-" Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:44.140 [INFO][5078] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" Namespace="calico-system" Pod="csi-node-driver-lxtsh" WorkloadEndpoint="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:44.195 [INFO][5118] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" HandleID="k8s-pod-network.7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" Workload="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:44.195 [INFO][5118] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" HandleID="k8s-pod-network.7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" Workload="localhost-k8s-csi--node--driver--lxtsh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000517080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lxtsh", "timestamp":"2025-09-13 
00:15:44.195138489 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:44.195 [INFO][5118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:44.385 [INFO][5118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:44.385 [INFO][5118] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:44.423 [INFO][5118] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" host="localhost" Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:44.658 [INFO][5118] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:44.682 [INFO][5118] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:44.691 [INFO][5118] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:44.695 [INFO][5118] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:44.695 [INFO][5118] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" host="localhost" Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:44.701 [INFO][5118] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120 Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:44.981 [INFO][5118] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" host="localhost" Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:45.120 [INFO][5118] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" host="localhost" Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:45.120 [INFO][5118] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" host="localhost" Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:45.120 [INFO][5118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
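The pod_startup_latency_tracker record above (for calico-apiserver-d5556b5db-vwqmg) is pure arithmetic over four logged timestamps: end-to-end startup is observedRunningTime minus podCreationTimestamp (37.152075609s), and the SLO duration subtracts the image-pull window, 37.152075609 − (44.122987816 − 37.108179292) = 30.137267085s. The same computation in Go, parsing the timestamps exactly as they appear in the log (Go's default time.Time formatting):

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-09-13 00:15:08 +0000 UTC")
	running := parse("2025-09-13 00:15:45.152075609 +0000 UTC")
	pullStart := parse("2025-09-13 00:15:37.108179292 +0000 UTC")
	pullEnd := parse("2025-09-13 00:15:44.122987816 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(e2e, slo) // 37.152075609s 30.137267085s, as in the record
}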
Sep 13 00:15:45.168468 containerd[1576]: 2025-09-13 00:15:45.120 [INFO][5118] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" HandleID="k8s-pod-network.7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" Workload="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:45.174126 containerd[1576]: 2025-09-13 00:15:45.127 [INFO][5078] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" Namespace="calico-system" Pod="csi-node-driver-lxtsh" WorkloadEndpoint="localhost-k8s-csi--node--driver--lxtsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lxtsh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aadb0960-6652-43de-9a7a-a3600967a9bb", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lxtsh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99b7c424d0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:45.174126 containerd[1576]: 2025-09-13 00:15:45.127 [INFO][5078] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" Namespace="calico-system" Pod="csi-node-driver-lxtsh" WorkloadEndpoint="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:45.174126 containerd[1576]: 2025-09-13 00:15:45.127 [INFO][5078] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99b7c424d0f ContainerID="7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" Namespace="calico-system" Pod="csi-node-driver-lxtsh" WorkloadEndpoint="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:45.174126 containerd[1576]: 2025-09-13 00:15:45.136 [INFO][5078] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" Namespace="calico-system" Pod="csi-node-driver-lxtsh" WorkloadEndpoint="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:45.174126 containerd[1576]: 2025-09-13 00:15:45.137 [INFO][5078] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" Namespace="calico-system" Pod="csi-node-driver-lxtsh" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--lxtsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lxtsh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aadb0960-6652-43de-9a7a-a3600967a9bb", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120", Pod:"csi-node-driver-lxtsh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99b7c424d0f", MAC:"7e:3f:65:bd:30:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:45.174126 containerd[1576]: 2025-09-13 00:15:45.159 [INFO][5078] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120" Namespace="calico-system" Pod="csi-node-driver-lxtsh" WorkloadEndpoint="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:45.208598 containerd[1576]: time="2025-09-13T00:15:45.205236173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:15:45.208598 containerd[1576]: time="2025-09-13T00:15:45.205363388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:15:45.208598 containerd[1576]: time="2025-09-13T00:15:45.205373908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:45.208598 containerd[1576]: time="2025-09-13T00:15:45.205523276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:15:45.236369 systemd[1]: run-containerd-runc-k8s.io-7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120-runc.9vIcNt.mount: Deactivated successfully. 
Sep 13 00:15:45.250178 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:15:45.269118 containerd[1576]: time="2025-09-13T00:15:45.268957694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lxtsh,Uid:aadb0960-6652-43de-9a7a-a3600967a9bb,Namespace:calico-system,Attempt:1,} returns sandbox id \"7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120\"" Sep 13 00:15:45.744790 kubelet[2715]: I0913 00:15:45.744741 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:15:46.114744 systemd-networkd[1251]: cali2e0db35177c: Gained IPv6LL Sep 13 00:15:46.819942 systemd-networkd[1251]: cali99b7c424d0f: Gained IPv6LL Sep 13 00:15:47.347782 systemd[1]: Started sshd@9-10.0.0.132:22-10.0.0.1:43964.service - OpenSSH per-connection server daemon (10.0.0.1:43964). Sep 13 00:15:47.413225 sshd[5281]: Accepted publickey for core from 10.0.0.1 port 43964 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:15:47.416630 sshd[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:47.427245 systemd-logind[1564]: New session 10 of user core. Sep 13 00:15:47.434353 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:15:47.575790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3148896907.mount: Deactivated successfully. Sep 13 00:15:47.719013 sshd[5281]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:47.726174 systemd[1]: sshd@9-10.0.0.132:22-10.0.0.1:43964.service: Deactivated successfully. Sep 13 00:15:47.730917 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:15:47.732755 systemd-logind[1564]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:15:47.735188 systemd-logind[1564]: Removed session 10. 
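Sessions 9 and 10 above authenticate with the same RSA key, which sshd logs by its SHA-256 fingerprint (SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0). That fingerprint is the base64-encoded SHA-256 of the wire-format public key, and golang.org/x/crypto/ssh can reproduce it from an authorized_keys entry; the key below is a placeholder, since the log never contains the key material itself:

package main

import (
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder authorized_keys line; the real key behind the logged
	// fingerprint is not present in this log.
	entry := []byte("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJl3dKzUTiSrE3nYzkn9yGdCylSSMnYAC0dQJ0sO0QY7 core@host")
	key, _, _, _, err := ssh.ParseAuthorizedKey(entry)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ssh.FingerprintSHA256(key)) // prints "SHA256:..." as sshd does
}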
Sep 13 00:15:49.638475 kubelet[2715]: I0913 00:15:49.638354 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:15:49.745181 containerd[1576]: time="2025-09-13T00:15:49.743082191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:49.747212 containerd[1576]: time="2025-09-13T00:15:49.747129029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:15:49.752865 containerd[1576]: time="2025-09-13T00:15:49.751130740Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:49.770244 containerd[1576]: time="2025-09-13T00:15:49.768579102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:49.770244 containerd[1576]: time="2025-09-13T00:15:49.769465534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.644911051s" Sep 13 00:15:49.770244 containerd[1576]: time="2025-09-13T00:15:49.769500141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:15:49.782011 containerd[1576]: time="2025-09-13T00:15:49.780848865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:15:49.784701 containerd[1576]: time="2025-09-13T00:15:49.784376656Z" level=info msg="CreateContainer within sandbox \"782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:15:49.818309 containerd[1576]: time="2025-09-13T00:15:49.818247843Z" level=info msg="CreateContainer within sandbox \"782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"afe15b60f1600309e7d5c9a413e51a4cc8d531537c6c1cf694074b4b8442f262\"" Sep 13 00:15:49.819793 containerd[1576]: time="2025-09-13T00:15:49.819752612Z" level=info msg="StartContainer for \"afe15b60f1600309e7d5c9a413e51a4cc8d531537c6c1cf694074b4b8442f262\"" Sep 13 00:15:49.935002 containerd[1576]: time="2025-09-13T00:15:49.934738851Z" level=info msg="StartContainer for \"afe15b60f1600309e7d5c9a413e51a4cc8d531537c6c1cf694074b4b8442f262\" returns successfully" Sep 13 00:15:50.214761 containerd[1576]: time="2025-09-13T00:15:50.214463026Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:50.215609 containerd[1576]: time="2025-09-13T00:15:50.215532468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:15:50.218785 containerd[1576]: time="2025-09-13T00:15:50.218742706Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id 
\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 437.844786ms" Sep 13 00:15:50.218863 containerd[1576]: time="2025-09-13T00:15:50.218790317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:15:50.220231 containerd[1576]: time="2025-09-13T00:15:50.220197477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:15:50.222177 containerd[1576]: time="2025-09-13T00:15:50.222142077Z" level=info msg="CreateContainer within sandbox \"47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:15:50.247857 containerd[1576]: time="2025-09-13T00:15:50.247793706Z" level=info msg="CreateContainer within sandbox \"47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3b7db979010f6dfec8a50749c1857b5e1e3ba07490ed2c624f6a64a0fcd3792e\"" Sep 13 00:15:50.249456 containerd[1576]: time="2025-09-13T00:15:50.249072028Z" level=info msg="StartContainer for \"3b7db979010f6dfec8a50749c1857b5e1e3ba07490ed2c624f6a64a0fcd3792e\"" Sep 13 00:15:50.356570 containerd[1576]: time="2025-09-13T00:15:50.356401874Z" level=info msg="StartContainer for \"3b7db979010f6dfec8a50749c1857b5e1e3ba07490ed2c624f6a64a0fcd3792e\" returns successfully" Sep 13 00:15:51.537521 kubelet[2715]: I0913 00:15:51.537307 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d5556b5db-ljn92" podStartSLOduration=31.336259909 podStartE2EDuration="43.537234286s" podCreationTimestamp="2025-09-13 00:15:08 +0000 UTC" firstStartedPulling="2025-09-13 00:15:38.018807763 +0000 UTC m=+45.681736280" lastFinishedPulling="2025-09-13 00:15:50.21978214 +0000 UTC m=+57.882710657" observedRunningTime="2025-09-13 00:15:51.195471867 +0000 UTC m=+58.858400394" watchObservedRunningTime="2025-09-13 00:15:51.537234286 +0000 UTC m=+59.200162813" Sep 13 00:15:51.538596 kubelet[2715]: I0913 00:15:51.537664 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-zxh96" podStartSLOduration=29.116613201 podStartE2EDuration="41.537655944s" podCreationTimestamp="2025-09-13 00:15:10 +0000 UTC" firstStartedPulling="2025-09-13 00:15:37.357561433 +0000 UTC m=+45.020489950" lastFinishedPulling="2025-09-13 00:15:49.778604176 +0000 UTC m=+57.441532693" observedRunningTime="2025-09-13 00:15:51.525001593 +0000 UTC m=+59.187930110" watchObservedRunningTime="2025-09-13 00:15:51.537655944 +0000 UTC m=+59.200584461" Sep 13 00:15:51.815377 systemd[1]: run-containerd-runc-k8s.io-afe15b60f1600309e7d5c9a413e51a4cc8d531537c6c1cf694074b4b8442f262-runc.ExwAUL.mount: Deactivated successfully. Sep 13 00:15:52.434877 containerd[1576]: time="2025-09-13T00:15:52.434826513Z" level=info msg="StopPodSandbox for \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\"" Sep 13 00:15:52.569378 containerd[1576]: 2025-09-13 00:15:52.527 [WARNING][5497] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"10029b49-ba84-4580-85a1-aa34fcb74230", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58", Pod:"coredns-7c65d6cfc9-rbsdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia84207255cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:52.569378 containerd[1576]: 2025-09-13 00:15:52.527 [INFO][5497] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Sep 13 00:15:52.569378 containerd[1576]: 2025-09-13 00:15:52.527 [INFO][5497] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" iface="eth0" netns="" Sep 13 00:15:52.569378 containerd[1576]: 2025-09-13 00:15:52.527 [INFO][5497] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Sep 13 00:15:52.569378 containerd[1576]: 2025-09-13 00:15:52.527 [INFO][5497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Sep 13 00:15:52.569378 containerd[1576]: 2025-09-13 00:15:52.551 [INFO][5508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" HandleID="k8s-pod-network.c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Workload="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:52.569378 containerd[1576]: 2025-09-13 00:15:52.551 [INFO][5508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:52.569378 containerd[1576]: 2025-09-13 00:15:52.551 [INFO][5508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:15:52.569378 containerd[1576]: 2025-09-13 00:15:52.560 [WARNING][5508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" HandleID="k8s-pod-network.c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Workload="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:52.569378 containerd[1576]: 2025-09-13 00:15:52.560 [INFO][5508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" HandleID="k8s-pod-network.c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Workload="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:52.569378 containerd[1576]: 2025-09-13 00:15:52.562 [INFO][5508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:52.569378 containerd[1576]: 2025-09-13 00:15:52.565 [INFO][5497] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Sep 13 00:15:52.570006 containerd[1576]: time="2025-09-13T00:15:52.569424778Z" level=info msg="TearDown network for sandbox \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\" successfully" Sep 13 00:15:52.570006 containerd[1576]: time="2025-09-13T00:15:52.569452891Z" level=info msg="StopPodSandbox for \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\" returns successfully" Sep 13 00:15:52.570217 containerd[1576]: time="2025-09-13T00:15:52.570180395Z" level=info msg="RemovePodSandbox for \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\"" Sep 13 00:15:52.572551 containerd[1576]: time="2025-09-13T00:15:52.572526721Z" level=info msg="Forcibly stopping sandbox \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\"" Sep 13 00:15:52.730790 systemd[1]: Started sshd@10-10.0.0.132:22-10.0.0.1:47110.service - OpenSSH per-connection server daemon (10.0.0.1:47110). Sep 13 00:15:52.791574 kubelet[2715]: I0913 00:15:52.791546 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:15:52.828286 sshd[5541]: Accepted publickey for core from 10.0.0.1 port 47110 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:15:52.831192 sshd[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:52.835978 systemd-logind[1564]: New session 11 of user core. Sep 13 00:15:52.842698 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:15:52.851520 containerd[1576]: 2025-09-13 00:15:52.697 [WARNING][5526] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"10029b49-ba84-4580-85a1-aa34fcb74230", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5c1455adbc25390d51ef77f33c253acb24992f0d7714f9426d8bfe0360f5e58", Pod:"coredns-7c65d6cfc9-rbsdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia84207255cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:52.851520 containerd[1576]: 2025-09-13 00:15:52.697 [INFO][5526] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Sep 13 00:15:52.851520 containerd[1576]: 2025-09-13 00:15:52.697 [INFO][5526] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" iface="eth0" netns="" Sep 13 00:15:52.851520 containerd[1576]: 2025-09-13 00:15:52.697 [INFO][5526] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Sep 13 00:15:52.851520 containerd[1576]: 2025-09-13 00:15:52.697 [INFO][5526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Sep 13 00:15:52.851520 containerd[1576]: 2025-09-13 00:15:52.725 [INFO][5534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" HandleID="k8s-pod-network.c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Workload="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:52.851520 containerd[1576]: 2025-09-13 00:15:52.725 [INFO][5534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:52.851520 containerd[1576]: 2025-09-13 00:15:52.725 [INFO][5534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:15:52.851520 containerd[1576]: 2025-09-13 00:15:52.800 [WARNING][5534] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" HandleID="k8s-pod-network.c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Workload="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:52.851520 containerd[1576]: 2025-09-13 00:15:52.800 [INFO][5534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" HandleID="k8s-pod-network.c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Workload="localhost-k8s-coredns--7c65d6cfc9--rbsdl-eth0" Sep 13 00:15:52.851520 containerd[1576]: 2025-09-13 00:15:52.844 [INFO][5534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:52.851520 containerd[1576]: 2025-09-13 00:15:52.848 [INFO][5526] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae" Sep 13 00:15:52.851944 containerd[1576]: time="2025-09-13T00:15:52.851562320Z" level=info msg="TearDown network for sandbox \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\" successfully" Sep 13 00:15:52.912061 containerd[1576]: time="2025-09-13T00:15:52.911974786Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:15:52.912247 containerd[1576]: time="2025-09-13T00:15:52.912086140Z" level=info msg="RemovePodSandbox \"c3fa8b8819dcef40a832d3c73d8a9a6c5d8c0893cb6755b71aa65980092947ae\" returns successfully" Sep 13 00:15:52.912892 containerd[1576]: time="2025-09-13T00:15:52.912848901Z" level=info msg="StopPodSandbox for \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\"" Sep 13 00:15:53.019218 containerd[1576]: 2025-09-13 00:15:52.959 [WARNING][5561] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" WorkloadEndpoint="localhost-k8s-whisker--77ff67458c--fc2pm-eth0" Sep 13 00:15:53.019218 containerd[1576]: 2025-09-13 00:15:52.959 [INFO][5561] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Sep 13 00:15:53.019218 containerd[1576]: 2025-09-13 00:15:52.959 [INFO][5561] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" iface="eth0" netns="" Sep 13 00:15:53.019218 containerd[1576]: 2025-09-13 00:15:52.959 [INFO][5561] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Sep 13 00:15:53.019218 containerd[1576]: 2025-09-13 00:15:52.959 [INFO][5561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Sep 13 00:15:53.019218 containerd[1576]: 2025-09-13 00:15:52.993 [INFO][5570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" HandleID="k8s-pod-network.89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Workload="localhost-k8s-whisker--77ff67458c--fc2pm-eth0" Sep 13 00:15:53.019218 containerd[1576]: 2025-09-13 00:15:52.993 [INFO][5570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:53.019218 containerd[1576]: 2025-09-13 00:15:52.993 [INFO][5570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:53.019218 containerd[1576]: 2025-09-13 00:15:53.007 [WARNING][5570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" HandleID="k8s-pod-network.89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Workload="localhost-k8s-whisker--77ff67458c--fc2pm-eth0" Sep 13 00:15:53.019218 containerd[1576]: 2025-09-13 00:15:53.007 [INFO][5570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" HandleID="k8s-pod-network.89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Workload="localhost-k8s-whisker--77ff67458c--fc2pm-eth0" Sep 13 00:15:53.019218 containerd[1576]: 2025-09-13 00:15:53.010 [INFO][5570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:53.019218 containerd[1576]: 2025-09-13 00:15:53.014 [INFO][5561] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Sep 13 00:15:53.024717 containerd[1576]: time="2025-09-13T00:15:53.019275746Z" level=info msg="TearDown network for sandbox \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\" successfully" Sep 13 00:15:53.024717 containerd[1576]: time="2025-09-13T00:15:53.019310112Z" level=info msg="StopPodSandbox for \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\" returns successfully" Sep 13 00:15:53.027412 containerd[1576]: time="2025-09-13T00:15:53.026504409Z" level=info msg="RemovePodSandbox for \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\"" Sep 13 00:15:53.027412 containerd[1576]: time="2025-09-13T00:15:53.026592388Z" level=info msg="Forcibly stopping sandbox \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\"" Sep 13 00:15:53.115331 sshd[5541]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:53.127167 systemd[1]: Started sshd@11-10.0.0.132:22-10.0.0.1:47118.service - OpenSSH per-connection server daemon (10.0.0.1:47118). Sep 13 00:15:53.128464 systemd[1]: sshd@10-10.0.0.132:22-10.0.0.1:47110.service: Deactivated successfully. Sep 13 00:15:53.132590 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:15:53.135112 systemd-logind[1564]: Session 11 logged out. 
Waiting for processes to exit. Sep 13 00:15:53.136507 systemd-logind[1564]: Removed session 11. Sep 13 00:15:53.160018 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 47118 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:15:53.162331 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:53.169223 systemd-logind[1564]: New session 12 of user core. Sep 13 00:15:53.175849 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:15:53.185919 containerd[1576]: 2025-09-13 00:15:53.093 [WARNING][5594] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" WorkloadEndpoint="localhost-k8s-whisker--77ff67458c--fc2pm-eth0" Sep 13 00:15:53.185919 containerd[1576]: 2025-09-13 00:15:53.093 [INFO][5594] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Sep 13 00:15:53.185919 containerd[1576]: 2025-09-13 00:15:53.093 [INFO][5594] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" iface="eth0" netns="" Sep 13 00:15:53.185919 containerd[1576]: 2025-09-13 00:15:53.093 [INFO][5594] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Sep 13 00:15:53.185919 containerd[1576]: 2025-09-13 00:15:53.093 [INFO][5594] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Sep 13 00:15:53.185919 containerd[1576]: 2025-09-13 00:15:53.126 [INFO][5604] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" HandleID="k8s-pod-network.89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Workload="localhost-k8s-whisker--77ff67458c--fc2pm-eth0" Sep 13 00:15:53.185919 containerd[1576]: 2025-09-13 00:15:53.127 [INFO][5604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:53.185919 containerd[1576]: 2025-09-13 00:15:53.127 [INFO][5604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:53.185919 containerd[1576]: 2025-09-13 00:15:53.172 [WARNING][5604] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" HandleID="k8s-pod-network.89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Workload="localhost-k8s-whisker--77ff67458c--fc2pm-eth0" Sep 13 00:15:53.185919 containerd[1576]: 2025-09-13 00:15:53.173 [INFO][5604] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" HandleID="k8s-pod-network.89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Workload="localhost-k8s-whisker--77ff67458c--fc2pm-eth0" Sep 13 00:15:53.185919 containerd[1576]: 2025-09-13 00:15:53.175 [INFO][5604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:53.185919 containerd[1576]: 2025-09-13 00:15:53.179 [INFO][5594] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7" Sep 13 00:15:53.186427 containerd[1576]: time="2025-09-13T00:15:53.185965061Z" level=info msg="TearDown network for sandbox \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\" successfully" Sep 13 00:15:53.205100 containerd[1576]: time="2025-09-13T00:15:53.204999283Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:15:53.205100 containerd[1576]: time="2025-09-13T00:15:53.205105987Z" level=info msg="RemovePodSandbox \"89f461a6236e69b79dc3ece1256f51a87a18842608e550c4d2c627fcf13fafe7\" returns successfully" Sep 13 00:15:53.205854 containerd[1576]: time="2025-09-13T00:15:53.205815515Z" level=info msg="StopPodSandbox for \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\"" Sep 13 00:15:53.703681 containerd[1576]: 2025-09-13 00:15:53.431 [WARNING][5633] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0", GenerateName:"calico-apiserver-d5556b5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"1571c160-5cc3-4360-89ff-652e3fc95a50", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5556b5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12", Pod:"calico-apiserver-d5556b5db-vwqmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali125b13fbcff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:53.703681 containerd[1576]: 2025-09-13 00:15:53.432 [INFO][5633] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Sep 13 00:15:53.703681 containerd[1576]: 2025-09-13 00:15:53.432 [INFO][5633] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" iface="eth0" netns="" Sep 13 00:15:53.703681 containerd[1576]: 2025-09-13 00:15:53.432 [INFO][5633] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Sep 13 00:15:53.703681 containerd[1576]: 2025-09-13 00:15:53.432 [INFO][5633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Sep 13 00:15:53.703681 containerd[1576]: 2025-09-13 00:15:53.471 [INFO][5644] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" HandleID="k8s-pod-network.173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Workload="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:53.703681 containerd[1576]: 2025-09-13 00:15:53.471 [INFO][5644] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:53.703681 containerd[1576]: 2025-09-13 00:15:53.471 [INFO][5644] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:53.703681 containerd[1576]: 2025-09-13 00:15:53.549 [WARNING][5644] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" HandleID="k8s-pod-network.173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Workload="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:53.703681 containerd[1576]: 2025-09-13 00:15:53.549 [INFO][5644] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" HandleID="k8s-pod-network.173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Workload="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:53.703681 containerd[1576]: 2025-09-13 00:15:53.686 [INFO][5644] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:53.703681 containerd[1576]: 2025-09-13 00:15:53.698 [INFO][5633] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Sep 13 00:15:53.705482 containerd[1576]: time="2025-09-13T00:15:53.703736868Z" level=info msg="TearDown network for sandbox \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\" successfully" Sep 13 00:15:53.705482 containerd[1576]: time="2025-09-13T00:15:53.703774770Z" level=info msg="StopPodSandbox for \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\" returns successfully" Sep 13 00:15:53.706487 containerd[1576]: time="2025-09-13T00:15:53.705726228Z" level=info msg="RemovePodSandbox for \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\"" Sep 13 00:15:53.706487 containerd[1576]: time="2025-09-13T00:15:53.705770222Z" level=info msg="Forcibly stopping sandbox \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\"" Sep 13 00:15:53.731537 sshd[5610]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:53.743946 systemd[1]: Started sshd@12-10.0.0.132:22-10.0.0.1:47122.service - OpenSSH per-connection server daemon (10.0.0.1:47122). Sep 13 00:15:53.965942 containerd[1576]: 2025-09-13 00:15:53.924 [WARNING][5663] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0", GenerateName:"calico-apiserver-d5556b5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"1571c160-5cc3-4360-89ff-652e3fc95a50", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5556b5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14fafaa971cb17b4040550e8ca504df87deb260387579c94b95d46603e6c4b12", Pod:"calico-apiserver-d5556b5db-vwqmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali125b13fbcff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:53.965942 containerd[1576]: 2025-09-13 00:15:53.924 [INFO][5663] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Sep 13 00:15:53.965942 containerd[1576]: 2025-09-13 00:15:53.924 [INFO][5663] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" iface="eth0" netns="" Sep 13 00:15:53.965942 containerd[1576]: 2025-09-13 00:15:53.924 [INFO][5663] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Sep 13 00:15:53.965942 containerd[1576]: 2025-09-13 00:15:53.924 [INFO][5663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Sep 13 00:15:53.965942 containerd[1576]: 2025-09-13 00:15:53.949 [INFO][5672] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" HandleID="k8s-pod-network.173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Workload="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:53.965942 containerd[1576]: 2025-09-13 00:15:53.949 [INFO][5672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:53.965942 containerd[1576]: 2025-09-13 00:15:53.949 [INFO][5672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:53.965942 containerd[1576]: 2025-09-13 00:15:53.957 [WARNING][5672] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" HandleID="k8s-pod-network.173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Workload="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:53.965942 containerd[1576]: 2025-09-13 00:15:53.957 [INFO][5672] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" HandleID="k8s-pod-network.173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Workload="localhost-k8s-calico--apiserver--d5556b5db--vwqmg-eth0" Sep 13 00:15:53.965942 containerd[1576]: 2025-09-13 00:15:53.959 [INFO][5672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:53.965942 containerd[1576]: 2025-09-13 00:15:53.962 [INFO][5663] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616" Sep 13 00:15:53.965942 containerd[1576]: time="2025-09-13T00:15:53.965889988Z" level=info msg="TearDown network for sandbox \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\" successfully" Sep 13 00:15:54.182457 systemd[1]: sshd@11-10.0.0.132:22-10.0.0.1:47118.service: Deactivated successfully. Sep 13 00:15:54.192595 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:15:54.195031 systemd-logind[1564]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:15:54.196230 systemd-logind[1564]: Removed session 12. Sep 13 00:15:54.215214 containerd[1576]: time="2025-09-13T00:15:54.215132935Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:15:54.215516 containerd[1576]: time="2025-09-13T00:15:54.215236974Z" level=info msg="RemovePodSandbox \"173f927060c908bcbcb34ebacada615d68ee76f88c35b5ae9eba7367e94ae616\" returns successfully" Sep 13 00:15:54.218500 containerd[1576]: time="2025-09-13T00:15:54.216748838Z" level=info msg="StopPodSandbox for \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\"" Sep 13 00:15:54.240347 sshd[5668]: Accepted publickey for core from 10.0.0.1 port 47122 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:15:54.243382 sshd[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:54.251381 systemd-logind[1564]: New session 13 of user core. Sep 13 00:15:54.259215 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:15:54.311473 containerd[1576]: 2025-09-13 00:15:54.262 [WARNING][5700] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0", GenerateName:"calico-apiserver-d5556b5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0877890-9528-461c-9cba-50fe493297ef", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5556b5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6", Pod:"calico-apiserver-d5556b5db-ljn92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ee9aa01d69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:54.311473 containerd[1576]: 2025-09-13 00:15:54.263 [INFO][5700] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Sep 13 00:15:54.311473 containerd[1576]: 2025-09-13 00:15:54.263 [INFO][5700] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" iface="eth0" netns="" Sep 13 00:15:54.311473 containerd[1576]: 2025-09-13 00:15:54.263 [INFO][5700] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Sep 13 00:15:54.311473 containerd[1576]: 2025-09-13 00:15:54.263 [INFO][5700] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Sep 13 00:15:54.311473 containerd[1576]: 2025-09-13 00:15:54.290 [INFO][5709] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" HandleID="k8s-pod-network.5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Workload="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:54.311473 containerd[1576]: 2025-09-13 00:15:54.290 [INFO][5709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:54.311473 containerd[1576]: 2025-09-13 00:15:54.290 [INFO][5709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:54.311473 containerd[1576]: 2025-09-13 00:15:54.297 [WARNING][5709] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" HandleID="k8s-pod-network.5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Workload="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:54.311473 containerd[1576]: 2025-09-13 00:15:54.297 [INFO][5709] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" HandleID="k8s-pod-network.5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Workload="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:54.311473 containerd[1576]: 2025-09-13 00:15:54.300 [INFO][5709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:54.311473 containerd[1576]: 2025-09-13 00:15:54.305 [INFO][5700] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Sep 13 00:15:54.311473 containerd[1576]: time="2025-09-13T00:15:54.310588737Z" level=info msg="TearDown network for sandbox \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\" successfully" Sep 13 00:15:54.311473 containerd[1576]: time="2025-09-13T00:15:54.310616179Z" level=info msg="StopPodSandbox for \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\" returns successfully" Sep 13 00:15:54.311473 containerd[1576]: time="2025-09-13T00:15:54.311110215Z" level=info msg="RemovePodSandbox for \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\"" Sep 13 00:15:54.311473 containerd[1576]: time="2025-09-13T00:15:54.311140352Z" level=info msg="Forcibly stopping sandbox \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\"" Sep 13 00:15:54.436985 systemd-journald[1156]: Under memory pressure, flushing caches. Sep 13 00:15:54.434474 systemd-resolved[1472]: Under memory pressure, flushing caches. Sep 13 00:15:54.434518 systemd-resolved[1472]: Flushed all caches. Sep 13 00:15:54.475815 containerd[1576]: 2025-09-13 00:15:54.386 [WARNING][5735] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0", GenerateName:"calico-apiserver-d5556b5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0877890-9528-461c-9cba-50fe493297ef", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5556b5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47ecad622405859bf272a3d2b1f652d4f684ad7fc2eeda884998a9ba79d8b4d6", Pod:"calico-apiserver-d5556b5db-ljn92", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ee9aa01d69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:54.475815 containerd[1576]: 2025-09-13 00:15:54.387 [INFO][5735] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Sep 13 00:15:54.475815 containerd[1576]: 2025-09-13 00:15:54.387 [INFO][5735] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" iface="eth0" netns="" Sep 13 00:15:54.475815 containerd[1576]: 2025-09-13 00:15:54.387 [INFO][5735] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Sep 13 00:15:54.475815 containerd[1576]: 2025-09-13 00:15:54.387 [INFO][5735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Sep 13 00:15:54.475815 containerd[1576]: 2025-09-13 00:15:54.457 [INFO][5761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" HandleID="k8s-pod-network.5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Workload="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:54.475815 containerd[1576]: 2025-09-13 00:15:54.458 [INFO][5761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:54.475815 containerd[1576]: 2025-09-13 00:15:54.458 [INFO][5761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:54.475815 containerd[1576]: 2025-09-13 00:15:54.464 [WARNING][5761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" HandleID="k8s-pod-network.5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Workload="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:54.475815 containerd[1576]: 2025-09-13 00:15:54.464 [INFO][5761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" HandleID="k8s-pod-network.5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Workload="localhost-k8s-calico--apiserver--d5556b5db--ljn92-eth0" Sep 13 00:15:54.475815 containerd[1576]: 2025-09-13 00:15:54.467 [INFO][5761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:54.475815 containerd[1576]: 2025-09-13 00:15:54.471 [INFO][5735] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619" Sep 13 00:15:54.475815 containerd[1576]: time="2025-09-13T00:15:54.475767857Z" level=info msg="TearDown network for sandbox \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\" successfully" Sep 13 00:15:54.693886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159108158.mount: Deactivated successfully. Sep 13 00:15:54.819225 containerd[1576]: time="2025-09-13T00:15:54.817133468Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:15:54.819225 containerd[1576]: time="2025-09-13T00:15:54.817242557Z" level=info msg="RemovePodSandbox \"5efec0081d5536afe794512736d071376071bbef2f71c04695b3da36940fb619\" returns successfully" Sep 13 00:15:54.820992 containerd[1576]: time="2025-09-13T00:15:54.820617847Z" level=info msg="StopPodSandbox for \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\"" Sep 13 00:15:55.043154 containerd[1576]: 2025-09-13 00:15:54.984 [WARNING][5786] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"feb52a36-15cd-4f86-a703-48f9b874433f", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889", Pod:"coredns-7c65d6cfc9-jvqvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6b2801c739", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:55.043154 containerd[1576]: 2025-09-13 00:15:54.986 [INFO][5786] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Sep 13 00:15:55.043154 containerd[1576]: 2025-09-13 00:15:54.986 [INFO][5786] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" iface="eth0" netns="" Sep 13 00:15:55.043154 containerd[1576]: 2025-09-13 00:15:54.986 [INFO][5786] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Sep 13 00:15:55.043154 containerd[1576]: 2025-09-13 00:15:54.986 [INFO][5786] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Sep 13 00:15:55.043154 containerd[1576]: 2025-09-13 00:15:55.021 [INFO][5799] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" HandleID="k8s-pod-network.4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Workload="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:55.043154 containerd[1576]: 2025-09-13 00:15:55.021 [INFO][5799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:55.043154 containerd[1576]: 2025-09-13 00:15:55.021 [INFO][5799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:15:55.043154 containerd[1576]: 2025-09-13 00:15:55.035 [WARNING][5799] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" HandleID="k8s-pod-network.4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Workload="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:55.043154 containerd[1576]: 2025-09-13 00:15:55.035 [INFO][5799] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" HandleID="k8s-pod-network.4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Workload="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:55.043154 containerd[1576]: 2025-09-13 00:15:55.036 [INFO][5799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:55.043154 containerd[1576]: 2025-09-13 00:15:55.039 [INFO][5786] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Sep 13 00:15:55.043822 containerd[1576]: time="2025-09-13T00:15:55.043201558Z" level=info msg="TearDown network for sandbox \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\" successfully" Sep 13 00:15:55.043822 containerd[1576]: time="2025-09-13T00:15:55.043227758Z" level=info msg="StopPodSandbox for \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\" returns successfully" Sep 13 00:15:55.043889 containerd[1576]: time="2025-09-13T00:15:55.043861811Z" level=info msg="RemovePodSandbox for \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\"" Sep 13 00:15:55.043924 containerd[1576]: time="2025-09-13T00:15:55.043904493Z" level=info msg="Forcibly stopping sandbox \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\"" Sep 13 00:15:55.336555 sshd[5668]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:55.341024 systemd[1]: sshd@12-10.0.0.132:22-10.0.0.1:47122.service: Deactivated successfully. Sep 13 00:15:55.343510 systemd-logind[1564]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:15:55.346059 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:15:55.348393 systemd-logind[1564]: Removed session 13. Sep 13 00:15:55.375428 containerd[1576]: 2025-09-13 00:15:55.329 [WARNING][5818] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"feb52a36-15cd-4f86-a703-48f9b874433f", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc5387721e37b67ce3b676c1d6f46ef82da8bf420115c934b26d306035d80889", Pod:"coredns-7c65d6cfc9-jvqvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6b2801c739", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:55.375428 containerd[1576]: 2025-09-13 00:15:55.330 [INFO][5818] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Sep 13 00:15:55.375428 containerd[1576]: 2025-09-13 00:15:55.330 [INFO][5818] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" iface="eth0" netns="" Sep 13 00:15:55.375428 containerd[1576]: 2025-09-13 00:15:55.330 [INFO][5818] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Sep 13 00:15:55.375428 containerd[1576]: 2025-09-13 00:15:55.330 [INFO][5818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Sep 13 00:15:55.375428 containerd[1576]: 2025-09-13 00:15:55.359 [INFO][5830] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" HandleID="k8s-pod-network.4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Workload="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:55.375428 containerd[1576]: 2025-09-13 00:15:55.360 [INFO][5830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:55.375428 containerd[1576]: 2025-09-13 00:15:55.360 [INFO][5830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:15:55.375428 containerd[1576]: 2025-09-13 00:15:55.366 [WARNING][5830] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" HandleID="k8s-pod-network.4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Workload="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:55.375428 containerd[1576]: 2025-09-13 00:15:55.366 [INFO][5830] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" HandleID="k8s-pod-network.4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Workload="localhost-k8s-coredns--7c65d6cfc9--jvqvw-eth0" Sep 13 00:15:55.375428 containerd[1576]: 2025-09-13 00:15:55.368 [INFO][5830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:55.375428 containerd[1576]: 2025-09-13 00:15:55.372 [INFO][5818] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7" Sep 13 00:15:55.375428 containerd[1576]: time="2025-09-13T00:15:55.375302923Z" level=info msg="TearDown network for sandbox \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\" successfully" Sep 13 00:15:56.054373 containerd[1576]: time="2025-09-13T00:15:56.054224591Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:15:56.054373 containerd[1576]: time="2025-09-13T00:15:56.054356292Z" level=info msg="RemovePodSandbox \"4f8b9c82752a76cfdeba1c23ea37ba2ffb4836368e4530e9ebfb326b35f592c7\" returns successfully" Sep 13 00:15:56.054999 containerd[1576]: time="2025-09-13T00:15:56.054943214Z" level=info msg="StopPodSandbox for \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\"" Sep 13 00:15:56.147441 containerd[1576]: 2025-09-13 00:15:56.101 [WARNING][5851] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0", GenerateName:"calico-kube-controllers-699fc4d87c-", Namespace:"calico-system", SelfLink:"", UID:"59de3468-bd8b-4b91-bcdd-b230839ec151", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699fc4d87c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028", Pod:"calico-kube-controllers-699fc4d87c-lvdzr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e0db35177c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:56.147441 containerd[1576]: 2025-09-13 00:15:56.101 [INFO][5851] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Sep 13 00:15:56.147441 containerd[1576]: 2025-09-13 00:15:56.101 [INFO][5851] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" iface="eth0" netns="" Sep 13 00:15:56.147441 containerd[1576]: 2025-09-13 00:15:56.102 [INFO][5851] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Sep 13 00:15:56.147441 containerd[1576]: 2025-09-13 00:15:56.102 [INFO][5851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Sep 13 00:15:56.147441 containerd[1576]: 2025-09-13 00:15:56.133 [INFO][5860] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" HandleID="k8s-pod-network.6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Workload="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:56.147441 containerd[1576]: 2025-09-13 00:15:56.133 [INFO][5860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:56.147441 containerd[1576]: 2025-09-13 00:15:56.133 [INFO][5860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:56.147441 containerd[1576]: 2025-09-13 00:15:56.138 [WARNING][5860] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" HandleID="k8s-pod-network.6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Workload="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:56.147441 containerd[1576]: 2025-09-13 00:15:56.138 [INFO][5860] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" HandleID="k8s-pod-network.6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Workload="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:56.147441 containerd[1576]: 2025-09-13 00:15:56.140 [INFO][5860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:56.147441 containerd[1576]: 2025-09-13 00:15:56.143 [INFO][5851] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Sep 13 00:15:56.150975 containerd[1576]: time="2025-09-13T00:15:56.147475427Z" level=info msg="TearDown network for sandbox \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\" successfully" Sep 13 00:15:56.150975 containerd[1576]: time="2025-09-13T00:15:56.147505875Z" level=info msg="StopPodSandbox for \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\" returns successfully" Sep 13 00:15:56.150975 containerd[1576]: time="2025-09-13T00:15:56.148186025Z" level=info msg="RemovePodSandbox for \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\"" Sep 13 00:15:56.150975 containerd[1576]: time="2025-09-13T00:15:56.148225762Z" level=info msg="Forcibly stopping sandbox \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\"" Sep 13 00:15:56.150975 containerd[1576]: time="2025-09-13T00:15:56.149636649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:56.153773 containerd[1576]: time="2025-09-13T00:15:56.153703996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:15:56.158286 containerd[1576]: time="2025-09-13T00:15:56.157978539Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:56.163990 containerd[1576]: time="2025-09-13T00:15:56.163935840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:56.182771 containerd[1576]: time="2025-09-13T00:15:56.182466661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 5.962232253s" Sep 13 00:15:56.182771 containerd[1576]: time="2025-09-13T00:15:56.182510896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:15:56.184159 containerd[1576]: 
time="2025-09-13T00:15:56.184103490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:15:56.190512 containerd[1576]: time="2025-09-13T00:15:56.189859386Z" level=info msg="CreateContainer within sandbox \"29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:15:56.211264 containerd[1576]: time="2025-09-13T00:15:56.210931916Z" level=info msg="CreateContainer within sandbox \"29f8f8674d8b3931217f74ace484053c9df4ade2837dbedcc6136eadb1d3729b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2fb31ca16ef40d8c4cd63b0ece8dc6bb081a9761b609bf819fd70f4932dfbdbb\"" Sep 13 00:15:56.211799 containerd[1576]: time="2025-09-13T00:15:56.211662082Z" level=info msg="StartContainer for \"2fb31ca16ef40d8c4cd63b0ece8dc6bb081a9761b609bf819fd70f4932dfbdbb\"" Sep 13 00:15:56.294644 containerd[1576]: 2025-09-13 00:15:56.209 [WARNING][5878] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0", GenerateName:"calico-kube-controllers-699fc4d87c-", Namespace:"calico-system", SelfLink:"", UID:"59de3468-bd8b-4b91-bcdd-b230839ec151", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699fc4d87c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028", Pod:"calico-kube-controllers-699fc4d87c-lvdzr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e0db35177c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:56.294644 containerd[1576]: 2025-09-13 00:15:56.209 [INFO][5878] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Sep 13 00:15:56.294644 containerd[1576]: 2025-09-13 00:15:56.209 [INFO][5878] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" iface="eth0" netns="" Sep 13 00:15:56.294644 containerd[1576]: 2025-09-13 00:15:56.209 [INFO][5878] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Sep 13 00:15:56.294644 containerd[1576]: 2025-09-13 00:15:56.210 [INFO][5878] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Sep 13 00:15:56.294644 containerd[1576]: 2025-09-13 00:15:56.255 [INFO][5886] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" HandleID="k8s-pod-network.6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Workload="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:56.294644 containerd[1576]: 2025-09-13 00:15:56.255 [INFO][5886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:56.294644 containerd[1576]: 2025-09-13 00:15:56.255 [INFO][5886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:56.294644 containerd[1576]: 2025-09-13 00:15:56.272 [WARNING][5886] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" HandleID="k8s-pod-network.6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Workload="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:56.294644 containerd[1576]: 2025-09-13 00:15:56.273 [INFO][5886] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" HandleID="k8s-pod-network.6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Workload="localhost-k8s-calico--kube--controllers--699fc4d87c--lvdzr-eth0" Sep 13 00:15:56.294644 containerd[1576]: 2025-09-13 00:15:56.278 [INFO][5886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:56.294644 containerd[1576]: 2025-09-13 00:15:56.290 [INFO][5878] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929" Sep 13 00:15:56.295793 containerd[1576]: time="2025-09-13T00:15:56.295757124Z" level=info msg="TearDown network for sandbox \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\" successfully" Sep 13 00:15:56.308143 containerd[1576]: time="2025-09-13T00:15:56.307376229Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:15:56.308143 containerd[1576]: time="2025-09-13T00:15:56.307475679Z" level=info msg="RemovePodSandbox \"6dc48e09c34366dea7ae3a788ed10c7b65d1a1944cf7a884fcc8a88336798929\" returns successfully" Sep 13 00:15:56.308143 containerd[1576]: time="2025-09-13T00:15:56.308114070Z" level=info msg="StopPodSandbox for \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\"" Sep 13 00:15:56.382978 containerd[1576]: time="2025-09-13T00:15:56.382816041Z" level=info msg="StartContainer for \"2fb31ca16ef40d8c4cd63b0ece8dc6bb081a9761b609bf819fd70f4932dfbdbb\" returns successfully" Sep 13 00:15:56.401623 containerd[1576]: 2025-09-13 00:15:56.360 [WARNING][5929] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--zxh96-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f7af872b-9925-42a7-9b22-c2f24c9b6443", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c", Pod:"goldmane-7988f88666-zxh96", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba1177c4dce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:56.401623 containerd[1576]: 2025-09-13 00:15:56.360 [INFO][5929] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Sep 13 00:15:56.401623 containerd[1576]: 2025-09-13 00:15:56.360 [INFO][5929] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring.
ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" iface="eth0" netns="" Sep 13 00:15:56.401623 containerd[1576]: 2025-09-13 00:15:56.360 [INFO][5929] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Sep 13 00:15:56.401623 containerd[1576]: 2025-09-13 00:15:56.360 [INFO][5929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Sep 13 00:15:56.401623 containerd[1576]: 2025-09-13 00:15:56.383 [INFO][5946] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" HandleID="k8s-pod-network.581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Workload="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:56.401623 containerd[1576]: 2025-09-13 00:15:56.383 [INFO][5946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:56.401623 containerd[1576]: 2025-09-13 00:15:56.384 [INFO][5946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:56.401623 containerd[1576]: 2025-09-13 00:15:56.390 [WARNING][5946] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" HandleID="k8s-pod-network.581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Workload="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:56.401623 containerd[1576]: 2025-09-13 00:15:56.390 [INFO][5946] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" HandleID="k8s-pod-network.581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Workload="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:56.401623 containerd[1576]: 2025-09-13 00:15:56.392 [INFO][5946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:56.401623 containerd[1576]: 2025-09-13 00:15:56.398 [INFO][5929] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Sep 13 00:15:56.401623 containerd[1576]: time="2025-09-13T00:15:56.401445331Z" level=info msg="TearDown network for sandbox \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\" successfully" Sep 13 00:15:56.401623 containerd[1576]: time="2025-09-13T00:15:56.401478464Z" level=info msg="StopPodSandbox for \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\" returns successfully" Sep 13 00:15:56.402151 containerd[1576]: time="2025-09-13T00:15:56.402046781Z" level=info msg="RemovePodSandbox for \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\"" Sep 13 00:15:56.402151 containerd[1576]: time="2025-09-13T00:15:56.402099662Z" level=info msg="Forcibly stopping sandbox \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\"" Sep 13 00:15:56.479472 containerd[1576]: 2025-09-13 00:15:56.439 [WARNING][5964] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--zxh96-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f7af872b-9925-42a7-9b22-c2f24c9b6443", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"782daed6b163db5ae681c0bbd4915b4462bf78c1ab88616498c3eabc952d419c", Pod:"goldmane-7988f88666-zxh96", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba1177c4dce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:56.479472 containerd[1576]: 2025-09-13 00:15:56.440 [INFO][5964] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Sep 13 00:15:56.479472 containerd[1576]: 2025-09-13 00:15:56.440 [INFO][5964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" iface="eth0" netns="" Sep 13 00:15:56.479472 containerd[1576]: 2025-09-13 00:15:56.440 [INFO][5964] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Sep 13 00:15:56.479472 containerd[1576]: 2025-09-13 00:15:56.440 [INFO][5964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Sep 13 00:15:56.479472 containerd[1576]: 2025-09-13 00:15:56.464 [INFO][5974] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" HandleID="k8s-pod-network.581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Workload="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:56.479472 containerd[1576]: 2025-09-13 00:15:56.464 [INFO][5974] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:56.479472 containerd[1576]: 2025-09-13 00:15:56.464 [INFO][5974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:56.479472 containerd[1576]: 2025-09-13 00:15:56.471 [WARNING][5974] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" HandleID="k8s-pod-network.581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Workload="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:56.479472 containerd[1576]: 2025-09-13 00:15:56.471 [INFO][5974] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" HandleID="k8s-pod-network.581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Workload="localhost-k8s-goldmane--7988f88666--zxh96-eth0" Sep 13 00:15:56.479472 containerd[1576]: 2025-09-13 00:15:56.473 [INFO][5974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:56.479472 containerd[1576]: 2025-09-13 00:15:56.476 [INFO][5964] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7" Sep 13 00:15:56.516449 containerd[1576]: time="2025-09-13T00:15:56.479527935Z" level=info msg="TearDown network for sandbox \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\" successfully" Sep 13 00:15:56.710813 containerd[1576]: time="2025-09-13T00:15:56.710737881Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:15:56.711029 containerd[1576]: time="2025-09-13T00:15:56.710830849Z" level=info msg="RemovePodSandbox \"581f0d050110224b21296422970ca00e44b97f8145945ed080bd3509b7246bd7\" returns successfully" Sep 13 00:15:56.711448 containerd[1576]: time="2025-09-13T00:15:56.711418052Z" level=info msg="StopPodSandbox for \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\"" Sep 13 00:15:56.959472 containerd[1576]: 2025-09-13 00:15:56.826 [WARNING][5993] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lxtsh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aadb0960-6652-43de-9a7a-a3600967a9bb", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120", Pod:"csi-node-driver-lxtsh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99b7c424d0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:56.959472 containerd[1576]: 2025-09-13 00:15:56.827 [INFO][5993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Sep 13 00:15:56.959472 containerd[1576]: 2025-09-13 00:15:56.827 [INFO][5993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" iface="eth0" netns="" Sep 13 00:15:56.959472 containerd[1576]: 2025-09-13 00:15:56.827 [INFO][5993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Sep 13 00:15:56.959472 containerd[1576]: 2025-09-13 00:15:56.827 [INFO][5993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Sep 13 00:15:56.959472 containerd[1576]: 2025-09-13 00:15:56.858 [INFO][6005] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" HandleID="k8s-pod-network.86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Workload="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:56.959472 containerd[1576]: 2025-09-13 00:15:56.858 [INFO][6005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:56.959472 containerd[1576]: 2025-09-13 00:15:56.858 [INFO][6005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:56.959472 containerd[1576]: 2025-09-13 00:15:56.945 [WARNING][6005] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" HandleID="k8s-pod-network.86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Workload="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:56.959472 containerd[1576]: 2025-09-13 00:15:56.945 [INFO][6005] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" HandleID="k8s-pod-network.86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Workload="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:56.959472 containerd[1576]: 2025-09-13 00:15:56.949 [INFO][6005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:56.959472 containerd[1576]: 2025-09-13 00:15:56.955 [INFO][5993] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Sep 13 00:15:56.960146 containerd[1576]: time="2025-09-13T00:15:56.959541296Z" level=info msg="TearDown network for sandbox \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\" successfully" Sep 13 00:15:56.960146 containerd[1576]: time="2025-09-13T00:15:56.959574900Z" level=info msg="StopPodSandbox for \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\" returns successfully" Sep 13 00:15:56.960976 containerd[1576]: time="2025-09-13T00:15:56.960492976Z" level=info msg="RemovePodSandbox for \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\"" Sep 13 00:15:56.960976 containerd[1576]: time="2025-09-13T00:15:56.960536058Z" level=info msg="Forcibly stopping sandbox \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\"" Sep 13 00:15:57.062421 containerd[1576]: 2025-09-13 00:15:57.005 [WARNING][6027] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lxtsh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aadb0960-6652-43de-9a7a-a3600967a9bb", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 15, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120", Pod:"csi-node-driver-lxtsh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99b7c424d0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:15:57.062421 containerd[1576]: 2025-09-13 00:15:57.006 [INFO][6027] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Sep 13 00:15:57.062421 containerd[1576]: 2025-09-13 00:15:57.006 [INFO][6027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" iface="eth0" netns="" Sep 13 00:15:57.062421 containerd[1576]: 2025-09-13 00:15:57.006 [INFO][6027] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Sep 13 00:15:57.062421 containerd[1576]: 2025-09-13 00:15:57.006 [INFO][6027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Sep 13 00:15:57.062421 containerd[1576]: 2025-09-13 00:15:57.036 [INFO][6035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" HandleID="k8s-pod-network.86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Workload="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:57.062421 containerd[1576]: 2025-09-13 00:15:57.036 [INFO][6035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:15:57.062421 containerd[1576]: 2025-09-13 00:15:57.036 [INFO][6035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:15:57.062421 containerd[1576]: 2025-09-13 00:15:57.051 [WARNING][6035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" HandleID="k8s-pod-network.86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Workload="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:57.062421 containerd[1576]: 2025-09-13 00:15:57.051 [INFO][6035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" HandleID="k8s-pod-network.86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Workload="localhost-k8s-csi--node--driver--lxtsh-eth0" Sep 13 00:15:57.062421 containerd[1576]: 2025-09-13 00:15:57.054 [INFO][6035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:15:57.062421 containerd[1576]: 2025-09-13 00:15:57.058 [INFO][6027] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f" Sep 13 00:15:57.063490 containerd[1576]: time="2025-09-13T00:15:57.062473967Z" level=info msg="TearDown network for sandbox \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\" successfully" Sep 13 00:15:57.165077 containerd[1576]: time="2025-09-13T00:15:57.164987968Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:15:57.165077 containerd[1576]: time="2025-09-13T00:15:57.165090855Z" level=info msg="RemovePodSandbox \"86ec52aa68a314bd6fbff0624391219ce0f5a0fdfae77fd406763fd07bd0477f\" returns successfully" Sep 13 00:16:00.088473 containerd[1576]: time="2025-09-13T00:16:00.088362174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:00.116133 containerd[1576]: time="2025-09-13T00:16:00.116054973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:16:00.141625 containerd[1576]: time="2025-09-13T00:16:00.141515893Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:00.159151 containerd[1576]: time="2025-09-13T00:16:00.159079622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:00.159843 containerd[1576]: time="2025-09-13T00:16:00.159724252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.975586426s" Sep 13 00:16:00.159843 containerd[1576]: time="2025-09-13T00:16:00.159776792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:16:00.161248 containerd[1576]: time="2025-09-13T00:16:00.161211400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 
00:16:00.179549 containerd[1576]: time="2025-09-13T00:16:00.179492949Z" level=info msg="CreateContainer within sandbox \"253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:16:00.198610 containerd[1576]: time="2025-09-13T00:16:00.198522214Z" level=info msg="CreateContainer within sandbox \"253d98d4bc876238feccca025337ebb49556ab88d6a2155e20119312623d7028\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bb77b462541d3785facca25d61a0a84ffc6253a4584f833ae44b5d3c8cf6ca35\"" Sep 13 00:16:00.200277 containerd[1576]: time="2025-09-13T00:16:00.199395621Z" level=info msg="StartContainer for \"bb77b462541d3785facca25d61a0a84ffc6253a4584f833ae44b5d3c8cf6ca35\"" Sep 13 00:16:00.349630 systemd[1]: Started sshd@13-10.0.0.132:22-10.0.0.1:48468.service - OpenSSH per-connection server daemon (10.0.0.1:48468). Sep 13 00:16:00.604383 sshd[6093]: Accepted publickey for core from 10.0.0.1 port 48468 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:00.606284 sshd[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:00.610614 systemd-logind[1564]: New session 14 of user core. Sep 13 00:16:00.621592 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:16:00.717623 containerd[1576]: time="2025-09-13T00:16:00.717560403Z" level=info msg="StartContainer for \"bb77b462541d3785facca25d61a0a84ffc6253a4584f833ae44b5d3c8cf6ca35\" returns successfully" Sep 13 00:16:01.004354 kubelet[2715]: I0913 00:16:01.002768 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-699fc4d87c-lvdzr" podStartSLOduration=34.630828264 podStartE2EDuration="50.002744019s" podCreationTimestamp="2025-09-13 00:15:11 +0000 UTC" firstStartedPulling="2025-09-13 00:15:44.789021171 +0000 UTC m=+52.451949688" lastFinishedPulling="2025-09-13 00:16:00.160936926 +0000 UTC m=+67.823865443" observedRunningTime="2025-09-13 00:16:01.002678003 +0000 UTC m=+68.665606520" watchObservedRunningTime="2025-09-13 00:16:01.002744019 +0000 UTC m=+68.665672536" Sep 13 00:16:01.004354 kubelet[2715]: I0913 00:16:01.002984 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-9794c5674-qhgq9" podStartSLOduration=5.809968443 podStartE2EDuration="26.002979458s" podCreationTimestamp="2025-09-13 00:15:35 +0000 UTC" firstStartedPulling="2025-09-13 00:15:35.990687411 +0000 UTC m=+43.653615928" lastFinishedPulling="2025-09-13 00:15:56.183698415 +0000 UTC m=+63.846626943" observedRunningTime="2025-09-13 00:15:56.938411235 +0000 UTC m=+64.601339783" watchObservedRunningTime="2025-09-13 00:16:01.002979458 +0000 UTC m=+68.665907975" Sep 13 00:16:01.233757 sshd[6093]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:01.238828 systemd[1]: sshd@13-10.0.0.132:22-10.0.0.1:48468.service: Deactivated successfully. Sep 13 00:16:01.242170 systemd-logind[1564]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:16:01.242181 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:16:01.243635 systemd-logind[1564]: Removed session 14. 
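
The pod_startup_latency_tracker line above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (00:16:01.002744019 - 00:15:11 = 50.002744019s), and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling = 15.371915755s, leaving 34.630828264s); the SLO metric deliberately excludes pull time. A short Go check using the timestamps exactly as kubelet prints them:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // Layout matches Go's default time.Time formatting used in the log.
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-09-13 00:15:11 +0000 UTC")
        firstPull := mustParse("2025-09-13 00:15:44.789021171 +0000 UTC")
        lastPull := mustParse("2025-09-13 00:16:00.160936926 +0000 UTC")
        running := mustParse("2025-09-13 00:16:01.002744019 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration: 50.002744019s
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 34.630828264s
        fmt.Println(e2e, slo)
    }
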
Sep 13 00:16:01.453722 kubelet[2715]: E0913 00:16:01.453680 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:01.896693 systemd[1]: run-containerd-runc-k8s.io-bb77b462541d3785facca25d61a0a84ffc6253a4584f833ae44b5d3c8cf6ca35-runc.GcWXAQ.mount: Deactivated successfully. Sep 13 00:16:02.434569 systemd-resolved[1472]: Under memory pressure, flushing caches. Sep 13 00:16:02.435913 systemd-resolved[1472]: Flushed all caches. Sep 13 00:16:02.438371 systemd-journald[1156]: Under memory pressure, flushing caches. Sep 13 00:16:03.149334 containerd[1576]: time="2025-09-13T00:16:03.149225809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:03.150346 containerd[1576]: time="2025-09-13T00:16:03.150276942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:16:03.153083 containerd[1576]: time="2025-09-13T00:16:03.152998179Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:03.155642 containerd[1576]: time="2025-09-13T00:16:03.155541186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:03.156277 containerd[1576]: time="2025-09-13T00:16:03.156227284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.994970817s" Sep 13 00:16:03.156277 containerd[1576]: time="2025-09-13T00:16:03.156272491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:16:03.159566 containerd[1576]: time="2025-09-13T00:16:03.159509169Z" level=info msg="CreateContainer within sandbox \"7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:16:03.179035 containerd[1576]: time="2025-09-13T00:16:03.178967366Z" level=info msg="CreateContainer within sandbox \"7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9228866668183e32ef4be05c551057cd89f9d04266d14fbba0e857470717a3c1\"" Sep 13 00:16:03.179752 containerd[1576]: time="2025-09-13T00:16:03.179713308Z" level=info msg="StartContainer for \"9228866668183e32ef4be05c551057cd89f9d04266d14fbba0e857470717a3c1\"" Sep 13 00:16:03.221770 systemd[1]: run-containerd-runc-k8s.io-9228866668183e32ef4be05c551057cd89f9d04266d14fbba0e857470717a3c1-runc.9xoWSt.mount: Deactivated successfully. 
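
The "CreateContainer within sandbox ... &ContainerMetadata{Name:calico-csi,Attempt:0,}" / "StartContainer" pairs above are kubelet driving the CRI RuntimeService. A hedged sketch of issuing the same two calls directly over the CRI socket; the sandbox ID and image ref are taken from the log, everything else is a minimal stand-in rather than a faithful copy of kubelet's request (which also carries the full PodSandboxConfig, mounts, and security settings):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        create, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: "7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120",
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "calico-csi", Attempt: 0},
                Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/csi:v3.30.3"},
            },
            // kubelet also passes SandboxConfig here; omitted in this sketch,
            // and a real runtime may reject the request without it.
        })
        if err != nil {
            panic(err)
        }
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: create.ContainerId,
        }); err != nil {
            panic(err)
        }
        fmt.Println("started", create.ContainerId)
    }
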
Sep 13 00:16:03.258901 containerd[1576]: time="2025-09-13T00:16:03.258833822Z" level=info msg="StartContainer for \"9228866668183e32ef4be05c551057cd89f9d04266d14fbba0e857470717a3c1\" returns successfully" Sep 13 00:16:03.260602 containerd[1576]: time="2025-09-13T00:16:03.260551105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:16:06.021508 containerd[1576]: time="2025-09-13T00:16:06.021413634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:06.023381 containerd[1576]: time="2025-09-13T00:16:06.023229081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:16:06.025098 containerd[1576]: time="2025-09-13T00:16:06.025003129Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:06.027652 containerd[1576]: time="2025-09-13T00:16:06.027587258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:06.028711 containerd[1576]: time="2025-09-13T00:16:06.028627509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.768032049s" Sep 13 00:16:06.028711 containerd[1576]: time="2025-09-13T00:16:06.028683425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:16:06.031769 containerd[1576]: time="2025-09-13T00:16:06.031715377Z" level=info msg="CreateContainer within sandbox \"7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:16:06.069659 containerd[1576]: time="2025-09-13T00:16:06.069583701Z" level=info msg="CreateContainer within sandbox \"7ea95ba30eed1ee75735b7bd8138e9519cfcece3496e669796e78c5b78835120\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"37bd14bb7a7a4b66b4b29f9509001ac8474c834322968a53886b4510279d9e15\"" Sep 13 00:16:06.070395 containerd[1576]: time="2025-09-13T00:16:06.070307750Z" level=info msg="StartContainer for \"37bd14bb7a7a4b66b4b29f9509001ac8474c834322968a53886b4510279d9e15\"" Sep 13 00:16:06.176032 containerd[1576]: time="2025-09-13T00:16:06.175902571Z" level=info msg="StartContainer for \"37bd14bb7a7a4b66b4b29f9509001ac8474c834322968a53886b4510279d9e15\" returns successfully" Sep 13 00:16:06.248333 systemd[1]: Started sshd@14-10.0.0.132:22-10.0.0.1:48474.service - OpenSSH per-connection server daemon (10.0.0.1:48474). 
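
The node-driver-registrar pull above reports bytes read=14698542 over 2.768032049s; the reported image size (16191197) is a bit larger, presumably because content already present on disk was not re-fetched. Back-of-the-envelope effective throughput from those two logged numbers:

    package main

    import "fmt"

    func main() {
        const bytesRead = 14698542.0 // "active requests=0, bytes read=14698542"
        const seconds = 2.768032049  // "... in 2.768032049s"
        // Roughly 5 MiB/s fetched from ghcr.io during this pull.
        fmt.Printf("effective pull rate: %.2f MiB/s\n", bytesRead/seconds/(1<<20))
    }
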
Sep 13 00:16:06.309992 sshd[6236]: Accepted publickey for core from 10.0.0.1 port 48474 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:06.312360 sshd[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:06.319933 systemd-logind[1564]: New session 15 of user core. Sep 13 00:16:06.329798 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:16:06.816181 kubelet[2715]: I0913 00:16:06.816093 2715 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:16:06.822379 kubelet[2715]: I0913 00:16:06.822340 2715 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:16:06.886670 kubelet[2715]: I0913 00:16:06.885599 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lxtsh" podStartSLOduration=35.12577442 podStartE2EDuration="55.885575878s" podCreationTimestamp="2025-09-13 00:15:11 +0000 UTC" firstStartedPulling="2025-09-13 00:15:45.270293685 +0000 UTC m=+52.933222202" lastFinishedPulling="2025-09-13 00:16:06.030095143 +0000 UTC m=+73.693023660" observedRunningTime="2025-09-13 00:16:06.884364723 +0000 UTC m=+74.547293250" watchObservedRunningTime="2025-09-13 00:16:06.885575878 +0000 UTC m=+74.548504396" Sep 13 00:16:06.902330 sshd[6236]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:06.908481 systemd[1]: sshd@14-10.0.0.132:22-10.0.0.1:48474.service: Deactivated successfully. Sep 13 00:16:06.913211 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:16:06.914476 systemd-logind[1564]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:16:06.915971 systemd-logind[1564]: Removed session 15. Sep 13 00:16:08.450446 systemd-resolved[1472]: Under memory pressure, flushing caches. Sep 13 00:16:08.450485 systemd-resolved[1472]: Flushed all caches. Sep 13 00:16:08.452352 systemd-journald[1156]: Under memory pressure, flushing caches. Sep 13 00:16:11.923906 systemd[1]: Started sshd@15-10.0.0.132:22-10.0.0.1:41356.service - OpenSSH per-connection server daemon (10.0.0.1:41356). Sep 13 00:16:11.966565 sshd[6276]: Accepted publickey for core from 10.0.0.1 port 41356 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:11.968869 sshd[6276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:11.974092 systemd-logind[1564]: New session 16 of user core. Sep 13 00:16:11.979614 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:16:12.277442 sshd[6276]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:12.286021 systemd[1]: sshd@15-10.0.0.132:22-10.0.0.1:41356.service: Deactivated successfully. Sep 13 00:16:12.289763 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:16:12.291991 systemd-logind[1564]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:16:12.293443 systemd-logind[1564]: Removed session 16. Sep 13 00:16:12.484067 systemd-resolved[1472]: Under memory pressure, flushing caches. Sep 13 00:16:12.488477 systemd-journald[1156]: Under memory pressure, flushing caches. Sep 13 00:16:12.484076 systemd-resolved[1472]: Flushed all caches. 
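
The csi_plugin.go lines above are kubelet validating and registering the Tigera CSI driver through its socket. As a sanity check one can probe the same endpoint's identity service directly; this is only a probe, not kubelet's actual registration handshake (which goes through the plugin-registration watcher), and the socket path is taken verbatim from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        csi "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // GetPluginInfo is what a registration handshake validates: driver name + version.
        info, err := csi.NewIdentityClient(conn).GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Println(info.GetName(), info.GetVendorVersion()) // expect "csi.tigera.io"
    }
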
Sep 13 00:16:13.453919 kubelet[2715]: E0913 00:16:13.453813 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:14.530864 systemd-resolved[1472]: Under memory pressure, flushing caches. Sep 13 00:16:14.530874 systemd-resolved[1472]: Flushed all caches. Sep 13 00:16:14.533355 systemd-journald[1156]: Under memory pressure, flushing caches. Sep 13 00:16:16.454704 kubelet[2715]: E0913 00:16:16.454665 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:17.284670 systemd[1]: Started sshd@16-10.0.0.132:22-10.0.0.1:41368.service - OpenSSH per-connection server daemon (10.0.0.1:41368). Sep 13 00:16:17.334628 sshd[6298]: Accepted publickey for core from 10.0.0.1 port 41368 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:17.337019 sshd[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:17.342971 systemd-logind[1564]: New session 17 of user core. Sep 13 00:16:17.348670 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:16:17.843264 sshd[6298]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:17.851761 systemd[1]: Started sshd@17-10.0.0.132:22-10.0.0.1:41370.service - OpenSSH per-connection server daemon (10.0.0.1:41370). Sep 13 00:16:17.852864 systemd[1]: sshd@16-10.0.0.132:22-10.0.0.1:41368.service: Deactivated successfully. Sep 13 00:16:17.857439 systemd-logind[1564]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:16:17.858283 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:16:17.867723 systemd-logind[1564]: Removed session 17. Sep 13 00:16:17.902545 sshd[6311]: Accepted publickey for core from 10.0.0.1 port 41370 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:17.904793 sshd[6311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:17.910300 systemd-logind[1564]: New session 18 of user core. Sep 13 00:16:17.917671 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:16:18.511868 sshd[6311]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:18.521836 systemd[1]: Started sshd@18-10.0.0.132:22-10.0.0.1:41382.service - OpenSSH per-connection server daemon (10.0.0.1:41382). Sep 13 00:16:18.523032 systemd[1]: sshd@17-10.0.0.132:22-10.0.0.1:41370.service: Deactivated successfully. Sep 13 00:16:18.528977 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:16:18.536875 systemd-logind[1564]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:16:18.538545 systemd-logind[1564]: Removed session 18. Sep 13 00:16:18.582996 sshd[6325]: Accepted publickey for core from 10.0.0.1 port 41382 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:18.585162 sshd[6325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:18.591239 systemd-logind[1564]: New session 19 of user core. Sep 13 00:16:18.604804 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:16:21.040013 sshd[6325]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:21.053446 systemd[1]: Started sshd@19-10.0.0.132:22-10.0.0.1:59918.service - OpenSSH per-connection server daemon (10.0.0.1:59918). 
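
The recurring dns.go "Nameserver limits exceeded" errors above come from a hard cap: classic libc resolvers, and kubelet which honors the same limit, use at most three nameserver entries from resolv.conf. The message confirms at least one extra upstream was configured on this host and dropped, leaving the applied line "1.1.1.1 1.0.0.1 8.8.8.8". A small Go sketch of that truncation logic:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // the resolver limit behind the dns.go warning

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("omitting %v\n", servers[maxNameservers:])
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }
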
Sep 13 00:16:21.062538 systemd[1]: sshd@18-10.0.0.132:22-10.0.0.1:41382.service: Deactivated successfully. Sep 13 00:16:21.069073 systemd-logind[1564]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:16:21.069785 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:16:21.074100 systemd-logind[1564]: Removed session 19. Sep 13 00:16:21.125187 sshd[6372]: Accepted publickey for core from 10.0.0.1 port 59918 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:21.129580 sshd[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:21.140862 systemd-logind[1564]: New session 20 of user core. Sep 13 00:16:21.148062 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:16:22.253619 sshd[6372]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:22.270518 systemd[1]: Started sshd@20-10.0.0.132:22-10.0.0.1:59924.service - OpenSSH per-connection server daemon (10.0.0.1:59924). Sep 13 00:16:22.278873 systemd[1]: sshd@19-10.0.0.132:22-10.0.0.1:59918.service: Deactivated successfully. Sep 13 00:16:22.289833 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:16:22.311409 systemd-logind[1564]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:16:22.316687 systemd-logind[1564]: Removed session 20. Sep 13 00:16:22.372585 sshd[6388]: Accepted publickey for core from 10.0.0.1 port 59924 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:22.374846 sshd[6388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:22.381589 systemd-logind[1564]: New session 21 of user core. Sep 13 00:16:22.389749 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:16:22.466575 systemd-resolved[1472]: Under memory pressure, flushing caches. Sep 13 00:16:22.466771 systemd-resolved[1472]: Flushed all caches. Sep 13 00:16:22.468831 systemd-journald[1156]: Under memory pressure, flushing caches. Sep 13 00:16:22.550606 sshd[6388]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:22.557067 systemd[1]: sshd@20-10.0.0.132:22-10.0.0.1:59924.service: Deactivated successfully. Sep 13 00:16:22.562352 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:16:22.563768 systemd-logind[1564]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:16:22.564917 systemd-logind[1564]: Removed session 21. Sep 13 00:16:27.564806 systemd[1]: Started sshd@21-10.0.0.132:22-10.0.0.1:59930.service - OpenSSH per-connection server daemon (10.0.0.1:59930). Sep 13 00:16:27.601974 sshd[6454]: Accepted publickey for core from 10.0.0.1 port 59930 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:27.604113 sshd[6454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:27.609394 systemd-logind[1564]: New session 22 of user core. Sep 13 00:16:27.618588 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:16:27.818722 sshd[6454]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:27.825818 systemd[1]: sshd@21-10.0.0.132:22-10.0.0.1:59930.service: Deactivated successfully. Sep 13 00:16:27.833567 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:16:27.835097 systemd-logind[1564]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:16:27.836425 systemd-logind[1564]: Removed session 22. 
Sep 13 00:16:32.843573 systemd[1]: Started sshd@22-10.0.0.132:22-10.0.0.1:58102.service - OpenSSH per-connection server daemon (10.0.0.1:58102). Sep 13 00:16:32.881754 sshd[6475]: Accepted publickey for core from 10.0.0.1 port 58102 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:32.884280 sshd[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:32.890633 systemd-logind[1564]: New session 23 of user core. Sep 13 00:16:32.897904 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:16:33.025153 sshd[6475]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:33.031240 systemd[1]: sshd@22-10.0.0.132:22-10.0.0.1:58102.service: Deactivated successfully. Sep 13 00:16:33.031735 systemd-logind[1564]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:16:33.036012 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:16:33.037205 systemd-logind[1564]: Removed session 23. Sep 13 00:16:36.454042 kubelet[2715]: E0913 00:16:36.453973 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:38.034886 systemd[1]: Started sshd@23-10.0.0.132:22-10.0.0.1:58112.service - OpenSSH per-connection server daemon (10.0.0.1:58112). Sep 13 00:16:38.068778 sshd[6491]: Accepted publickey for core from 10.0.0.1 port 58112 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:38.072110 sshd[6491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:38.078561 systemd-logind[1564]: New session 24 of user core. Sep 13 00:16:38.082798 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:16:38.216839 sshd[6491]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:38.222823 systemd-logind[1564]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:16:38.223467 systemd[1]: sshd@23-10.0.0.132:22-10.0.0.1:58112.service: Deactivated successfully. Sep 13 00:16:38.227467 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:16:38.230019 systemd-logind[1564]: Removed session 24. Sep 13 00:16:42.454262 kubelet[2715]: E0913 00:16:42.454216 2715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:43.235895 systemd[1]: Started sshd@24-10.0.0.132:22-10.0.0.1:35632.service - OpenSSH per-connection server daemon (10.0.0.1:35632). Sep 13 00:16:43.272298 sshd[6527]: Accepted publickey for core from 10.0.0.1 port 35632 ssh2: RSA SHA256:LFJx1p1T/X2ZG6eRvpjPibrSuxN2W+3RxLha39sy4q0 Sep 13 00:16:43.274593 sshd[6527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:43.279410 systemd-logind[1564]: New session 25 of user core. Sep 13 00:16:43.285648 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 13 00:16:43.419071 sshd[6527]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:43.424716 systemd[1]: sshd@24-10.0.0.132:22-10.0.0.1:35632.service: Deactivated successfully. Sep 13 00:16:43.429503 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:16:43.430995 systemd-logind[1564]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:16:43.432349 systemd-logind[1564]: Removed session 25.