Sep 10 00:48:01.944222 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 22:56:44 -00 2025
Sep 10 00:48:01.944266 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a3dbdfb50e14c8de85dda26f853cdd6055239b4b8b15c08fb0eb00b67ce87a58
Sep 10 00:48:01.944282 kernel: BIOS-provided physical RAM map:
Sep 10 00:48:01.944292 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 10 00:48:01.944301 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 10 00:48:01.944310 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 10 00:48:01.944322 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 10 00:48:01.944332 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 10 00:48:01.944342 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 10 00:48:01.944366 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 10 00:48:01.944389 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Sep 10 00:48:01.944399 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Sep 10 00:48:01.944414 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Sep 10 00:48:01.944424 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Sep 10 00:48:01.944447 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 10 00:48:01.944476 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 10 00:48:01.944492 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 10 00:48:01.944509 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 10 00:48:01.944538 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 10 00:48:01.944549 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 10 00:48:01.944559 kernel: NX (Execute Disable) protection: active
Sep 10 00:48:01.944570 kernel: APIC: Static calls initialized
Sep 10 00:48:01.944580 kernel: efi: EFI v2.7 by EDK II
Sep 10 00:48:01.944591 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Sep 10 00:48:01.944602 kernel: SMBIOS 2.8 present.
Sep 10 00:48:01.944612 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Sep 10 00:48:01.944628 kernel: Hypervisor detected: KVM
Sep 10 00:48:01.944643 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 10 00:48:01.944654 kernel: kvm-clock: using sched offset of 6097345401 cycles
Sep 10 00:48:01.944665 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 10 00:48:01.944676 kernel: tsc: Detected 2794.748 MHz processor
Sep 10 00:48:01.944687 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 10 00:48:01.944699 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 10 00:48:01.944710 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Sep 10 00:48:01.944721 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 10 00:48:01.944732 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 10 00:48:01.944746 kernel: Using GB pages for direct mapping
Sep 10 00:48:01.944756 kernel: Secure boot disabled
Sep 10 00:48:01.944767 kernel: ACPI: Early table checksum verification disabled
Sep 10 00:48:01.944779 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 10 00:48:01.944795 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 00:48:01.944807 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:48:01.944818 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:48:01.944833 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 10 00:48:01.944845 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:48:01.944861 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:48:01.944872 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:48:01.944884 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:48:01.944895 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 10 00:48:01.944907 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 10 00:48:01.944921 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 10 00:48:01.944933 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 10 00:48:01.944944 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 10 00:48:01.944954 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 10 00:48:01.944964 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 10 00:48:01.944976 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 10 00:48:01.944987 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 10 00:48:01.944999 kernel: No NUMA configuration found
Sep 10 00:48:01.945013 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Sep 10 00:48:01.945029 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Sep 10 00:48:01.945040 kernel: Zone ranges:
Sep 10 00:48:01.945052 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 10 00:48:01.945064 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Sep 10 00:48:01.945075 kernel: Normal empty
Sep 10 00:48:01.945087 kernel: Movable zone start for each node
Sep 10 00:48:01.945098 kernel: Early memory node ranges
Sep 10 00:48:01.945110 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 10 00:48:01.945121 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 10 00:48:01.945132 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 10 00:48:01.945147 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Sep 10 00:48:01.945158 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Sep 10 00:48:01.945170 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Sep 10 00:48:01.945181 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Sep 10 00:48:01.945193 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 10 00:48:01.945204 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 10 00:48:01.945216 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 10 00:48:01.945227 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 10 00:48:01.945238 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Sep 10 00:48:01.945304 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 10 00:48:01.945316 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Sep 10 00:48:01.945327 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 10 00:48:01.945339 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 10 00:48:01.945350 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 10 00:48:01.945362 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 10 00:48:01.945382 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 10 00:48:01.945393 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 10 00:48:01.945405 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 10 00:48:01.945420 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 10 00:48:01.945431 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 10 00:48:01.945442 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 10 00:48:01.945454 kernel: TSC deadline timer available
Sep 10 00:48:01.945465 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 10 00:48:01.945477 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 10 00:48:01.945499 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 10 00:48:01.945512 kernel: kvm-guest: setup PV sched yield
Sep 10 00:48:01.945524 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 10 00:48:01.945540 kernel: Booting paravirtualized kernel on KVM
Sep 10 00:48:01.945552 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 10 00:48:01.945564 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 10 00:48:01.945575 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 10 00:48:01.945587 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 10 00:48:01.945598 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 10 00:48:01.945609 kernel: kvm-guest: PV spinlocks enabled
Sep 10 00:48:01.945621 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 10 00:48:01.945634 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a3dbdfb50e14c8de85dda26f853cdd6055239b4b8b15c08fb0eb00b67ce87a58
Sep 10 00:48:01.945653 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 00:48:01.945664 kernel: random: crng init done
Sep 10 00:48:01.945676 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 00:48:01.945688 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 00:48:01.945699 kernel: Fallback order for Node 0: 0
Sep 10 00:48:01.945711 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Sep 10 00:48:01.945722 kernel: Policy zone: DMA32
Sep 10 00:48:01.945734 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 00:48:01.945749 kernel: Memory: 2400596K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 166144K reserved, 0K cma-reserved)
Sep 10 00:48:01.945761 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 00:48:01.945772 kernel: ftrace: allocating 37969 entries in 149 pages
Sep 10 00:48:01.945784 kernel: ftrace: allocated 149 pages with 4 groups
Sep 10 00:48:01.945796 kernel: Dynamic Preempt: voluntary
Sep 10 00:48:01.945818 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 00:48:01.945834 kernel: rcu: RCU event tracing is enabled.
Sep 10 00:48:01.945846 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 00:48:01.945859 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 00:48:01.945871 kernel: Rude variant of Tasks RCU enabled.
Sep 10 00:48:01.945883 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 00:48:01.945895 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 00:48:01.945911 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 00:48:01.945932 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 10 00:48:01.945957 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 00:48:01.945970 kernel: Console: colour dummy device 80x25
Sep 10 00:48:01.945982 kernel: printk: console [ttyS0] enabled
Sep 10 00:48:01.945999 kernel: ACPI: Core revision 20230628
Sep 10 00:48:01.946012 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 10 00:48:01.946030 kernel: APIC: Switch to symmetric I/O mode setup
Sep 10 00:48:01.946042 kernel: x2apic enabled
Sep 10 00:48:01.946054 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 10 00:48:01.946067 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 10 00:48:01.946079 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 10 00:48:01.946091 kernel: kvm-guest: setup PV IPIs
Sep 10 00:48:01.946103 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 10 00:48:01.946119 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 10 00:48:01.946132 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 10 00:48:01.946144 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 10 00:48:01.946156 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 10 00:48:01.946168 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 10 00:48:01.946180 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 10 00:48:01.946192 kernel: Spectre V2 : Mitigation: Retpolines
Sep 10 00:48:01.946205 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 10 00:48:01.946217 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 10 00:48:01.946232 kernel: active return thunk: retbleed_return_thunk
Sep 10 00:48:01.946259 kernel: RETBleed: Mitigation: untrained return thunk
Sep 10 00:48:01.946272 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 10 00:48:01.946284 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 10 00:48:01.946300 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 10 00:48:01.946313 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 10 00:48:01.946325 kernel: active return thunk: srso_return_thunk
Sep 10 00:48:01.946338 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 10 00:48:01.946354 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 10 00:48:01.946366 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 10 00:48:01.946386 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 10 00:48:01.946398 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 10 00:48:01.946411 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 10 00:48:01.946423 kernel: Freeing SMP alternatives memory: 32K
Sep 10 00:48:01.946435 kernel: pid_max: default: 32768 minimum: 301
Sep 10 00:48:01.946447 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 10 00:48:01.946460 kernel: landlock: Up and running.
Sep 10 00:48:01.946475 kernel: SELinux: Initializing.
Sep 10 00:48:01.946487 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:48:01.946500 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:48:01.946512 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 10 00:48:01.946524 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:48:01.946537 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:48:01.946549 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:48:01.946561 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 10 00:48:01.946573 kernel: ... version: 0
Sep 10 00:48:01.946588 kernel: ... bit width: 48
Sep 10 00:48:01.946600 kernel: ... generic registers: 6
Sep 10 00:48:01.946613 kernel: ... value mask: 0000ffffffffffff
Sep 10 00:48:01.946625 kernel: ... max period: 00007fffffffffff
Sep 10 00:48:01.946637 kernel: ... fixed-purpose events: 0
Sep 10 00:48:01.946649 kernel: ... event mask: 000000000000003f
Sep 10 00:48:01.946661 kernel: signal: max sigframe size: 1776
Sep 10 00:48:01.946673 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 00:48:01.946686 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 00:48:01.946701 kernel: smp: Bringing up secondary CPUs ...
Sep 10 00:48:01.946713 kernel: smpboot: x86: Booting SMP configuration:
Sep 10 00:48:01.946725 kernel: .... node #0, CPUs: #1 #2 #3
Sep 10 00:48:01.946737 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 00:48:01.946749 kernel: smpboot: Max logical packages: 1
Sep 10 00:48:01.946761 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 10 00:48:01.946773 kernel: devtmpfs: initialized
Sep 10 00:48:01.946785 kernel: x86/mm: Memory block size: 128MB
Sep 10 00:48:01.946798 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 10 00:48:01.946810 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 10 00:48:01.946826 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Sep 10 00:48:01.946838 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 10 00:48:01.946850 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 10 00:48:01.946863 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 00:48:01.946875 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 00:48:01.946887 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 00:48:01.946899 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 00:48:01.946911 kernel: audit: initializing netlink subsys (disabled)
Sep 10 00:48:01.946927 kernel: audit: type=2000 audit(1757465280.888:1): state=initialized audit_enabled=0 res=1
Sep 10 00:48:01.946939 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 00:48:01.946950 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 10 00:48:01.946961 kernel: cpuidle: using governor menu
Sep 10 00:48:01.946973 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 00:48:01.946985 kernel: dca service started, version 1.12.1
Sep 10 00:48:01.946997 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 10 00:48:01.947010 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 10 00:48:01.947022 kernel: PCI: Using configuration type 1 for base access
Sep 10 00:48:01.947038 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 10 00:48:01.947050 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 00:48:01.947062 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 00:48:01.947074 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 00:48:01.947087 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 00:48:01.947099 kernel: ACPI: Added _OSI(Module Device)
Sep 10 00:48:01.947111 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 00:48:01.947123 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 00:48:01.947136 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 00:48:01.947151 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 10 00:48:01.947163 kernel: ACPI: Interpreter enabled
Sep 10 00:48:01.947175 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 10 00:48:01.947188 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 10 00:48:01.947200 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 10 00:48:01.947212 kernel: PCI: Using E820 reservations for host bridge windows
Sep 10 00:48:01.947224 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 10 00:48:01.947236 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 00:48:01.947513 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 00:48:01.947699 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 10 00:48:01.947868 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 10 00:48:01.947884 kernel: PCI host bridge to bus 0000:00
Sep 10 00:48:01.948062 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 10 00:48:01.948208 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 10 00:48:01.948402 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 10 00:48:01.948563 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 10 00:48:01.948726 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 10 00:48:01.948887 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Sep 10 00:48:01.949046 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 00:48:01.949304 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 10 00:48:01.949518 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 10 00:48:01.949696 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 10 00:48:01.949860 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 10 00:48:01.950018 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 10 00:48:01.950170 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 10 00:48:01.950355 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 10 00:48:01.950556 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 10 00:48:01.950741 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 10 00:48:01.950914 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 10 00:48:01.951078 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Sep 10 00:48:01.951276 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 10 00:48:01.951449 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 10 00:48:01.951594 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 10 00:48:01.951724 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Sep 10 00:48:01.951860 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 10 00:48:01.952010 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 10 00:48:01.952143 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 10 00:48:01.952303 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Sep 10 00:48:01.952447 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 10 00:48:01.952589 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 10 00:48:01.952718 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 10 00:48:01.952855 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 10 00:48:01.953001 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 10 00:48:01.953133 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 10 00:48:01.953340 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 10 00:48:01.953483 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 10 00:48:01.953494 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 10 00:48:01.953502 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 10 00:48:01.953510 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 10 00:48:01.953523 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 10 00:48:01.953531 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 10 00:48:01.953539 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 10 00:48:01.953546 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 10 00:48:01.953554 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 10 00:48:01.953564 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 10 00:48:01.953574 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 10 00:48:01.953584 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 10 00:48:01.953593 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 10 00:48:01.953607 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 10 00:48:01.953617 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 10 00:48:01.953627 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 10 00:48:01.953637 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 10 00:48:01.953646 kernel: iommu: Default domain type: Translated
Sep 10 00:48:01.953657 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 10 00:48:01.953667 kernel: efivars: Registered efivars operations
Sep 10 00:48:01.953677 kernel: PCI: Using ACPI for IRQ routing
Sep 10 00:48:01.953687 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 10 00:48:01.953696 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 10 00:48:01.953711 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Sep 10 00:48:01.953721 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Sep 10 00:48:01.953731 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Sep 10 00:48:01.953893 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 10 00:48:01.954053 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 10 00:48:01.954236 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 10 00:48:01.954277 kernel: vgaarb: loaded
Sep 10 00:48:01.954285 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 10 00:48:01.954298 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 10 00:48:01.954306 kernel: clocksource: Switched to clocksource kvm-clock
Sep 10 00:48:01.954314 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 00:48:01.954322 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 00:48:01.954330 kernel: pnp: PnP ACPI init
Sep 10 00:48:01.954483 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 10 00:48:01.954496 kernel: pnp: PnP ACPI: found 6 devices
Sep 10 00:48:01.954504 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 10 00:48:01.954516 kernel: NET: Registered PF_INET protocol family
Sep 10 00:48:01.954524 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 00:48:01.954532 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 00:48:01.954540 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 00:48:01.954548 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 00:48:01.954555 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 00:48:01.954563 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 00:48:01.954571 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:48:01.954579 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:48:01.954589 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 00:48:01.954597 kernel: NET: Registered PF_XDP protocol family
Sep 10 00:48:01.954726 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 10 00:48:01.954857 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 10 00:48:01.954984 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 10 00:48:01.955110 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 10 00:48:01.955227 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 10 00:48:01.955404 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 10 00:48:01.955527 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 10 00:48:01.955643 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Sep 10 00:48:01.955654 kernel: PCI: CLS 0 bytes, default 64
Sep 10 00:48:01.955662 kernel: Initialise system trusted keyrings
Sep 10 00:48:01.955670 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 00:48:01.955678 kernel: Key type asymmetric registered
Sep 10 00:48:01.955686 kernel: Asymmetric key parser 'x509' registered
Sep 10 00:48:01.955693 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 10 00:48:01.955701 kernel: io scheduler mq-deadline registered
Sep 10 00:48:01.955713 kernel: io scheduler kyber registered
Sep 10 00:48:01.955721 kernel: io scheduler bfq registered
Sep 10 00:48:01.955728 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 10 00:48:01.955737 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 10 00:48:01.955745 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 10 00:48:01.955752 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 10 00:48:01.955760 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 00:48:01.955768 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 10 00:48:01.955776 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 10 00:48:01.955787 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 10 00:48:01.955794 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 10 00:48:01.955924 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 10 00:48:01.955936 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 10 00:48:01.956066 kernel: rtc_cmos 00:04: registered as rtc0
Sep 10 00:48:01.956187 kernel: rtc_cmos 00:04: setting system clock to 2025-09-10T00:48:01 UTC (1757465281)
Sep 10 00:48:01.956324 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 10 00:48:01.956335 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 10 00:48:01.956348 kernel: efifb: probing for efifb
Sep 10 00:48:01.956356 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Sep 10 00:48:01.956364 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Sep 10 00:48:01.956381 kernel: efifb: scrolling: redraw
Sep 10 00:48:01.956389 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Sep 10 00:48:01.956397 kernel: Console: switching to colour frame buffer device 100x37
Sep 10 00:48:01.956422 kernel: fb0: EFI VGA frame buffer device
Sep 10 00:48:01.956433 kernel: pstore: Using crash dump compression: deflate
Sep 10 00:48:01.956441 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 10 00:48:01.956452 kernel: NET: Registered PF_INET6 protocol family
Sep 10 00:48:01.956459 kernel: Segment Routing with IPv6
Sep 10 00:48:01.956467 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 00:48:01.956475 kernel: NET: Registered PF_PACKET protocol family
Sep 10 00:48:01.956483 kernel: Key type dns_resolver registered
Sep 10 00:48:01.956491 kernel: IPI shorthand broadcast: enabled
Sep 10 00:48:01.956499 kernel: sched_clock: Marking stable (874002663, 127042272)->(1079931387, -78886452)
Sep 10 00:48:01.956507 kernel: registered taskstats version 1
Sep 10 00:48:01.956515 kernel: Loading compiled-in X.509 certificates
Sep 10 00:48:01.956526 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: a614f1c62f27a560d677bbf0283703118c9005ec'
Sep 10 00:48:01.956534 kernel: Key type .fscrypt registered
Sep 10 00:48:01.956542 kernel: Key type fscrypt-provisioning registered
Sep 10 00:48:01.956550 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 00:48:01.956558 kernel: ima: Allocated hash algorithm: sha1
Sep 10 00:48:01.956566 kernel: ima: No architecture policies found
Sep 10 00:48:01.956576 kernel: clk: Disabling unused clocks
Sep 10 00:48:01.956586 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 10 00:48:01.956599 kernel: Write protecting the kernel read-only data: 36864k
Sep 10 00:48:01.956609 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 10 00:48:01.956619 kernel: Run /init as init process
Sep 10 00:48:01.956629 kernel: with arguments:
Sep 10 00:48:01.956639 kernel: /init
Sep 10 00:48:01.956649 kernel: with environment:
Sep 10 00:48:01.956658 kernel: HOME=/
Sep 10 00:48:01.956668 kernel: TERM=linux
Sep 10 00:48:01.956679 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 00:48:01.956694 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 10 00:48:01.956707 systemd[1]: Detected virtualization kvm.
Sep 10 00:48:01.956718 systemd[1]: Detected architecture x86-64.
Sep 10 00:48:01.956728 systemd[1]: Running in initrd.
Sep 10 00:48:01.956745 systemd[1]: No hostname configured, using default hostname.
Sep 10 00:48:01.956756 systemd[1]: Hostname set to .
Sep 10 00:48:01.956767 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:48:01.956777 systemd[1]: Queued start job for default target initrd.target.
Sep 10 00:48:01.956788 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:48:01.956798 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:48:01.956809 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 00:48:01.956817 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 00:48:01.956829 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 00:48:01.956837 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 00:48:01.956848 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 00:48:01.956856 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 00:48:01.956865 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:48:01.956874 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:48:01.956882 systemd[1]: Reached target paths.target - Path Units.
Sep 10 00:48:01.956894 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 00:48:01.956902 systemd[1]: Reached target swap.target - Swaps.
Sep 10 00:48:01.956911 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 00:48:01.956920 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 00:48:01.956928 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 00:48:01.956937 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 00:48:01.956945 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 10 00:48:01.956955 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:48:01.956967 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:48:01.956983 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:48:01.956992 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 00:48:01.957001 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 00:48:01.957010 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 00:48:01.957019 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 00:48:01.957027 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 00:48:01.957036 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 00:48:01.957044 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 00:48:01.957056 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:48:01.957064 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 00:48:01.957073 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:48:01.957081 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 00:48:01.957105 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 00:48:01.957139 systemd-journald[192]: Collecting audit messages is disabled.
Sep 10 00:48:01.957158 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:48:01.957167 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 00:48:01.957176 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:48:01.957188 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 00:48:01.957197 systemd-journald[192]: Journal started
Sep 10 00:48:01.957215 systemd-journald[192]: Runtime Journal (/run/log/journal/bef47828f078418ab1c6f9f84cbe0c91) is 6.0M, max 48.3M, 42.2M free.
Sep 10 00:48:01.943282 systemd-modules-load[194]: Inserted module 'overlay'
Sep 10 00:48:01.958933 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 00:48:01.966017 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 00:48:01.966301 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:48:01.973268 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 00:48:01.974610 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:48:01.977725 kernel: Bridge firewalling registered
Sep 10 00:48:01.977633 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 00:48:01.977730 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 10 00:48:01.980822 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:48:01.982315 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:48:01.992536 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:48:01.995669 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:48:01.998551 dracut-cmdline[220]: dracut-dracut-053
Sep 10 00:48:02.001847 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a3dbdfb50e14c8de85dda26f853cdd6055239b4b8b15c08fb0eb00b67ce87a58
Sep 10 00:48:02.006381 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 00:48:02.051488 systemd-resolved[237]: Positive Trust Anchors:
Sep 10 00:48:02.051504 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:48:02.051535 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 00:48:02.062293 systemd-resolved[237]: Defaulting to hostname 'linux'.
Sep 10 00:48:02.064315 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 00:48:02.064495 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:48:02.103272 kernel: SCSI subsystem initialized
Sep 10 00:48:02.114265 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 00:48:02.127277 kernel: iscsi: registered transport (tcp)
Sep 10 00:48:02.153348 kernel: iscsi: registered transport (qla4xxx)
Sep 10 00:48:02.153443 kernel: QLogic iSCSI HBA Driver
Sep 10 00:48:02.221431 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 00:48:02.238449 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 00:48:02.266148 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 00:48:02.266210 kernel: device-mapper: uevent: version 1.0.3
Sep 10 00:48:02.266239 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 10 00:48:02.311295 kernel: raid6: avx2x4 gen() 26690 MB/s
Sep 10 00:48:02.328273 kernel: raid6: avx2x2 gen() 25851 MB/s
Sep 10 00:48:02.345627 kernel: raid6: avx2x1 gen() 16185 MB/s
Sep 10 00:48:02.345649 kernel: raid6: using algorithm avx2x4 gen() 26690 MB/s
Sep 10 00:48:02.363438 kernel: raid6: .... xor() 7458 MB/s, rmw enabled
Sep 10 00:48:02.363458 kernel: raid6: using avx2x2 recovery algorithm
Sep 10 00:48:02.385267 kernel: xor: automatically using best checksumming function avx
Sep 10 00:48:02.552284 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 00:48:02.567613 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 00:48:02.580443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:48:02.593104 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Sep 10 00:48:02.597951 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:48:02.606486 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 00:48:02.622429 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation
Sep 10 00:48:02.656132 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 00:48:02.671396 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 00:48:02.745821 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:48:02.754461 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 00:48:02.770632 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 00:48:02.771394 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:48:02.775182 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:48:02.777601 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 00:48:02.791275 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 10 00:48:02.802492 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 00:48:02.807759 kernel: cryptd: max_cpu_qlen set to 1000
Sep 10 00:48:02.807861 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 00:48:02.807889 kernel: GPT:9289727 != 19775487
Sep 10 00:48:02.807914 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 00:48:02.807936 kernel: GPT:9289727 != 19775487
Sep 10 00:48:02.807958 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 00:48:02.807980 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:48:02.787397 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 00:48:02.802337 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:48:02.817893 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 10 00:48:02.817931 kernel: AES CTR mode by8 optimization enabled
Sep 10 00:48:02.832270 kernel: libata version 3.00 loaded.
Sep 10 00:48:02.840278 kernel: ahci 0000:00:1f.2: version 3.0
Sep 10 00:48:02.842268 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 10 00:48:02.844280 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (461)
Sep 10 00:48:02.845810 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 10 00:48:02.846062 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 10 00:48:02.847910 kernel: BTRFS: device fsid 47ffa5df-7ab2-4f1a-b68f-595717991426 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (455)
Sep 10 00:48:02.852268 kernel: scsi host0: ahci
Sep 10 00:48:02.853259 kernel: scsi host1: ahci
Sep 10 00:48:02.854260 kernel: scsi host2: ahci
Sep 10 00:48:02.855279 kernel: scsi host3: ahci
Sep 10 00:48:02.857281 kernel: scsi host4: ahci
Sep 10 00:48:02.859493 kernel: scsi host5: ahci
Sep 10 00:48:02.859670 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Sep 10 00:48:02.859682 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Sep 10 00:48:02.860422 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Sep 10 00:48:02.861696 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 00:48:02.867011 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Sep 10 00:48:02.867027 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Sep 10 00:48:02.867037 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Sep 10 00:48:02.878722 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 00:48:02.890097 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 00:48:02.890198 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 00:48:02.900416 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 00:48:02.909395 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 00:48:02.910496 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:48:02.910555 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:48:02.912034 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:48:02.912514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:48:02.912567 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:48:02.917093 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:48:02.918510 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:48:02.931140 disk-uuid[562]: Primary Header is updated.
Sep 10 00:48:02.931140 disk-uuid[562]: Secondary Entries is updated.
Sep 10 00:48:02.931140 disk-uuid[562]: Secondary Header is updated.
Sep 10 00:48:02.935268 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:48:02.940268 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:48:02.940704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:48:02.947270 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:48:02.948431 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:48:02.973058 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:48:03.169271 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 10 00:48:03.169340 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 10 00:48:03.170277 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 10 00:48:03.171265 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 10 00:48:03.171288 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 10 00:48:03.172262 kernel: ata3.00: applying bridge limits
Sep 10 00:48:03.172277 kernel: ata3.00: configured for UDMA/100
Sep 10 00:48:03.173262 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 10 00:48:03.178276 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 10 00:48:03.178299 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 10 00:48:03.226807 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 10 00:48:03.227156 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 10 00:48:03.241268 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 10 00:48:03.986041 disk-uuid[565]: The operation has completed successfully.
Sep 10 00:48:03.987153 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:48:04.015798 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 00:48:04.015929 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 00:48:04.036399 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 00:48:04.042075 sh[593]: Success
Sep 10 00:48:04.054273 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 10 00:48:04.088073 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 00:48:04.102039 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 00:48:04.104743 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 00:48:04.120563 kernel: BTRFS info (device dm-0): first mount of filesystem 47ffa5df-7ab2-4f1a-b68f-595717991426
Sep 10 00:48:04.120626 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:48:04.120643 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 10 00:48:04.121532 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 00:48:04.122842 kernel: BTRFS info (device dm-0): using free space tree
Sep 10 00:48:04.127615 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 00:48:04.128472 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 00:48:04.139431 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 00:48:04.141782 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 00:48:04.151746 kernel: BTRFS info (device vda6): first mount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:48:04.151773 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:48:04.151784 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:48:04.155286 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:48:04.165446 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 10 00:48:04.167166 kernel: BTRFS info (device vda6): last unmount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:48:04.176622 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 00:48:04.182540 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 00:48:04.241285 ignition[680]: Ignition 2.19.0
Sep 10 00:48:04.241299 ignition[680]: Stage: fetch-offline
Sep 10 00:48:04.241347 ignition[680]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:48:04.241358 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:48:04.241451 ignition[680]: parsed url from cmdline: ""
Sep 10 00:48:04.241455 ignition[680]: no config URL provided
Sep 10 00:48:04.241460 ignition[680]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 00:48:04.241470 ignition[680]: no config at "/usr/lib/ignition/user.ign"
Sep 10 00:48:04.241497 ignition[680]: op(1): [started] loading QEMU firmware config module
Sep 10 00:48:04.241503 ignition[680]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 00:48:04.253159 ignition[680]: op(1): [finished] loading QEMU firmware config module
Sep 10 00:48:04.290283 ignition[680]: parsing config with SHA512: 6b465b659169437660b0a9bd25026a96ad457d591d6324363b4c3f9b0dfc7aafe71a2c06cf5acfe0e72685d6163e705d9caba1c7ee00ec84e79557893e46b92e
Sep 10 00:48:04.290535 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 00:48:04.299157 unknown[680]: fetched base config from "system"
Sep 10 00:48:04.299415 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 00:48:04.302372 unknown[680]: fetched user config from "qemu"
Sep 10 00:48:04.303741 ignition[680]: fetch-offline: fetch-offline passed
Sep 10 00:48:04.303847 ignition[680]: Ignition finished successfully
Sep 10 00:48:04.308643 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 00:48:04.326401 systemd-networkd[781]: lo: Link UP
Sep 10 00:48:04.326413 systemd-networkd[781]: lo: Gained carrier
Sep 10 00:48:04.328130 systemd-networkd[781]: Enumeration completed
Sep 10 00:48:04.328234 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 00:48:04.328566 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:48:04.328571 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:48:04.329554 systemd-networkd[781]: eth0: Link UP
Sep 10 00:48:04.329562 systemd-networkd[781]: eth0: Gained carrier
Sep 10 00:48:04.329569 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:48:04.331644 systemd[1]: Reached target network.target - Network.
Sep 10 00:48:04.334463 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 00:48:04.341436 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 00:48:04.349310 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.156/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:48:04.355656 ignition[785]: Ignition 2.19.0
Sep 10 00:48:04.355668 ignition[785]: Stage: kargs
Sep 10 00:48:04.355839 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:48:04.355851 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:48:04.356691 ignition[785]: kargs: kargs passed
Sep 10 00:48:04.356738 ignition[785]: Ignition finished successfully
Sep 10 00:48:04.360571 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 00:48:04.374454 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 00:48:04.388537 ignition[794]: Ignition 2.19.0
Sep 10 00:48:04.388550 ignition[794]: Stage: disks
Sep 10 00:48:04.388722 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:48:04.388734 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:48:04.389813 ignition[794]: disks: disks passed
Sep 10 00:48:04.391725 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 00:48:04.389863 ignition[794]: Ignition finished successfully
Sep 10 00:48:04.393783 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 00:48:04.395616 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 00:48:04.397587 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 00:48:04.399606 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 00:48:04.401775 systemd[1]: Reached target basic.target - Basic System.
Sep 10 00:48:04.410463 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 00:48:04.424728 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 10 00:48:04.431074 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 00:48:04.445358 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 00:48:04.534270 kernel: EXT4-fs (vda9): mounted filesystem 0a9bf3c7-f8cd-4d40-b949-283957ba2f96 r/w with ordered data mode. Quota mode: none.
Sep 10 00:48:04.535094 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 00:48:04.537122 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 00:48:04.553330 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:48:04.555083 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 00:48:04.556170 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 00:48:04.556208 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 00:48:04.568140 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812)
Sep 10 00:48:04.568163 kernel: BTRFS info (device vda6): first mount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:48:04.568175 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:48:04.568185 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:48:04.556229 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:48:04.563890 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 00:48:04.569840 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 00:48:04.573426 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:48:04.574731 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 00:48:04.610428 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 00:48:04.615288 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Sep 10 00:48:04.620032 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 00:48:04.625831 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 00:48:04.720885 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 00:48:04.740361 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 00:48:04.741170 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 00:48:04.749282 kernel: BTRFS info (device vda6): last unmount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:48:04.768992 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 00:48:04.771640 ignition[925]: INFO : Ignition 2.19.0
Sep 10 00:48:04.771640 ignition[925]: INFO : Stage: mount
Sep 10 00:48:04.771640 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:48:04.771640 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:48:04.777613 ignition[925]: INFO : mount: mount passed
Sep 10 00:48:04.777613 ignition[925]: INFO : Ignition finished successfully
Sep 10 00:48:04.774653 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 00:48:04.782428 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 00:48:05.120096 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 00:48:05.132410 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:48:05.139288 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Sep 10 00:48:05.139344 kernel: BTRFS info (device vda6): first mount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:48:05.140574 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:48:05.140591 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:48:05.144268 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:48:05.145892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
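The `cut: ... No such file or directory` lines appear to be initrd-setup-root probing the account databases under /sysroot before they exist on a first boot; `cut -d: -f1` over passwd-style files is the conventional way to list their names. A sketch of the same extraction with a guard for the missing-file case (the helper name is ours, not Flatcar's):

```python
from pathlib import Path

def field1(path: str) -> list[str]:
    """Equivalent of `cut -d: -f1 <path>`: the first colon-separated
    field of each line, e.g. login names from an /etc/passwd file."""
    p = Path(path)
    if not p.exists():
        # Mirrors the log: on first boot /sysroot/etc/passwd is absent.
        print(f"cut: {path}: No such file or directory")
        return []
    return [line.split(":", 1)[0] for line in p.read_text().splitlines() if line]

print(field1("/sysroot/etc/passwd"))
```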
Sep 10 00:48:05.179371 ignition[955]: INFO : Ignition 2.19.0
Sep 10 00:48:05.179371 ignition[955]: INFO : Stage: files
Sep 10 00:48:05.181641 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:48:05.181641 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:48:05.181641 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 00:48:05.181641 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 00:48:05.181641 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 00:48:05.188598 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 00:48:05.188598 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 00:48:05.188598 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 00:48:05.188598 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 10 00:48:05.188598 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 10 00:48:05.188598 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 10 00:48:05.188598 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 10 00:48:05.184236 unknown[955]: wrote ssh authorized keys file for user: core
Sep 10 00:48:05.236076 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 10 00:48:05.375758 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 10 00:48:05.375758 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 00:48:05.379948 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 00:48:05.379948 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:48:05.379948 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:48:05.379948 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:48:05.379948 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:48:05.379948 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:48:05.379948 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:48:05.379948 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:48:05.379948 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:48:05.379948 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:48:05.379948 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:48:05.379948 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:48:05.379948 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 10 00:48:05.678600 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 10 00:48:05.794633 systemd-networkd[781]: eth0: Gained IPv6LL
Sep 10 00:48:06.097929 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:48:06.097929 ignition[955]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 10 00:48:06.102458 ignition[955]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 10 00:48:06.102458 ignition[955]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 10 00:48:06.102458 ignition[955]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 10 00:48:06.102458 ignition[955]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 10 00:48:06.102458 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:48:06.102458 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:48:06.102458 ignition[955]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 10 00:48:06.102458 ignition[955]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Sep 10 00:48:06.102458 ignition[955]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:48:06.102458 ignition[955]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:48:06.102458 ignition[955]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Sep 10 00:48:06.102458 ignition[955]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:48:06.128907 ignition[955]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:48:06.133915 ignition[955]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:48:06.135625 ignition[955]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:48:06.135625 ignition[955]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 00:48:06.135625 ignition[955]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 00:48:06.135625 ignition[955]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:48:06.135625 ignition[955]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:48:06.135625 ignition[955]: INFO : files: files passed
Sep 10 00:48:06.135625 ignition[955]: INFO : Ignition finished successfully
Sep 10 00:48:06.137538 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 00:48:06.154411 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 00:48:06.156376 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 00:48:06.158707 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 00:48:06.158841 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 00:48:06.168211 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 00:48:06.171124 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:48:06.171124 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:48:06.174867 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:48:06.177910 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 00:48:06.180976 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 00:48:06.199513 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 00:48:06.226987 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 00:48:06.227134 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 00:48:06.228319 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 10 00:48:06.230425 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 10 00:48:06.230776 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 10 00:48:06.235510 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 10 00:48:06.257365 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 00:48:06.265513 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 10 00:48:06.277843 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:48:06.277995 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:48:06.280296 systemd[1]: Stopped target timers.target - Timer Units.
Sep 10 00:48:06.280761 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 00:48:06.280876 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 00:48:06.287214 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 10 00:48:06.289179 systemd[1]: Stopped target basic.target - Basic System.
Sep 10 00:48:06.289336 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
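For reference, the files-stage operations above (writing files, fetching remote contents, installing units, drop-ins, and presets) are all driven by a declarative Ignition config. Below is a hand-written, illustrative fragment shaped like the public Ignition v3 config spec that would produce operations like op(4) and op(e); the spec version accepted by this particular Ignition build is an assumption, and none of this is recovered from the actual config this VM booted with:

```python
import json

# Illustrative only: field names follow the public Ignition v3 config
# spec; the unit body and spec version are placeholders.
config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
                },
            }
        ]
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,  # matches op(14): preset enabled
                "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",
            }
        ]
    },
}

print(json.dumps(config, indent=2))
```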
Sep 10 00:48:06.289659 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:48:06.289962 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 10 00:48:06.290310 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 10 00:48:06.290779 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:48:06.291099 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 10 00:48:06.291604 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 10 00:48:06.291904 systemd[1]: Stopped target swap.target - Swaps.
Sep 10 00:48:06.292198 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 00:48:06.292325 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:48:06.293037 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:48:06.293535 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:48:06.293826 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 10 00:48:06.293939 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:48:06.313054 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 10 00:48:06.313166 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 10 00:48:06.314486 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 00:48:06.314596 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 00:48:06.314919 systemd[1]: Stopped target paths.target - Path Units.
Sep 10 00:48:06.315153 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 00:48:06.317392 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:48:06.321207 systemd[1]: Stopped target slices.target - Slice Units.
Sep 10 00:48:06.323973 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 10 00:48:06.324348 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 00:48:06.324493 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 00:48:06.324812 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 00:48:06.324935 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 00:48:06.331453 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 10 00:48:06.331620 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 00:48:06.332596 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 10 00:48:06.332730 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 10 00:48:06.342458 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 10 00:48:06.343445 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 10 00:48:06.345950 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 10 00:48:06.347022 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:48:06.349312 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 10 00:48:06.349428 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 00:48:06.356719 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 10 00:48:06.356875 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 10 00:48:06.360785 ignition[1010]: INFO : Ignition 2.19.0
Sep 10 00:48:06.360785 ignition[1010]: INFO : Stage: umount
Sep 10 00:48:06.360785 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:48:06.360785 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:48:06.360785 ignition[1010]: INFO : umount: umount passed
Sep 10 00:48:06.360785 ignition[1010]: INFO : Ignition finished successfully
Sep 10 00:48:06.362467 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 00:48:06.362635 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 10 00:48:06.364900 systemd[1]: Stopped target network.target - Network.
Sep 10 00:48:06.366714 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 00:48:06.366782 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 10 00:48:06.368523 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 00:48:06.368583 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 10 00:48:06.370332 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 00:48:06.370400 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 10 00:48:06.372475 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 10 00:48:06.372535 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 10 00:48:06.374654 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 10 00:48:06.376506 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 10 00:48:06.380098 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 00:48:06.384325 systemd-networkd[781]: eth0: DHCPv6 lease lost
Sep 10 00:48:06.386743 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 00:48:06.386884 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 10 00:48:06.389624 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 00:48:06.389791 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 10 00:48:06.392389 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 00:48:06.392474 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:48:06.401425 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 10 00:48:06.403409 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 00:48:06.403472 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 00:48:06.405951 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 00:48:06.406018 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:48:06.408397 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 00:48:06.408449 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:48:06.409647 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 10 00:48:06.409699 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:48:06.411885 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:48:06.422601 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 00:48:06.422756 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 10 00:48:06.428150 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 00:48:06.428374 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:48:06.430554 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 00:48:06.430609 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:48:06.432642 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 00:48:06.432686 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:48:06.434625 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 00:48:06.434677 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 00:48:06.436815 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 00:48:06.436867 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 10 00:48:06.438812 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:48:06.438863 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:48:06.456543 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 10 00:48:06.459201 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 10 00:48:06.460446 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:48:06.463427 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 10 00:48:06.464762 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 00:48:06.467875 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 00:48:06.469029 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:48:06.471820 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:48:06.472868 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:48:06.475863 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 10 00:48:06.477191 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 10 00:48:06.560846 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 00:48:06.561008 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 10 00:48:06.564788 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 10 00:48:06.564910 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 00:48:06.564997 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 10 00:48:06.581428 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 10 00:48:06.588030 systemd[1]: Switching root.
Sep 10 00:48:06.619681 systemd-journald[192]: Journal stopped
Sep 10 00:48:07.823528 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Sep 10 00:48:07.823615 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 00:48:07.823637 kernel: SELinux: policy capability open_perms=1
Sep 10 00:48:07.823652 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 00:48:07.823671 kernel: SELinux: policy capability always_check_network=0
Sep 10 00:48:07.823687 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 00:48:07.823708 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 00:48:07.823772 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 00:48:07.823789 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 00:48:07.823828 kernel: audit: type=1403 audit(1757465287.083:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 10 00:48:07.823857 systemd[1]: Successfully loaded SELinux policy in 41.354ms.
Sep 10 00:48:07.823890 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.942ms.
Sep 10 00:48:07.823907 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 10 00:48:07.823924 systemd[1]: Detected virtualization kvm.
Sep 10 00:48:07.823965 systemd[1]: Detected architecture x86-64.
Sep 10 00:48:07.824022 systemd[1]: Detected first boot.
Sep 10 00:48:07.824042 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:48:07.825863 zram_generator::config[1074]: No configuration found.
Sep 10 00:48:07.825887 systemd[1]: Populated /etc with preset unit settings.
Sep 10 00:48:07.825904 systemd[1]: Queued start job for default target multi-user.target.
Sep 10 00:48:07.825922 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 10 00:48:07.825939 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 10 00:48:07.825973 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 10 00:48:07.825996 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 10 00:48:07.826012 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 10 00:48:07.826046 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 10 00:48:07.826083 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 10 00:48:07.826104 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 10 00:48:07.826121 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 10 00:48:07.826137 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:48:07.826157 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:48:07.826208 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 10 00:48:07.826261 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 10 00:48:07.826291 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 10 00:48:07.826334 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 00:48:07.826354 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 10 00:48:07.826371 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:48:07.826388 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 10 00:48:07.826406 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:48:07.826423 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 00:48:07.826482 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 00:48:07.826533 systemd[1]: Reached target swap.target - Swaps.
Sep 10 00:48:07.826554 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 10 00:48:07.826570 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 10 00:48:07.826588 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 00:48:07.826604 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 10 00:48:07.826621 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:48:07.826637 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:48:07.826654 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:48:07.826676 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 10 00:48:07.826692 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 10 00:48:07.826711 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 10 00:48:07.826727 systemd[1]: Mounting media.mount - External Media Directory...
Sep 10 00:48:07.826745 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:48:07.826772 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 10 00:48:07.826802 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 10 00:48:07.826821 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 10 00:48:07.826843 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 10 00:48:07.826861 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:48:07.826879 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 00:48:07.826896 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 10 00:48:07.826913 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:48:07.826930 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 00:48:07.826947 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:48:07.826963 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 10 00:48:07.826980 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:48:07.827002 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 00:48:07.827027 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 10 00:48:07.827067 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep 10 00:48:07.827086 kernel: fuse: init (API version 7.39)
Sep 10 00:48:07.827103 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 00:48:07.827120 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 00:48:07.827138 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 00:48:07.827155 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 10 00:48:07.827222 systemd-journald[1158]: Collecting audit messages is disabled.
Sep 10 00:48:07.827273 kernel: loop: module loaded
Sep 10 00:48:07.827291 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 00:48:07.827317 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:48:07.827342 systemd-journald[1158]: Journal started
Sep 10 00:48:07.827387 systemd-journald[1158]: Runtime Journal (/run/log/journal/bef47828f078418ab1c6f9f84cbe0c91) is 6.0M, max 48.3M, 42.2M free.
Sep 10 00:48:07.839322 kernel: ACPI: bus type drm_connector registered
Sep 10 00:48:07.842542 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 00:48:07.847296 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 10 00:48:07.849222 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 10 00:48:07.850530 systemd[1]: Mounted media.mount - External Media Directory.
Sep 10 00:48:07.851739 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 10 00:48:07.853037 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 10 00:48:07.854307 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 10 00:48:07.855762 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 10 00:48:07.857328 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:48:07.860133 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 10 00:48:07.860379 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 10 00:48:07.861875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:48:07.862096 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:48:07.863574 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 00:48:07.863789 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 00:48:07.865357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:48:07.865585 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:48:07.867285 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 10 00:48:07.867501 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 10 00:48:07.868907 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:48:07.869139 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:48:07.870636 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:48:07.872340 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 00:48:07.874027 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 10 00:48:07.889529 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 00:48:07.901338 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 10 00:48:07.904347 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 10 00:48:07.905683 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 10 00:48:07.908393 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 10 00:48:07.914486 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 10 00:48:07.916335 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 00:48:07.920275 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 10 00:48:07.921741 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 00:48:07.925513 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:48:07.934028 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 00:48:07.938906 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 10 00:48:07.941596 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 10 00:48:07.950444 systemd-journald[1158]: Time spent on flushing to /var/log/journal/bef47828f078418ab1c6f9f84cbe0c91 is 30.296ms for 984 entries.
Sep 10 00:48:07.950444 systemd-journald[1158]: System Journal (/var/log/journal/bef47828f078418ab1c6f9f84cbe0c91) is 8.0M, max 195.6M, 187.6M free.
Sep 10 00:48:08.097819 systemd-journald[1158]: Received client request to flush runtime journal.
Sep 10 00:48:07.944331 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 10 00:48:07.953407 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 10 00:48:07.959343 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:48:08.102431 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 10 00:48:08.104799 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 10 00:48:08.113692 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:48:08.121141 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 10 00:48:08.127841 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Sep 10 00:48:08.127865 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Sep 10 00:48:08.134662 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 00:48:08.142460 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 10 00:48:08.172878 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 10 00:48:08.206564 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 00:48:08.226934 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Sep 10 00:48:08.226956 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
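The journald flush statistics make for a quick sanity check: 30.296 ms for 984 entries is roughly 31 µs per entry. Arithmetic on the logged figures only:

```python
# Figures from "Time spent on flushing ... is 30.296ms for 984 entries."
flush_ms, entries = 30.296, 984
per_entry_us = flush_ms * 1000 / entries
print(f"{per_entry_us:.1f} us per entry")  # ~30.8 us
```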
Sep 10 00:48:08.234195 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:48:09.067753 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 10 00:48:09.081619 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:48:09.111490 systemd-udevd[1238]: Using default interface naming scheme 'v255'.
Sep 10 00:48:09.129301 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:48:09.138421 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 00:48:09.154575 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 10 00:48:09.223609 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 10 00:48:09.250276 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1252)
Sep 10 00:48:09.257399 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Sep 10 00:48:09.288393 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 10 00:48:09.298188 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 00:48:09.302279 kernel: ACPI: button: Power Button [PWRF]
Sep 10 00:48:09.314814 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 10 00:48:09.313661 systemd-networkd[1243]: lo: Link UP
Sep 10 00:48:09.313666 systemd-networkd[1243]: lo: Gained carrier
Sep 10 00:48:09.317977 systemd-networkd[1243]: Enumeration completed
Sep 10 00:48:09.318226 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 00:48:09.319196 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:48:09.319205 systemd-networkd[1243]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:48:09.320996 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 10 00:48:09.323168 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 10 00:48:09.323366 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 10 00:48:09.323551 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 10 00:48:09.324367 systemd-networkd[1243]: eth0: Link UP
Sep 10 00:48:09.324379 systemd-networkd[1243]: eth0: Gained carrier
Sep 10 00:48:09.324396 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:48:09.338832 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 10 00:48:09.345435 systemd-networkd[1243]: eth0: DHCPv4 address 10.0.0.156/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:48:09.365914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:48:09.384045 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:48:09.385571 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:48:09.455267 kernel: mousedev: PS/2 mouse device common for all mice
Sep 10 00:48:09.465688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
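Lines like the DHCPv4 acquisition above have a stable enough shape to extract from the journal. A small sketch that pulls the interface, address, prefix length, and gateway out of such a line with a regular expression (the pattern is ours, matched against this log's format only):

```python
import re

LINE = ("systemd-networkd[1243]: eth0: DHCPv4 address 10.0.0.156/16, "
        "gateway 10.0.0.1 acquired from 10.0.0.1")

m = re.search(
    r"(?P<ifname>\S+): DHCPv4 address (?P<addr>[\d.]+)/(?P<prefix>\d+), "
    r"gateway (?P<gw>[\d.]+) acquired from (?P<server>[\d.]+)",
    LINE,
)
if m:
    print(m.group("ifname"), m.group("addr"), m.group("prefix"), m.group("gw"))
    # -> eth0 10.0.0.156 16 10.0.0.1
```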
Sep 10 00:48:09.521541 kernel: kvm_amd: TSC scaling supported
Sep 10 00:48:09.521599 kernel: kvm_amd: Nested Virtualization enabled
Sep 10 00:48:09.521613 kernel: kvm_amd: Nested Paging enabled
Sep 10 00:48:09.522806 kernel: kvm_amd: LBR virtualization supported
Sep 10 00:48:09.522842 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 10 00:48:09.523555 kernel: kvm_amd: Virtual GIF supported
Sep 10 00:48:09.545340 kernel: EDAC MC: Ver: 3.0.0
Sep 10 00:48:09.556324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:48:09.580533 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 10 00:48:09.593403 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 10 00:48:09.607637 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 10 00:48:09.644663 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 10 00:48:09.646462 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:48:09.657373 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 10 00:48:09.663671 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 10 00:48:09.700545 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 10 00:48:09.702129 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 00:48:09.703502 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 10 00:48:09.703538 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 00:48:09.704702 systemd[1]: Reached target machines.target - Containers.
Sep 10 00:48:09.707368 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 10 00:48:09.724380 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 10 00:48:09.727008 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 10 00:48:09.728309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:48:09.729580 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 10 00:48:09.732624 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 10 00:48:09.737102 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 10 00:48:09.739739 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 10 00:48:09.748978 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 10 00:48:09.754272 kernel: loop0: detected capacity change from 0 to 142488
Sep 10 00:48:09.764711 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 10 00:48:09.765706 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 10 00:48:09.780281 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 10 00:48:09.814264 kernel: loop1: detected capacity change from 0 to 140768
Sep 10 00:48:09.855610 kernel: loop2: detected capacity change from 0 to 221472
Sep 10 00:48:09.888440 kernel: loop3: detected capacity change from 0 to 142488
Sep 10 00:48:09.902272 kernel: loop4: detected capacity change from 0 to 140768
Sep 10 00:48:09.913265 kernel: loop5: detected capacity change from 0 to 221472
Sep 10 00:48:09.918066 (sd-merge)[1311]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 10 00:48:09.918948 (sd-merge)[1311]: Merged extensions into '/usr'.
Sep 10 00:48:09.939092 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 10 00:48:09.939106 systemd[1]: Reloading...
Sep 10 00:48:10.049279 zram_generator::config[1339]: No configuration found.
Sep 10 00:48:10.135864 ldconfig[1295]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 10 00:48:10.207230 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:48:10.277171 systemd[1]: Reloading finished in 337 ms.
Sep 10 00:48:10.298580 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 10 00:48:10.300217 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 10 00:48:10.312399 systemd[1]: Starting ensure-sysext.service...
Sep 10 00:48:10.314488 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 00:48:10.320227 systemd[1]: Reloading requested from client PID 1383 ('systemctl') (unit ensure-sysext.service)...
Sep 10 00:48:10.320351 systemd[1]: Reloading...
Sep 10 00:48:10.435038 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 10 00:48:10.435861 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 10 00:48:10.436977 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 10 00:48:10.437463 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Sep 10 00:48:10.437900 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Sep 10 00:48:10.443079 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 00:48:10.443098 systemd-tmpfiles[1384]: Skipping /boot
Sep 10 00:48:10.464298 zram_generator::config[1412]: No configuration found.
Sep 10 00:48:10.513279 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 00:48:10.513490 systemd-tmpfiles[1384]: Skipping /boot
Sep 10 00:48:10.638001 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:48:10.709177 systemd[1]: Reloading finished in 388 ms.
Sep 10 00:48:10.730823 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:48:10.745436 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 10 00:48:10.747881 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
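The (sd-merge) lines record systemd-sysext combining extension images into the /usr tree; conceptually it is a layered union in which extension content shadows the base. A toy model of that layering with dictionaries (purely conceptual: the real mechanism is an overlay mount of the extension images, and the layer contents below are invented for illustration):

```python
# Toy model of sysext layering: later layers (extensions) shadow the
# base /usr, which is how a file shipped by the 'kubernetes' extension
# becomes visible under /usr after the merge.
base_usr = {"/usr/bin/bash": "base", "/usr/lib/os-release": "base"}
extensions = [
    {"/usr/bin/containerd": "containerd-flatcar"},
    {"/usr/bin/docker": "docker-flatcar"},
    {"/usr/bin/kubelet": "kubernetes"},
]

merged = dict(base_usr)
for layer in extensions:
    merged.update(layer)  # extension content wins over the base

for path, origin in sorted(merged.items()):
    print(f"{path} <- {origin}")
```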
Sep 10 00:48:10.750398 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 10 00:48:10.754207 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 00:48:10.759388 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 10 00:48:10.764980 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:48:10.765490 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:48:10.767943 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:48:10.772499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:48:10.777256 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:48:10.779423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:48:10.779551 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:48:10.780524 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:48:10.782479 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:48:10.787383 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:48:10.787662 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:48:10.790982 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 10 00:48:10.793058 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:48:10.794516 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:48:10.820230 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 10 00:48:10.826074 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:48:10.826348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:48:10.827313 augenrules[1492]: No rules
Sep 10 00:48:10.827987 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:48:10.830450 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 00:48:10.833912 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:48:10.845418 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:48:10.846576 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:48:10.848470 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 10 00:48:10.850326 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:48:10.854484 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 10 00:48:10.856318 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 10 00:48:10.858344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:48:10.858635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:48:10.860413 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 00:48:10.860652 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 00:48:10.862234 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:48:10.862467 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:48:10.864169 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:48:10.864448 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:48:10.869673 systemd[1]: Finished ensure-sysext.service.
Sep 10 00:48:10.960893 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 00:48:10.960999 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 00:48:10.970486 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 10 00:48:10.971670 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 00:48:10.972184 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 10 00:48:10.973955 systemd-resolved[1461]: Positive Trust Anchors:
Sep 10 00:48:10.973973 systemd-resolved[1461]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:48:10.974005 systemd-resolved[1461]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 00:48:10.978070 systemd-resolved[1461]: Defaulting to hostname 'linux'.
Sep 10 00:48:10.980336 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 00:48:10.981681 systemd[1]: Reached target network.target - Network.
Sep 10 00:48:10.982688 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:48:11.042456 systemd-networkd[1243]: eth0: Gained IPv6LL
Sep 10 00:48:11.045824 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 10 00:48:11.046332 systemd-timesyncd[1518]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 10 00:48:11.046379 systemd-timesyncd[1518]: Initial clock synchronization to Wed 2025-09-10 00:48:11.335730 UTC.
Sep 10 00:48:11.047896 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 10 00:48:11.049476 systemd[1]: Reached target network-online.target - Network is Online.
Sep 10 00:48:11.050613 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 00:48:11.051794 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
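As the "Contacted time server 10.0.0.1:123" line shows, systemd-timesyncd talks plain (S)NTP over UDP port 123. A minimal SNTP client sketch for comparison, not timesyncd's implementation (client mode 3, version 4; the 2,208,988,800-second constant converts the NTP epoch of 1900 to the Unix epoch):

```python
import socket
import struct

NTP_DELTA = 2_208_988_800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server: str = "10.0.0.1", timeout: float = 2.0) -> float:
    """Send one SNTP request and return the server's transmit time
    as a Unix timestamp."""
    packet = b"\x23" + 47 * b"\x00"  # LI=0, VN=4, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    secs, frac = struct.unpack("!II", data[40:48])  # transmit timestamp
    return secs - NTP_DELTA + frac / 2**32

# print(sntp_time())  # needs an NTP server reachable at the address above
```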
Sep 10 00:48:11.053071 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 10 00:48:11.054389 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 10 00:48:11.055698 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 10 00:48:11.055732 systemd[1]: Reached target paths.target - Path Units.
Sep 10 00:48:11.056733 systemd[1]: Reached target time-set.target - System Time Set.
Sep 10 00:48:11.058169 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 10 00:48:11.059462 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 10 00:48:11.060733 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 00:48:11.062586 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 10 00:48:11.065613 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 10 00:48:11.068051 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 10 00:48:11.073869 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 10 00:48:11.075109 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 00:48:11.076079 systemd[1]: Reached target basic.target - Basic System.
Sep 10 00:48:11.077354 systemd[1]: System is tainted: cgroupsv1
Sep 10 00:48:11.077429 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 10 00:48:11.077453 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 10 00:48:11.079887 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 10 00:48:11.082655 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 10 00:48:11.086092 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 10 00:48:11.091370 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 10 00:48:11.093885 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 10 00:48:11.095121 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 10 00:48:11.101360 jq[1528]: false
Sep 10 00:48:11.100565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:48:11.107586 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 10 00:48:11.112387 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 10 00:48:11.115978 dbus-daemon[1527]: [system] SELinux support is enabled
Sep 10 00:48:11.117401 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 10 00:48:11.120386 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 10 00:48:11.126002 extend-filesystems[1530]: Found loop3 Sep 10 00:48:11.126002 extend-filesystems[1530]: Found loop4 Sep 10 00:48:11.126002 extend-filesystems[1530]: Found loop5 Sep 10 00:48:11.126002 extend-filesystems[1530]: Found sr0 Sep 10 00:48:11.126002 extend-filesystems[1530]: Found vda Sep 10 00:48:11.126002 extend-filesystems[1530]: Found vda1 Sep 10 00:48:11.126002 extend-filesystems[1530]: Found vda2 Sep 10 00:48:11.126002 extend-filesystems[1530]: Found vda3 Sep 10 00:48:11.126002 extend-filesystems[1530]: Found usr Sep 10 00:48:11.126002 extend-filesystems[1530]: Found vda4 Sep 10 00:48:11.126002 extend-filesystems[1530]: Found vda6 Sep 10 00:48:11.126002 extend-filesystems[1530]: Found vda7 Sep 10 00:48:11.126002 extend-filesystems[1530]: Found vda9 Sep 10 00:48:11.126002 extend-filesystems[1530]: Checking size of /dev/vda9 Sep 10 00:48:11.145823 extend-filesystems[1530]: Resized partition /dev/vda9 Sep 10 00:48:11.127364 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 00:48:11.135336 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 00:48:11.140086 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 00:48:11.147816 extend-filesystems[1557]: resize2fs 1.47.1 (20-May-2024) Sep 10 00:48:11.152329 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 00:48:11.149096 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 00:48:11.156852 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 00:48:11.159327 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 00:48:11.168294 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1257) Sep 10 00:48:11.168642 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 00:48:11.169018 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 00:48:11.178341 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 00:48:11.178679 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 00:48:11.183481 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 00:48:11.184015 jq[1560]: true Sep 10 00:48:11.183785 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 00:48:11.193853 update_engine[1553]: I20250910 00:48:11.193758 1553 main.cc:92] Flatcar Update Engine starting Sep 10 00:48:11.196172 update_engine[1553]: I20250910 00:48:11.195088 1553 update_check_scheduler.cc:74] Next update check in 8m40s Sep 10 00:48:11.200420 (ntainerd)[1572]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 00:48:11.245061 jq[1571]: true Sep 10 00:48:11.251073 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 00:48:11.264549 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 00:48:11.288465 tar[1567]: linux-amd64/helm Sep 10 00:48:11.266216 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 00:48:11.266608 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
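
The "(ntainerd)" notice above means containerd.service references TORCX_IMAGEDIR and TORCX_UNPACKDIR, which are unset on this image; systemd substitutes empty strings and carries on. If the notice matters, a drop-in can define the variables explicitly; a sketch with assumed (empty) values:

    mkdir -p /etc/systemd/system/containerd.service.d
    cat > /etc/systemd/system/containerd.service.d/10-torcx-env.conf <<'EOF'
    [Service]
    Environment=TORCX_IMAGEDIR= TORCX_UNPACKDIR=
    EOF
    systemctl daemon-reload   # pick up the drop-in without restarting containerd
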
Sep 10 00:48:11.293183 systemd-logind[1548]: Watching system buttons on /dev/input/event1 (Power Button) Sep 10 00:48:11.296388 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 00:48:11.296388 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 00:48:11.296388 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 00:48:11.293217 systemd-logind[1548]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 10 00:48:11.317951 extend-filesystems[1530]: Resized filesystem in /dev/vda9 Sep 10 00:48:11.297168 systemd-logind[1548]: New seat seat0. Sep 10 00:48:11.299337 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 00:48:11.299701 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 00:48:11.306681 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 00:48:11.313057 systemd[1]: Started update-engine.service - Update Engine. Sep 10 00:48:11.326537 bash[1608]: Updated "/home/core/.ssh/authorized_keys" Sep 10 00:48:11.317638 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 00:48:11.318330 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 00:48:11.318464 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 00:48:11.325114 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 00:48:11.325228 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 00:48:11.327220 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 00:48:11.335608 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 00:48:11.344510 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 00:48:11.348861 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 10 00:48:11.419477 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 00:48:11.508404 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 00:48:11.519736 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 00:48:11.522632 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 00:48:11.530480 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 00:48:11.530782 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 00:48:11.536503 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 00:48:11.593822 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 00:48:11.668717 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 00:48:11.671952 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 10 00:48:11.673575 systemd[1]: Reached target getty.target - Login Prompts. 
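
extend-filesystems grew /dev/vda9 from 553472 to 1864699 4k blocks while the root filesystem stayed mounted; ext4 supports this on-line. The same grow done by hand would look like the sketch below (growpart comes from cloud-utils and is an assumption here):

    growpart /dev/vda 9     # widen partition 9 to the end of the disk
    resize2fs /dev/vda9     # grow the mounted ext4 filesystem into the new space
    df -h /                 # confirm the larger root filesystem
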
Sep 10 00:48:11.874322 containerd[1572]: time="2025-09-10T00:48:11.872814830Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 10 00:48:11.904610 containerd[1572]: time="2025-09-10T00:48:11.904576064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:48:11.906828 containerd[1572]: time="2025-09-10T00:48:11.906797470Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:48:11.906895 containerd[1572]: time="2025-09-10T00:48:11.906881487Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 10 00:48:11.906974 containerd[1572]: time="2025-09-10T00:48:11.906959904Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 10 00:48:11.907268 containerd[1572]: time="2025-09-10T00:48:11.907234950Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 10 00:48:11.907349 containerd[1572]: time="2025-09-10T00:48:11.907332513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 10 00:48:11.907497 containerd[1572]: time="2025-09-10T00:48:11.907478407Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:48:11.907548 containerd[1572]: time="2025-09-10T00:48:11.907536796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:48:11.907865 containerd[1572]: time="2025-09-10T00:48:11.907844363Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:48:11.907920 containerd[1572]: time="2025-09-10T00:48:11.907907892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 10 00:48:11.907970 containerd[1572]: time="2025-09-10T00:48:11.907957685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:48:11.908014 containerd[1572]: time="2025-09-10T00:48:11.908002379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 10 00:48:11.908200 containerd[1572]: time="2025-09-10T00:48:11.908183318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:48:11.908563 containerd[1572]: time="2025-09-10T00:48:11.908543294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:48:11.908807 containerd[1572]: time="2025-09-10T00:48:11.908788153Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:48:11.908873 containerd[1572]: time="2025-09-10T00:48:11.908859777Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 10 00:48:11.909041 containerd[1572]: time="2025-09-10T00:48:11.909023684Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 10 00:48:11.909177 containerd[1572]: time="2025-09-10T00:48:11.909160772Z" level=info msg="metadata content store policy set" policy=shared Sep 10 00:48:12.039796 tar[1567]: linux-amd64/LICENSE Sep 10 00:48:12.039893 tar[1567]: linux-amd64/README.md Sep 10 00:48:12.093500 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 00:48:12.330799 containerd[1572]: time="2025-09-10T00:48:12.330619242Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 10 00:48:12.330799 containerd[1572]: time="2025-09-10T00:48:12.330732823Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 10 00:48:12.330799 containerd[1572]: time="2025-09-10T00:48:12.330752476Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 10 00:48:12.330973 containerd[1572]: time="2025-09-10T00:48:12.330871747Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 10 00:48:12.330973 containerd[1572]: time="2025-09-10T00:48:12.330892842Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 10 00:48:12.331177 containerd[1572]: time="2025-09-10T00:48:12.331120328Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 10 00:48:12.331790 containerd[1572]: time="2025-09-10T00:48:12.331730606Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 10 00:48:12.332035 containerd[1572]: time="2025-09-10T00:48:12.331996265Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 10 00:48:12.332035 containerd[1572]: time="2025-09-10T00:48:12.332019312Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 10 00:48:12.332035 containerd[1572]: time="2025-09-10T00:48:12.332033118Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 10 00:48:12.332108 containerd[1572]: time="2025-09-10T00:48:12.332048224Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 10 00:48:12.332108 containerd[1572]: time="2025-09-10T00:48:12.332069235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 10 00:48:12.332108 containerd[1572]: time="2025-09-10T00:48:12.332087279Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 10 00:48:12.332108 containerd[1572]: time="2025-09-10T00:48:12.332106473Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Sep 10 00:48:12.332234 containerd[1572]: time="2025-09-10T00:48:12.332125741Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 10 00:48:12.332234 containerd[1572]: time="2025-09-10T00:48:12.332143005Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 10 00:48:12.332234 containerd[1572]: time="2025-09-10T00:48:12.332155837Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 10 00:48:12.332234 containerd[1572]: time="2025-09-10T00:48:12.332170256Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 10 00:48:12.332234 containerd[1572]: time="2025-09-10T00:48:12.332204857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 10 00:48:12.332234 containerd[1572]: time="2025-09-10T00:48:12.332219111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 10 00:48:12.332234 containerd[1572]: time="2025-09-10T00:48:12.332231392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 10 00:48:12.332234 containerd[1572]: time="2025-09-10T00:48:12.332244774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 10 00:48:12.332234 containerd[1572]: time="2025-09-10T00:48:12.332260242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 10 00:48:12.332503 containerd[1572]: time="2025-09-10T00:48:12.332289714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 10 00:48:12.332503 containerd[1572]: time="2025-09-10T00:48:12.332308577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 10 00:48:12.332503 containerd[1572]: time="2025-09-10T00:48:12.332326600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 10 00:48:12.332503 containerd[1572]: time="2025-09-10T00:48:12.332339731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 10 00:48:12.332503 containerd[1572]: time="2025-09-10T00:48:12.332357162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 10 00:48:12.332503 containerd[1572]: time="2025-09-10T00:48:12.332370336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 10 00:48:12.332503 containerd[1572]: time="2025-09-10T00:48:12.332385244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 10 00:48:12.332503 containerd[1572]: time="2025-09-10T00:48:12.332413781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 10 00:48:12.332503 containerd[1572]: time="2025-09-10T00:48:12.332432697Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 10 00:48:12.332503 containerd[1572]: time="2025-09-10T00:48:12.332464972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Sep 10 00:48:12.332503 containerd[1572]: time="2025-09-10T00:48:12.332476610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 10 00:48:12.332503 containerd[1572]: time="2025-09-10T00:48:12.332493334Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 10 00:48:12.332872 containerd[1572]: time="2025-09-10T00:48:12.332563782Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 10 00:48:12.332872 containerd[1572]: time="2025-09-10T00:48:12.332583257Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 10 00:48:12.332872 containerd[1572]: time="2025-09-10T00:48:12.332595507Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 10 00:48:12.332872 containerd[1572]: time="2025-09-10T00:48:12.332610238Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 10 00:48:12.332872 containerd[1572]: time="2025-09-10T00:48:12.332620432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 10 00:48:12.332872 containerd[1572]: time="2025-09-10T00:48:12.332638736Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 10 00:48:12.332872 containerd[1572]: time="2025-09-10T00:48:12.332663422Z" level=info msg="NRI interface is disabled by configuration." Sep 10 00:48:12.332872 containerd[1572]: time="2025-09-10T00:48:12.332685805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 10 00:48:12.333154 containerd[1572]: time="2025-09-10T00:48:12.333081707Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 10 00:48:12.333154 containerd[1572]: time="2025-09-10T00:48:12.333150443Z" level=info msg="Connect containerd service" Sep 10 00:48:12.333398 containerd[1572]: time="2025-09-10T00:48:12.333197875Z" level=info msg="using legacy CRI server" Sep 10 00:48:12.333398 containerd[1572]: time="2025-09-10T00:48:12.333206367Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 00:48:12.333398 containerd[1572]: time="2025-09-10T00:48:12.333348301Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 10 00:48:12.334084 containerd[1572]: time="2025-09-10T00:48:12.334040352Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 
00:48:12.334497 containerd[1572]: time="2025-09-10T00:48:12.334394647Z" level=info msg="Start subscribing containerd event" Sep 10 00:48:12.334497 containerd[1572]: time="2025-09-10T00:48:12.334470680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 00:48:12.334497 containerd[1572]: time="2025-09-10T00:48:12.334533051Z" level=info msg="Start recovering state" Sep 10 00:48:12.334815 containerd[1572]: time="2025-09-10T00:48:12.334569988Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 00:48:12.334815 containerd[1572]: time="2025-09-10T00:48:12.334657564Z" level=info msg="Start event monitor" Sep 10 00:48:12.334815 containerd[1572]: time="2025-09-10T00:48:12.334688906Z" level=info msg="Start snapshots syncer" Sep 10 00:48:12.334815 containerd[1572]: time="2025-09-10T00:48:12.334703855Z" level=info msg="Start cni network conf syncer for default" Sep 10 00:48:12.334815 containerd[1572]: time="2025-09-10T00:48:12.334713998Z" level=info msg="Start streaming server" Sep 10 00:48:12.335706 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 00:48:12.336645 containerd[1572]: time="2025-09-10T00:48:12.336218245Z" level=info msg="containerd successfully booted in 0.465295s" Sep 10 00:48:13.108577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:48:13.110540 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 00:48:13.111877 systemd[1]: Startup finished in 6.378s (kernel) + 6.067s (userspace) = 12.446s. Sep 10 00:48:13.115987 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:48:13.800950 kubelet[1661]: E0910 00:48:13.800861 1661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:48:13.806024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:48:13.806436 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:48:14.651841 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 00:48:14.663553 systemd[1]: Started sshd@0-10.0.0.156:22-10.0.0.1:45332.service - OpenSSH per-connection server daemon (10.0.0.1:45332). Sep 10 00:48:14.713351 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 45332 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:48:14.715667 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:48:14.727489 systemd-logind[1548]: New session 1 of user core. Sep 10 00:48:14.728914 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 00:48:14.737703 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 00:48:14.753400 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 00:48:14.763668 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 00:48:14.768968 (systemd)[1680]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:48:14.910094 systemd[1680]: Queued start job for default target default.target. 
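
kubelet exits above because nothing has yet written /var/lib/kubelet/config.yaml; normally kubeadm init/join generates it, and the unit keeps restarting until the file exists. A minimal hand-written stand-in is sketched below; the two values are taken from what kubelet logs later in this boot (cgroupfs driver, /etc/kubernetes/manifests), and everything else kubeadm would emit is omitted:

    mkdir -p /var/lib/kubelet
    cat > /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests
    EOF
    systemctl restart kubelet
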
Sep 10 00:48:14.910572 systemd[1680]: Created slice app.slice - User Application Slice. Sep 10 00:48:14.910591 systemd[1680]: Reached target paths.target - Paths. Sep 10 00:48:14.910605 systemd[1680]: Reached target timers.target - Timers. Sep 10 00:48:14.922411 systemd[1680]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 00:48:14.929717 systemd[1680]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 00:48:14.929832 systemd[1680]: Reached target sockets.target - Sockets. Sep 10 00:48:14.929852 systemd[1680]: Reached target basic.target - Basic System. Sep 10 00:48:14.929905 systemd[1680]: Reached target default.target - Main User Target. Sep 10 00:48:14.929952 systemd[1680]: Startup finished in 151ms. Sep 10 00:48:14.982945 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 00:48:14.984632 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 00:48:15.042572 systemd[1]: Started sshd@1-10.0.0.156:22-10.0.0.1:45338.service - OpenSSH per-connection server daemon (10.0.0.1:45338). Sep 10 00:48:15.077861 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 45338 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:48:15.080103 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:48:15.085217 systemd-logind[1548]: New session 2 of user core. Sep 10 00:48:15.095812 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 00:48:15.169644 sshd[1692]: pam_unix(sshd:session): session closed for user core Sep 10 00:48:15.175542 systemd[1]: Started sshd@2-10.0.0.156:22-10.0.0.1:45340.service - OpenSSH per-connection server daemon (10.0.0.1:45340). Sep 10 00:48:15.176040 systemd[1]: sshd@1-10.0.0.156:22-10.0.0.1:45338.service: Deactivated successfully. Sep 10 00:48:15.179298 systemd-logind[1548]: Session 2 logged out. Waiting for processes to exit. Sep 10 00:48:15.180648 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 00:48:15.182203 systemd-logind[1548]: Removed session 2. Sep 10 00:48:15.214626 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 45340 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:48:15.216712 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:48:15.222363 systemd-logind[1548]: New session 3 of user core. Sep 10 00:48:15.236651 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 00:48:15.291014 sshd[1697]: pam_unix(sshd:session): session closed for user core Sep 10 00:48:15.297571 systemd[1]: Started sshd@3-10.0.0.156:22-10.0.0.1:45354.service - OpenSSH per-connection server daemon (10.0.0.1:45354). Sep 10 00:48:15.298191 systemd[1]: sshd@2-10.0.0.156:22-10.0.0.1:45340.service: Deactivated successfully. Sep 10 00:48:15.301647 systemd-logind[1548]: Session 3 logged out. Waiting for processes to exit. Sep 10 00:48:15.302740 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 00:48:15.310775 systemd-logind[1548]: Removed session 3. Sep 10 00:48:15.346786 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 45354 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:48:15.348970 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:48:15.355145 systemd-logind[1548]: New session 4 of user core. Sep 10 00:48:15.365662 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 10 00:48:15.427406 sshd[1705]: pam_unix(sshd:session): session closed for user core Sep 10 00:48:15.441839 systemd[1]: Started sshd@4-10.0.0.156:22-10.0.0.1:45362.service - OpenSSH per-connection server daemon (10.0.0.1:45362). Sep 10 00:48:15.443669 systemd[1]: sshd@3-10.0.0.156:22-10.0.0.1:45354.service: Deactivated successfully. Sep 10 00:48:15.446416 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 00:48:15.447198 systemd-logind[1548]: Session 4 logged out. Waiting for processes to exit. Sep 10 00:48:15.449068 systemd-logind[1548]: Removed session 4. Sep 10 00:48:15.473229 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 45362 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:48:15.475333 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:48:15.480885 systemd-logind[1548]: New session 5 of user core. Sep 10 00:48:15.490602 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 10 00:48:15.555924 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 00:48:15.556593 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:48:15.579538 sudo[1720]: pam_unix(sudo:session): session closed for user root Sep 10 00:48:15.582293 sshd[1713]: pam_unix(sshd:session): session closed for user core Sep 10 00:48:15.591504 systemd[1]: Started sshd@5-10.0.0.156:22-10.0.0.1:45366.service - OpenSSH per-connection server daemon (10.0.0.1:45366). Sep 10 00:48:15.592003 systemd[1]: sshd@4-10.0.0.156:22-10.0.0.1:45362.service: Deactivated successfully. Sep 10 00:48:15.594393 systemd-logind[1548]: Session 5 logged out. Waiting for processes to exit. Sep 10 00:48:15.595914 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 00:48:15.596966 systemd-logind[1548]: Removed session 5. Sep 10 00:48:15.620571 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 45366 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:48:15.622606 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:48:15.628074 systemd-logind[1548]: New session 6 of user core. Sep 10 00:48:15.648754 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 10 00:48:15.709729 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 00:48:15.710210 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:48:15.716549 sudo[1730]: pam_unix(sudo:session): session closed for user root Sep 10 00:48:15.724278 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 10 00:48:15.724645 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:48:15.747575 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 10 00:48:15.749437 auditctl[1733]: No rules Sep 10 00:48:15.751128 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 00:48:15.751602 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 10 00:48:15.754037 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 10 00:48:15.789280 augenrules[1752]: No rules Sep 10 00:48:15.791783 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
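
The sequence above removes the two rules files, flushes the loaded ruleset ("No rules"), and lets audit-rules.service reload whatever remains. Reproduced by hand, assuming the standard auditd tooling:

    auditctl -D         # flush every loaded audit rule
    auditctl -l         # prints "No rules" once the kernel table is empty
    augenrules --load   # recompile /etc/audit/rules.d/*.rules and load the result
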
Sep 10 00:48:15.793381 sudo[1729]: pam_unix(sudo:session): session closed for user root Sep 10 00:48:15.795956 sshd[1722]: pam_unix(sshd:session): session closed for user core Sep 10 00:48:15.810729 systemd[1]: Started sshd@6-10.0.0.156:22-10.0.0.1:45380.service - OpenSSH per-connection server daemon (10.0.0.1:45380). Sep 10 00:48:15.811615 systemd[1]: sshd@5-10.0.0.156:22-10.0.0.1:45366.service: Deactivated successfully. Sep 10 00:48:15.814115 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 00:48:15.814928 systemd-logind[1548]: Session 6 logged out. Waiting for processes to exit. Sep 10 00:48:15.816247 systemd-logind[1548]: Removed session 6. Sep 10 00:48:15.840374 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 45380 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:48:15.842204 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:48:15.846725 systemd-logind[1548]: New session 7 of user core. Sep 10 00:48:15.864566 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 10 00:48:15.919875 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 00:48:15.920224 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:48:16.636470 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 10 00:48:16.637196 (dockerd)[1783]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 00:48:17.232494 dockerd[1783]: time="2025-09-10T00:48:17.232400401Z" level=info msg="Starting up" Sep 10 00:48:18.986727 dockerd[1783]: time="2025-09-10T00:48:18.986626503Z" level=info msg="Loading containers: start." Sep 10 00:48:19.163286 kernel: Initializing XFRM netlink socket Sep 10 00:48:19.250319 systemd-networkd[1243]: docker0: Link UP Sep 10 00:48:19.270875 dockerd[1783]: time="2025-09-10T00:48:19.270826183Z" level=info msg="Loading containers: done." Sep 10 00:48:19.287561 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck681403173-merged.mount: Deactivated successfully. Sep 10 00:48:19.346417 dockerd[1783]: time="2025-09-10T00:48:19.346360630Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 00:48:19.346573 dockerd[1783]: time="2025-09-10T00:48:19.346549852Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 10 00:48:19.346735 dockerd[1783]: time="2025-09-10T00:48:19.346713416Z" level=info msg="Daemon has completed initialization" Sep 10 00:48:19.399080 dockerd[1783]: time="2025-09-10T00:48:19.398959430Z" level=info msg="API listen on /run/docker.sock" Sep 10 00:48:19.399291 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 00:48:20.239956 containerd[1572]: time="2025-09-10T00:48:20.239910523Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 10 00:48:21.646411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount848480323.mount: Deactivated successfully. 
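
The PullImage line is containerd fetching the first control-plane image into its k8s.io namespace. The same pull can be reproduced with the ctr CLI that ships with containerd:

    ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.31.12
    ctr -n k8s.io images ls | grep kube-apiserver   # confirm the image and its digest
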
Sep 10 00:48:22.592706 containerd[1572]: time="2025-09-10T00:48:22.592612617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:22.593413 containerd[1572]: time="2025-09-10T00:48:22.593349202Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 10 00:48:22.595095 containerd[1572]: time="2025-09-10T00:48:22.595067225Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:22.598025 containerd[1572]: time="2025-09-10T00:48:22.597978083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:22.599026 containerd[1572]: time="2025-09-10T00:48:22.598981419Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 2.359026426s" Sep 10 00:48:22.599026 containerd[1572]: time="2025-09-10T00:48:22.599025699Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 10 00:48:22.599655 containerd[1572]: time="2025-09-10T00:48:22.599619947Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 10 00:48:24.056442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 00:48:24.063405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:48:24.327101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:48:24.332474 (kubelet)[2001]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:48:24.712337 kubelet[2001]: E0910 00:48:24.712158 2001 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:48:24.719179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:48:24.719576 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
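
"Scheduled restart job, restart counter is at 1" is the unit's Restart= policy at work: systemd re-launches kubelet after each failure to load the missing config file. The effective policy can be read back, or overridden with a drop-in (the values below are illustrative, not Flatcar's shipped defaults):

    systemctl show kubelet -p Restart -p RestartUSec
    mkdir -p /etc/systemd/system/kubelet.service.d
    cat > /etc/systemd/system/kubelet.service.d/20-restart.conf <<'EOF'
    [Service]
    Restart=on-failure
    RestartSec=10
    EOF
    systemctl daemon-reload
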
Sep 10 00:48:24.845020 containerd[1572]: time="2025-09-10T00:48:24.844933315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:24.845883 containerd[1572]: time="2025-09-10T00:48:24.845840822Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 10 00:48:24.847390 containerd[1572]: time="2025-09-10T00:48:24.847338175Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:24.851204 containerd[1572]: time="2025-09-10T00:48:24.851142497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:24.852244 containerd[1572]: time="2025-09-10T00:48:24.852189341Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 2.25252471s" Sep 10 00:48:24.852244 containerd[1572]: time="2025-09-10T00:48:24.852241929Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 10 00:48:24.852814 containerd[1572]: time="2025-09-10T00:48:24.852786120Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 10 00:48:27.050061 containerd[1572]: time="2025-09-10T00:48:27.049954750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:27.051280 containerd[1572]: time="2025-09-10T00:48:27.050784351Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 10 00:48:27.052230 containerd[1572]: time="2025-09-10T00:48:27.052179695Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:27.055098 containerd[1572]: time="2025-09-10T00:48:27.055064849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:27.056259 containerd[1572]: time="2025-09-10T00:48:27.056199792Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 2.203368508s" Sep 10 00:48:27.056317 containerd[1572]: time="2025-09-10T00:48:27.056273669Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 10 00:48:27.059725 
containerd[1572]: time="2025-09-10T00:48:27.059634452Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 10 00:48:29.362441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1733780198.mount: Deactivated successfully. Sep 10 00:48:31.124416 containerd[1572]: time="2025-09-10T00:48:31.124340283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:31.125299 containerd[1572]: time="2025-09-10T00:48:31.125237190Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 10 00:48:31.126522 containerd[1572]: time="2025-09-10T00:48:31.126490099Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:31.129051 containerd[1572]: time="2025-09-10T00:48:31.128969241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:31.129519 containerd[1572]: time="2025-09-10T00:48:31.129478114Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 4.069809206s" Sep 10 00:48:31.129519 containerd[1572]: time="2025-09-10T00:48:31.129511482Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 10 00:48:31.130071 containerd[1572]: time="2025-09-10T00:48:31.130032604Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 10 00:48:31.717414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1892131427.mount: Deactivated successfully. 
Sep 10 00:48:33.437656 containerd[1572]: time="2025-09-10T00:48:33.437566862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:33.438507 containerd[1572]: time="2025-09-10T00:48:33.438445949Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 10 00:48:33.440297 containerd[1572]: time="2025-09-10T00:48:33.440262722Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:33.443627 containerd[1572]: time="2025-09-10T00:48:33.443558033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:33.444603 containerd[1572]: time="2025-09-10T00:48:33.444560832Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.31449773s" Sep 10 00:48:33.444603 containerd[1572]: time="2025-09-10T00:48:33.444594218Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 10 00:48:33.445136 containerd[1572]: time="2025-09-10T00:48:33.445112320Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 00:48:34.312697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3907206977.mount: Deactivated successfully. 
Sep 10 00:48:34.321109 containerd[1572]: time="2025-09-10T00:48:34.321067647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:34.321979 containerd[1572]: time="2025-09-10T00:48:34.321937581Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 10 00:48:34.323059 containerd[1572]: time="2025-09-10T00:48:34.323027928Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:34.325281 containerd[1572]: time="2025-09-10T00:48:34.325253483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:34.325971 containerd[1572]: time="2025-09-10T00:48:34.325923149Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 880.777285ms" Sep 10 00:48:34.325971 containerd[1572]: time="2025-09-10T00:48:34.325958975Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 10 00:48:34.326546 containerd[1572]: time="2025-09-10T00:48:34.326517800Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 10 00:48:34.969729 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 00:48:34.980465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:48:35.217627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:48:35.222303 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:48:35.415205 kubelet[2090]: E0910 00:48:35.415033 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:48:35.419516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:48:35.419847 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:48:35.772479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1348999089.mount: Deactivated successfully. 
Sep 10 00:48:38.800378 containerd[1572]: time="2025-09-10T00:48:38.800303677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:38.801199 containerd[1572]: time="2025-09-10T00:48:38.801168758Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 10 00:48:38.802639 containerd[1572]: time="2025-09-10T00:48:38.802595844Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:38.806313 containerd[1572]: time="2025-09-10T00:48:38.806279530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:38.807435 containerd[1572]: time="2025-09-10T00:48:38.807388864Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.480841259s" Sep 10 00:48:38.807435 containerd[1572]: time="2025-09-10T00:48:38.807425172Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 10 00:48:41.392228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:48:41.402525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:48:41.430339 systemd[1]: Reloading requested from client PID 2184 ('systemctl') (unit session-7.scope)... Sep 10 00:48:41.430357 systemd[1]: Reloading... Sep 10 00:48:41.520123 zram_generator::config[2226]: No configuration found. Sep 10 00:48:42.014706 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:48:42.093327 systemd[1]: Reloading finished in 662 ms. Sep 10 00:48:42.137096 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 10 00:48:42.137230 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 10 00:48:42.137678 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:48:42.140797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:48:42.737336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:48:42.743494 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:48:43.037506 kubelet[2284]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:48:43.037506 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
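
The reload warning about /var/run/docker.sock is systemd rewriting the legacy path on the fly; silencing it for good means pointing the socket at /run directly. A drop-in sketch (the empty ListenStream= clears the inherited value before replacing it):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat > /etc/systemd/system/docker.socket.d/10-runpath.conf <<'EOF'
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload
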
Sep 10 00:48:43.037506 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:48:43.037989 kubelet[2284]: I0910 00:48:43.037507 2284 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:48:43.583029 kubelet[2284]: I0910 00:48:43.582985 2284 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 00:48:43.583029 kubelet[2284]: I0910 00:48:43.583014 2284 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:48:43.583289 kubelet[2284]: I0910 00:48:43.583269 2284 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 00:48:43.602309 kubelet[2284]: E0910 00:48:43.602268 2284 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.156:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:48:43.603032 kubelet[2284]: I0910 00:48:43.603006 2284 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:48:43.611555 kubelet[2284]: E0910 00:48:43.611515 2284 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:48:43.611555 kubelet[2284]: I0910 00:48:43.611547 2284 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:48:43.618504 kubelet[2284]: I0910 00:48:43.618471 2284 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 00:48:43.619394 kubelet[2284]: I0910 00:48:43.619370 2284 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 00:48:43.619605 kubelet[2284]: I0910 00:48:43.619567 2284 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:48:43.619790 kubelet[2284]: I0910 00:48:43.619598 2284 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 10 00:48:43.619909 kubelet[2284]: I0910 00:48:43.619809 2284 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:48:43.619909 kubelet[2284]: I0910 00:48:43.619819 2284 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 00:48:43.619989 kubelet[2284]: I0910 00:48:43.619974 2284 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:48:43.622074 kubelet[2284]: I0910 00:48:43.622052 2284 kubelet.go:408] "Attempting to sync node with API server" Sep 10 00:48:43.622113 kubelet[2284]: I0910 00:48:43.622087 2284 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:48:43.622157 kubelet[2284]: I0910 00:48:43.622143 2284 kubelet.go:314] "Adding apiserver pod source" Sep 10 00:48:43.622218 kubelet[2284]: I0910 00:48:43.622194 2284 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:48:43.625616 kubelet[2284]: I0910 00:48:43.625591 2284 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 10 00:48:43.626474 kubelet[2284]: I0910 00:48:43.626087 2284 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:48:43.626474 kubelet[2284]: W0910 00:48:43.626181 2284 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
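
"Adding static pod path" means kubelet now watches /etc/kubernetes/manifests and will run any pod manifest placed there without involving the API server; this is how kubeadm bootstraps the control plane itself. A minimal sketch (the pod name is hypothetical; the pause image is one pulled earlier in this log):

    cat > /etc/kubernetes/manifests/demo.yaml <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo
      namespace: kube-system
    spec:
      containers:
      - name: demo
        image: registry.k8s.io/pause:3.10
    EOF
    # kubelet picks the file up on its own; no systemctl or kubectl step needed.
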
Sep 10 00:48:43.627421 kubelet[2284]: W0910 00:48:43.626728 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused Sep 10 00:48:43.627421 kubelet[2284]: W0910 00:48:43.626789 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.156:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused Sep 10 00:48:43.627421 kubelet[2284]: E0910 00:48:43.626803 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:48:43.627421 kubelet[2284]: E0910 00:48:43.626841 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.156:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:48:43.629235 kubelet[2284]: I0910 00:48:43.629198 2284 server.go:1274] "Started kubelet" Sep 10 00:48:43.630594 kubelet[2284]: I0910 00:48:43.630517 2284 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:48:43.631098 kubelet[2284]: I0910 00:48:43.631062 2284 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:48:43.631168 kubelet[2284]: I0910 00:48:43.631141 2284 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:48:43.631322 kubelet[2284]: I0910 00:48:43.631277 2284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:48:43.633168 kubelet[2284]: I0910 00:48:43.632441 2284 server.go:449] "Adding debug handlers to kubelet server" Sep 10 00:48:43.634885 kubelet[2284]: I0910 00:48:43.634863 2284 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:48:43.635175 kubelet[2284]: E0910 00:48:43.635142 2284 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:48:43.635231 kubelet[2284]: I0910 00:48:43.635214 2284 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 00:48:43.635571 kubelet[2284]: I0910 00:48:43.635544 2284 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 00:48:43.635733 kubelet[2284]: I0910 00:48:43.635698 2284 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:48:43.636664 kubelet[2284]: W0910 00:48:43.636604 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused Sep 10 00:48:43.637166 kubelet[2284]: E0910 00:48:43.637141 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:48:43.638808 kubelet[2284]: E0910 00:48:43.638389 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="200ms" Sep 10 00:48:43.638808 kubelet[2284]: E0910 00:48:43.636368 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.156:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.156:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c5667ffbd3b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:48:43.629171639 +0000 UTC m=+0.881754589,LastTimestamp:2025-09-10 00:48:43.629171639 +0000 UTC m=+0.881754589,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:48:43.638808 kubelet[2284]: I0910 00:48:43.638537 2284 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:48:43.638808 kubelet[2284]: I0910 00:48:43.638622 2284 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:48:43.640615 kubelet[2284]: I0910 00:48:43.640596 2284 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:48:43.641489 kubelet[2284]: E0910 00:48:43.641444 2284 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:48:43.654876 kubelet[2284]: I0910 00:48:43.654689 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 00:48:43.656099 kubelet[2284]: I0910 00:48:43.656068 2284 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 00:48:43.656146 kubelet[2284]: I0910 00:48:43.656114 2284 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 00:48:43.656190 kubelet[2284]: I0910 00:48:43.656157 2284 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 00:48:43.656358 kubelet[2284]: E0910 00:48:43.656226 2284 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:48:43.659178 kubelet[2284]: W0910 00:48:43.658891 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused Sep 10 00:48:43.659178 kubelet[2284]: E0910 00:48:43.658927 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:48:43.666218 kubelet[2284]: I0910 00:48:43.666193 2284 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 00:48:43.666218 kubelet[2284]: I0910 00:48:43.666212 2284 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 00:48:43.666362 kubelet[2284]: I0910 00:48:43.666260 2284 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:48:43.735904 kubelet[2284]: E0910 00:48:43.735860 2284 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:48:43.757266 kubelet[2284]: E0910 00:48:43.757216 2284 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:48:43.836664 kubelet[2284]: E0910 00:48:43.836520 2284 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:48:43.839085 kubelet[2284]: E0910 00:48:43.839029 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="400ms" Sep 10 00:48:43.922519 kubelet[2284]: I0910 00:48:43.922445 2284 policy_none.go:49] "None policy: Start" Sep 10 00:48:43.923427 kubelet[2284]: I0910 00:48:43.923404 2284 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 00:48:43.923478 kubelet[2284]: I0910 00:48:43.923435 2284 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:48:43.931210 kubelet[2284]: I0910 00:48:43.931148 2284 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:48:43.931555 kubelet[2284]: I0910 00:48:43.931519 2284 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:48:43.931625 kubelet[2284]: I0910 00:48:43.931573 2284 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:48:43.932893 kubelet[2284]: I0910 00:48:43.932861 2284 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:48:43.934196 kubelet[2284]: E0910 00:48:43.934172 2284 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node 
\"localhost\" not found" Sep 10 00:48:44.032904 kubelet[2284]: I0910 00:48:44.032856 2284 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:48:44.033323 kubelet[2284]: E0910 00:48:44.033288 2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.156:6443/api/v1/nodes\": dial tcp 10.0.0.156:6443: connect: connection refused" node="localhost" Sep 10 00:48:44.037922 kubelet[2284]: I0910 00:48:44.037806 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3b1b210a7abb5c3a52c6d5071f2733b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a3b1b210a7abb5c3a52c6d5071f2733b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:48:44.037922 kubelet[2284]: I0910 00:48:44.037846 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:48:44.037922 kubelet[2284]: I0910 00:48:44.037866 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:48:44.037922 kubelet[2284]: I0910 00:48:44.037882 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:48:44.037922 kubelet[2284]: I0910 00:48:44.037899 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:48:44.038432 kubelet[2284]: I0910 00:48:44.037973 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3b1b210a7abb5c3a52c6d5071f2733b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a3b1b210a7abb5c3a52c6d5071f2733b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:48:44.038432 kubelet[2284]: I0910 00:48:44.038014 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3b1b210a7abb5c3a52c6d5071f2733b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a3b1b210a7abb5c3a52c6d5071f2733b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:48:44.038432 kubelet[2284]: I0910 00:48:44.038034 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 10 00:48:44.038432 kubelet[2284]: I0910 00:48:44.038051 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:48:44.235342 kubelet[2284]: I0910 00:48:44.235207 2284 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:48:44.235667 kubelet[2284]: E0910 00:48:44.235632 2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.156:6443/api/v1/nodes\": dial tcp 10.0.0.156:6443: connect: connection refused" node="localhost" Sep 10 00:48:44.240027 kubelet[2284]: E0910 00:48:44.239996 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="800ms" Sep 10 00:48:44.264337 kubelet[2284]: E0910 00:48:44.264317 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:44.264948 containerd[1572]: time="2025-09-10T00:48:44.264908260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a3b1b210a7abb5c3a52c6d5071f2733b,Namespace:kube-system,Attempt:0,}" Sep 10 00:48:44.266067 kubelet[2284]: E0910 00:48:44.266034 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:44.266528 containerd[1572]: time="2025-09-10T00:48:44.266491880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 10 00:48:44.266814 kubelet[2284]: E0910 00:48:44.266789 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:44.267102 containerd[1572]: time="2025-09-10T00:48:44.267072743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 10 00:48:44.641769 kubelet[2284]: I0910 00:48:44.639704 2284 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:48:44.641910 kubelet[2284]: E0910 00:48:44.641760 2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.156:6443/api/v1/nodes\": dial tcp 10.0.0.156:6443: connect: connection refused" node="localhost" Sep 10 00:48:44.763237 kubelet[2284]: W0910 00:48:44.763129 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused Sep 10 00:48:44.763237 kubelet[2284]: E0910 00:48:44.763229 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:48:44.875703 kubelet[2284]: W0910 00:48:44.875617 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused Sep 10 00:48:44.875795 kubelet[2284]: E0910 00:48:44.875711 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:48:44.966268 kubelet[2284]: W0910 00:48:44.966102 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused Sep 10 00:48:44.966268 kubelet[2284]: E0910 00:48:44.966186 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:48:45.006898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1464514602.mount: Deactivated successfully. Sep 10 00:48:45.014080 containerd[1572]: time="2025-09-10T00:48:45.014035827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:48:45.015291 containerd[1572]: time="2025-09-10T00:48:45.015224042Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:48:45.016155 containerd[1572]: time="2025-09-10T00:48:45.016093565Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 10 00:48:45.017230 containerd[1572]: time="2025-09-10T00:48:45.017172417Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:48:45.018007 containerd[1572]: time="2025-09-10T00:48:45.017976823Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 10 00:48:45.018986 containerd[1572]: time="2025-09-10T00:48:45.018934562Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:48:45.022260 containerd[1572]: time="2025-09-10T00:48:45.020420917Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 10 00:48:45.023954 containerd[1572]: time="2025-09-10T00:48:45.023912873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:48:45.025892 containerd[1572]: time="2025-09-10T00:48:45.025832384Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 759.251121ms" Sep 10 00:48:45.027540 containerd[1572]: time="2025-09-10T00:48:45.027483724Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 760.354493ms" Sep 10 00:48:45.030386 containerd[1572]: time="2025-09-10T00:48:45.030339880Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 765.347019ms" Sep 10 00:48:45.041301 kubelet[2284]: E0910 00:48:45.041223 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="1.6s" Sep 10 00:48:45.075324 kubelet[2284]: W0910 00:48:45.075137 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.156:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.156:6443: connect: connection refused Sep 10 00:48:45.075324 kubelet[2284]: E0910 00:48:45.075225 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.156:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:48:45.192424 containerd[1572]: time="2025-09-10T00:48:45.192238110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:48:45.192572 containerd[1572]: time="2025-09-10T00:48:45.192451772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:48:45.192572 containerd[1572]: time="2025-09-10T00:48:45.192521782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:48:45.193083 containerd[1572]: time="2025-09-10T00:48:45.192708704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:48:45.193636 containerd[1572]: time="2025-09-10T00:48:45.193128286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:48:45.193636 containerd[1572]: time="2025-09-10T00:48:45.193183449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:48:45.193636 containerd[1572]: time="2025-09-10T00:48:45.193197886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:48:45.193636 containerd[1572]: time="2025-09-10T00:48:45.193317284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:48:45.193636 containerd[1572]: time="2025-09-10T00:48:45.192820723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:48:45.193636 containerd[1572]: time="2025-09-10T00:48:45.192901151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:48:45.193636 containerd[1572]: time="2025-09-10T00:48:45.192915808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:48:45.193636 containerd[1572]: time="2025-09-10T00:48:45.193039878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:48:45.260861 containerd[1572]: time="2025-09-10T00:48:45.260731036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"c28e61b08f0ee462361eae6629085102e60e30fdc143bc208104c67de21aac2a\"" Sep 10 00:48:45.263267 kubelet[2284]: E0910 00:48:45.263221 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:45.270450 containerd[1572]: time="2025-09-10T00:48:45.270396667Z" level=info msg="CreateContainer within sandbox \"c28e61b08f0ee462361eae6629085102e60e30fdc143bc208104c67de21aac2a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 00:48:45.272363 containerd[1572]: time="2025-09-10T00:48:45.272323126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a3b1b210a7abb5c3a52c6d5071f2733b,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfdb17832b8ddc61c97c6d1eba70671fda52135cd4444ee0757ece193d45e8e0\"" Sep 10 00:48:45.273120 containerd[1572]: time="2025-09-10T00:48:45.273097403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"56247037bb8eabddb8f2bf67dbc0bf4c9211977628d3f55630704ebd7376e85c\"" Sep 10 00:48:45.273330 kubelet[2284]: E0910 00:48:45.273299 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:45.274437 kubelet[2284]: E0910 00:48:45.274419 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:45.275904 containerd[1572]: time="2025-09-10T00:48:45.275870506Z" level=info 
msg="CreateContainer within sandbox \"dfdb17832b8ddc61c97c6d1eba70671fda52135cd4444ee0757ece193d45e8e0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 00:48:45.278386 containerd[1572]: time="2025-09-10T00:48:45.278344447Z" level=info msg="CreateContainer within sandbox \"56247037bb8eabddb8f2bf67dbc0bf4c9211977628d3f55630704ebd7376e85c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 00:48:45.295754 containerd[1572]: time="2025-09-10T00:48:45.295694489Z" level=info msg="CreateContainer within sandbox \"c28e61b08f0ee462361eae6629085102e60e30fdc143bc208104c67de21aac2a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"759e10156b73657f501de5c2da7fc019ebcc23fc2b7c371901306ade8b52f2d2\"" Sep 10 00:48:45.296602 containerd[1572]: time="2025-09-10T00:48:45.296565756Z" level=info msg="StartContainer for \"759e10156b73657f501de5c2da7fc019ebcc23fc2b7c371901306ade8b52f2d2\"" Sep 10 00:48:45.306560 containerd[1572]: time="2025-09-10T00:48:45.306500341Z" level=info msg="CreateContainer within sandbox \"56247037bb8eabddb8f2bf67dbc0bf4c9211977628d3f55630704ebd7376e85c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"070b07a670003c46ecd6f8a75ce38c0bb3e975c7ea87785d4eb6d481b78cee7e\"" Sep 10 00:48:45.306942 containerd[1572]: time="2025-09-10T00:48:45.306911099Z" level=info msg="StartContainer for \"070b07a670003c46ecd6f8a75ce38c0bb3e975c7ea87785d4eb6d481b78cee7e\"" Sep 10 00:48:45.309943 containerd[1572]: time="2025-09-10T00:48:45.309837629Z" level=info msg="CreateContainer within sandbox \"dfdb17832b8ddc61c97c6d1eba70671fda52135cd4444ee0757ece193d45e8e0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"12d80d02c909c0a6e0deb20045237d5ef9b7cdb077a55bf57f0d848557ecd290\"" Sep 10 00:48:45.310508 containerd[1572]: time="2025-09-10T00:48:45.310474100Z" level=info msg="StartContainer for \"12d80d02c909c0a6e0deb20045237d5ef9b7cdb077a55bf57f0d848557ecd290\"" Sep 10 00:48:45.381943 containerd[1572]: time="2025-09-10T00:48:45.381893534Z" level=info msg="StartContainer for \"759e10156b73657f501de5c2da7fc019ebcc23fc2b7c371901306ade8b52f2d2\" returns successfully" Sep 10 00:48:45.382094 containerd[1572]: time="2025-09-10T00:48:45.382071854Z" level=info msg="StartContainer for \"070b07a670003c46ecd6f8a75ce38c0bb3e975c7ea87785d4eb6d481b78cee7e\" returns successfully" Sep 10 00:48:45.405406 containerd[1572]: time="2025-09-10T00:48:45.405180408Z" level=info msg="StartContainer for \"12d80d02c909c0a6e0deb20045237d5ef9b7cdb077a55bf57f0d848557ecd290\" returns successfully" Sep 10 00:48:45.447291 kubelet[2284]: I0910 00:48:45.445832 2284 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:48:45.447291 kubelet[2284]: E0910 00:48:45.446399 2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.156:6443/api/v1/nodes\": dial tcp 10.0.0.156:6443: connect: connection refused" node="localhost" Sep 10 00:48:45.671038 kubelet[2284]: E0910 00:48:45.670897 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:45.675112 kubelet[2284]: E0910 00:48:45.674970 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:45.677396 kubelet[2284]: E0910 
00:48:45.677223 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:46.645224 kubelet[2284]: E0910 00:48:46.645134 2284 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 00:48:46.679781 kubelet[2284]: E0910 00:48:46.679720 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:46.679954 kubelet[2284]: E0910 00:48:46.679926 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:46.932428 kubelet[2284]: E0910 00:48:46.932276 2284 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 10 00:48:47.048477 kubelet[2284]: I0910 00:48:47.048445 2284 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:48:47.055722 kubelet[2284]: I0910 00:48:47.055697 2284 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 00:48:47.055838 kubelet[2284]: E0910 00:48:47.055727 2284 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 10 00:48:47.625891 kubelet[2284]: I0910 00:48:47.625815 2284 apiserver.go:52] "Watching apiserver" Sep 10 00:48:47.635899 kubelet[2284]: I0910 00:48:47.635860 2284 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 00:48:47.876564 kubelet[2284]: E0910 00:48:47.876409 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:48.461605 systemd[1]: Reloading requested from client PID 2562 ('systemctl') (unit session-7.scope)... Sep 10 00:48:48.461622 systemd[1]: Reloading... Sep 10 00:48:48.534276 zram_generator::config[2604]: No configuration found. Sep 10 00:48:48.655293 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:48:48.682862 kubelet[2284]: E0910 00:48:48.682817 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:48.739403 systemd[1]: Reloading finished in 277 ms. Sep 10 00:48:48.778603 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:48:48.778758 kubelet[2284]: I0910 00:48:48.778628 2284 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:48:48.805630 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:48:48.805998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:48:48.818501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:48:49.017713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
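The "Failed to ensure lease exists, will retry" entries above show the retry interval doubling (200ms, 400ms, 800ms, 1.6s) while the kube-apiserver static pod is still starting and every request to 10.0.0.156:6443 is refused. A minimal sketch of that doubling backoff, assuming a plain TCP probe in place of the real lease client and an arbitrary 7s cap (both are assumptions for illustration, not taken from the log):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// ensureLease stands in for the kubelet's lease update; like the log above,
// it fails with "connection refused" until something listens on the
// apiserver address.
func ensureLease(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	const apiserver = "10.0.0.156:6443" // address from the log
	interval := 200 * time.Millisecond  // first retry interval in the log
	maxInterval := 7 * time.Second      // assumed cap for this sketch

	for attempt := 1; attempt <= 5; attempt++ {
		err := ensureLease(apiserver)
		if err == nil {
			fmt.Println("lease ensured")
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, interval)
		time.Sleep(interval)
		if interval *= 2; interval > maxInterval {
			interval = maxInterval // 200ms -> 400ms -> 800ms -> 1.6s -> ...
		}
	}
}
```

Once the static kube-apiserver container started (StartContainer above), these retries stopped and the node registered successfully a few seconds later.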
Sep 10 00:48:49.022889 (kubelet)[2656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:48:49.068812 kubelet[2656]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:48:49.068812 kubelet[2656]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 10 00:48:49.068812 kubelet[2656]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:48:49.069369 kubelet[2656]: I0910 00:48:49.068851 2656 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:48:49.075452 kubelet[2656]: I0910 00:48:49.075420 2656 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 00:48:49.075452 kubelet[2656]: I0910 00:48:49.075445 2656 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:48:49.075764 kubelet[2656]: I0910 00:48:49.075737 2656 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 00:48:49.077036 kubelet[2656]: I0910 00:48:49.077006 2656 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 10 00:48:49.078788 kubelet[2656]: I0910 00:48:49.078732 2656 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:48:49.082549 kubelet[2656]: E0910 00:48:49.082511 2656 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:48:49.082549 kubelet[2656]: I0910 00:48:49.082548 2656 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:48:49.087427 kubelet[2656]: I0910 00:48:49.087388 2656 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 00:48:49.088134 kubelet[2656]: I0910 00:48:49.088107 2656 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 00:48:49.088372 kubelet[2656]: I0910 00:48:49.088330 2656 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:48:49.088582 kubelet[2656]: I0910 00:48:49.088371 2656 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 10 00:48:49.088671 kubelet[2656]: I0910 00:48:49.088598 2656 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:48:49.088671 kubelet[2656]: I0910 00:48:49.088612 2656 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 00:48:49.088671 kubelet[2656]: I0910 00:48:49.088648 2656 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:48:49.088825 kubelet[2656]: I0910 00:48:49.088804 2656 kubelet.go:408] "Attempting to sync node with API server" Sep 10 00:48:49.088849 kubelet[2656]: I0910 00:48:49.088830 2656 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:48:49.088893 kubelet[2656]: I0910 00:48:49.088874 2656 kubelet.go:314] "Adding apiserver pod source" Sep 10 00:48:49.088918 kubelet[2656]: I0910 00:48:49.088900 2656 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:48:49.092162 kubelet[2656]: I0910 00:48:49.090348 2656 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 10 00:48:49.092162 kubelet[2656]: I0910 00:48:49.090697 2656 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:48:49.092162 kubelet[2656]: I0910 00:48:49.091127 2656 server.go:1274] "Started kubelet" Sep 10 00:48:49.092162 kubelet[2656]: I0910 00:48:49.091774 2656 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 
00:48:49.092323 kubelet[2656]: I0910 00:48:49.092192 2656 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:48:49.096047 kubelet[2656]: I0910 00:48:49.092659 2656 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:48:49.096047 kubelet[2656]: I0910 00:48:49.093196 2656 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:48:49.097955 kubelet[2656]: I0910 00:48:49.097926 2656 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:48:49.104588 kubelet[2656]: I0910 00:48:49.102447 2656 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 00:48:49.104588 kubelet[2656]: I0910 00:48:49.102767 2656 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 00:48:49.104588 kubelet[2656]: I0910 00:48:49.102898 2656 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:48:49.104588 kubelet[2656]: I0910 00:48:49.103013 2656 server.go:449] "Adding debug handlers to kubelet server" Sep 10 00:48:49.104945 kubelet[2656]: I0910 00:48:49.104898 2656 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:48:49.105282 kubelet[2656]: I0910 00:48:49.104985 2656 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:48:49.106178 kubelet[2656]: E0910 00:48:49.106064 2656 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:48:49.110297 kubelet[2656]: I0910 00:48:49.109509 2656 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:48:49.114522 kubelet[2656]: I0910 00:48:49.114432 2656 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 00:48:49.116372 kubelet[2656]: I0910 00:48:49.116341 2656 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 00:48:49.116452 kubelet[2656]: I0910 00:48:49.116374 2656 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 00:48:49.116452 kubelet[2656]: I0910 00:48:49.116408 2656 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 00:48:49.116508 kubelet[2656]: E0910 00:48:49.116457 2656 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:48:49.158095 kubelet[2656]: I0910 00:48:49.158066 2656 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 00:48:49.158095 kubelet[2656]: I0910 00:48:49.158085 2656 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 00:48:49.158095 kubelet[2656]: I0910 00:48:49.158103 2656 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:48:49.158303 kubelet[2656]: I0910 00:48:49.158261 2656 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 00:48:49.158303 kubelet[2656]: I0910 00:48:49.158272 2656 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 00:48:49.158303 kubelet[2656]: I0910 00:48:49.158289 2656 policy_none.go:49] "None policy: Start" Sep 10 00:48:49.158878 kubelet[2656]: I0910 00:48:49.158844 2656 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 00:48:49.158878 kubelet[2656]: I0910 00:48:49.158875 2656 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:48:49.159055 kubelet[2656]: I0910 00:48:49.159041 2656 state_mem.go:75] "Updated machine memory state" Sep 10 00:48:49.160652 kubelet[2656]: I0910 00:48:49.160632 2656 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:48:49.160843 kubelet[2656]: I0910 00:48:49.160828 2656 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:48:49.160880 kubelet[2656]: I0910 00:48:49.160844 2656 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:48:49.161249 kubelet[2656]: I0910 00:48:49.161214 2656 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:48:49.224390 kubelet[2656]: E0910 00:48:49.224349 2656 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 00:48:49.268750 kubelet[2656]: I0910 00:48:49.268629 2656 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:48:49.274430 kubelet[2656]: I0910 00:48:49.274398 2656 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 10 00:48:49.274540 kubelet[2656]: I0910 00:48:49.274498 2656 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 00:48:49.404830 kubelet[2656]: I0910 00:48:49.404781 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:48:49.404830 kubelet[2656]: I0910 00:48:49.404820 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:48:49.404830 kubelet[2656]: I0910 00:48:49.404836 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3b1b210a7abb5c3a52c6d5071f2733b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a3b1b210a7abb5c3a52c6d5071f2733b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:48:49.405011 kubelet[2656]: I0910 00:48:49.404865 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3b1b210a7abb5c3a52c6d5071f2733b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a3b1b210a7abb5c3a52c6d5071f2733b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:48:49.405011 kubelet[2656]: I0910 00:48:49.404882 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:48:49.405011 kubelet[2656]: I0910 00:48:49.404900 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:48:49.405011 kubelet[2656]: I0910 00:48:49.404914 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3b1b210a7abb5c3a52c6d5071f2733b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a3b1b210a7abb5c3a52c6d5071f2733b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:48:49.405011 kubelet[2656]: I0910 00:48:49.404927 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:48:49.405161 kubelet[2656]: I0910 00:48:49.404940 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:48:49.524559 kubelet[2656]: E0910 00:48:49.524378 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:49.524559 kubelet[2656]: E0910 00:48:49.524451 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:49.524559 kubelet[2656]: E0910 00:48:49.524517 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:50.090120 kubelet[2656]: I0910 00:48:50.090076 2656 apiserver.go:52] "Watching apiserver" Sep 10 00:48:50.103577 kubelet[2656]: I0910 00:48:50.103524 2656 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 00:48:50.132141 kubelet[2656]: E0910 00:48:50.131989 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:50.135390 kubelet[2656]: E0910 00:48:50.132468 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:50.139769 kubelet[2656]: E0910 00:48:50.138784 2656 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 00:48:50.139769 kubelet[2656]: E0910 00:48:50.138975 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:50.152601 kubelet[2656]: I0910 00:48:50.152526 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.152503721 podStartE2EDuration="1.152503721s" podCreationTimestamp="2025-09-10 00:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:48:50.151939252 +0000 UTC m=+1.124829473" watchObservedRunningTime="2025-09-10 00:48:50.152503721 +0000 UTC m=+1.125393942" Sep 10 00:48:50.161675 kubelet[2656]: I0910 00:48:50.161611 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.161591947 podStartE2EDuration="1.161591947s" podCreationTimestamp="2025-09-10 00:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:48:50.161510069 +0000 UTC m=+1.134400320" watchObservedRunningTime="2025-09-10 00:48:50.161591947 +0000 UTC m=+1.134482168" Sep 10 00:48:50.168471 kubelet[2656]: I0910 00:48:50.168389 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.168368656 podStartE2EDuration="3.168368656s" podCreationTimestamp="2025-09-10 00:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:48:50.167766416 +0000 UTC m=+1.140656657" watchObservedRunningTime="2025-09-10 00:48:50.168368656 +0000 UTC m=+1.141258877" Sep 10 00:48:51.133767 kubelet[2656]: E0910 00:48:51.133705 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:51.134353 kubelet[2656]: E0910 00:48:51.133927 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:52.769552 kubelet[2656]: E0910 00:48:52.769517 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:54.508877 kubelet[2656]: I0910 00:48:54.508833 2656 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 00:48:54.509539 kubelet[2656]: I0910 00:48:54.509519 2656 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 00:48:54.509589 containerd[1572]: time="2025-09-10T00:48:54.509302375Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 00:48:55.641699 kubelet[2656]: I0910 00:48:55.641645 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smh4k\" (UniqueName: \"kubernetes.io/projected/03dab171-4e12-45bb-9836-1499e82a9bf3-kube-api-access-smh4k\") pod \"tigera-operator-58fc44c59b-z52wg\" (UID: \"03dab171-4e12-45bb-9836-1499e82a9bf3\") " pod="tigera-operator/tigera-operator-58fc44c59b-z52wg" Sep 10 00:48:55.641699 kubelet[2656]: I0910 00:48:55.641702 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsk99\" (UniqueName: \"kubernetes.io/projected/1cfff290-facb-4ce8-96ad-fbc4a40b2ad2-kube-api-access-wsk99\") pod \"kube-proxy-5r9xz\" (UID: \"1cfff290-facb-4ce8-96ad-fbc4a40b2ad2\") " pod="kube-system/kube-proxy-5r9xz" Sep 10 00:48:55.642231 kubelet[2656]: I0910 00:48:55.641746 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/03dab171-4e12-45bb-9836-1499e82a9bf3-var-lib-calico\") pod \"tigera-operator-58fc44c59b-z52wg\" (UID: \"03dab171-4e12-45bb-9836-1499e82a9bf3\") " pod="tigera-operator/tigera-operator-58fc44c59b-z52wg" Sep 10 00:48:55.642231 kubelet[2656]: I0910 00:48:55.641812 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1cfff290-facb-4ce8-96ad-fbc4a40b2ad2-kube-proxy\") pod \"kube-proxy-5r9xz\" (UID: \"1cfff290-facb-4ce8-96ad-fbc4a40b2ad2\") " pod="kube-system/kube-proxy-5r9xz" Sep 10 00:48:55.642231 kubelet[2656]: I0910 00:48:55.641837 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1cfff290-facb-4ce8-96ad-fbc4a40b2ad2-xtables-lock\") pod \"kube-proxy-5r9xz\" (UID: \"1cfff290-facb-4ce8-96ad-fbc4a40b2ad2\") " pod="kube-system/kube-proxy-5r9xz" Sep 10 00:48:55.642231 kubelet[2656]: I0910 00:48:55.641858 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1cfff290-facb-4ce8-96ad-fbc4a40b2ad2-lib-modules\") pod \"kube-proxy-5r9xz\" (UID: \"1cfff290-facb-4ce8-96ad-fbc4a40b2ad2\") " pod="kube-system/kube-proxy-5r9xz" Sep 10 00:48:55.919985 kubelet[2656]: E0910 00:48:55.919841 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:55.920630 containerd[1572]: time="2025-09-10T00:48:55.920562587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5r9xz,Uid:1cfff290-facb-4ce8-96ad-fbc4a40b2ad2,Namespace:kube-system,Attempt:0,}" Sep 10 00:48:55.926166 containerd[1572]: time="2025-09-10T00:48:55.926129984Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-z52wg,Uid:03dab171-4e12-45bb-9836-1499e82a9bf3,Namespace:tigera-operator,Attempt:0,}" Sep 10 00:48:56.041640 containerd[1572]: time="2025-09-10T00:48:56.041496408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:48:56.041640 containerd[1572]: time="2025-09-10T00:48:56.041571608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:48:56.041640 containerd[1572]: time="2025-09-10T00:48:56.041617181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:48:56.041882 containerd[1572]: time="2025-09-10T00:48:56.041726338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:48:56.042102 containerd[1572]: time="2025-09-10T00:48:56.042027570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:48:56.042102 containerd[1572]: time="2025-09-10T00:48:56.042076300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:48:56.042175 containerd[1572]: time="2025-09-10T00:48:56.042102781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:48:56.042268 containerd[1572]: time="2025-09-10T00:48:56.042225659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:48:56.091265 containerd[1572]: time="2025-09-10T00:48:56.091205064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5r9xz,Uid:1cfff290-facb-4ce8-96ad-fbc4a40b2ad2,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdc31ae26262562a7de2e1229f98813f9710144d47d499193c37691020acfa02\"" Sep 10 00:48:56.092858 kubelet[2656]: E0910 00:48:56.092099 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:56.094970 containerd[1572]: time="2025-09-10T00:48:56.094932632Z" level=info msg="CreateContainer within sandbox \"cdc31ae26262562a7de2e1229f98813f9710144d47d499193c37691020acfa02\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 00:48:56.110665 containerd[1572]: time="2025-09-10T00:48:56.110617909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-z52wg,Uid:03dab171-4e12-45bb-9836-1499e82a9bf3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"859868409945866eb356ff71005cc98bfbc989b80ab37268781440d01cab9958\"" Sep 10 00:48:56.112223 containerd[1572]: time="2025-09-10T00:48:56.112176968Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 10 00:48:56.117481 containerd[1572]: time="2025-09-10T00:48:56.117437336Z" level=info msg="CreateContainer within sandbox \"cdc31ae26262562a7de2e1229f98813f9710144d47d499193c37691020acfa02\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7041b6dc2aebc89dc9cbf9b91804d5efa53692c5b3c2d34014db1e5051dd4870\"" Sep 10 00:48:56.119279 containerd[1572]: time="2025-09-10T00:48:56.118056397Z" level=info msg="StartContainer for 
\"7041b6dc2aebc89dc9cbf9b91804d5efa53692c5b3c2d34014db1e5051dd4870\"" Sep 10 00:48:56.220673 containerd[1572]: time="2025-09-10T00:48:56.220506445Z" level=info msg="StartContainer for \"7041b6dc2aebc89dc9cbf9b91804d5efa53692c5b3c2d34014db1e5051dd4870\" returns successfully" Sep 10 00:48:56.703539 update_engine[1553]: I20250910 00:48:56.703403 1553 update_attempter.cc:509] Updating boot flags... Sep 10 00:48:56.732298 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2965) Sep 10 00:48:56.776332 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2964) Sep 10 00:48:56.818279 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2964) Sep 10 00:48:57.153057 kubelet[2656]: E0910 00:48:57.153021 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:57.166174 kubelet[2656]: I0910 00:48:57.164272 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5r9xz" podStartSLOduration=2.164250086 podStartE2EDuration="2.164250086s" podCreationTimestamp="2025-09-10 00:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:48:57.16335642 +0000 UTC m=+8.136246641" watchObservedRunningTime="2025-09-10 00:48:57.164250086 +0000 UTC m=+8.137140307" Sep 10 00:48:57.289142 kubelet[2656]: E0910 00:48:57.289091 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:57.471133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1310400713.mount: Deactivated successfully. 
Sep 10 00:48:57.830426 containerd[1572]: time="2025-09-10T00:48:57.830367419Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:57.831366 containerd[1572]: time="2025-09-10T00:48:57.831325280Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 10 00:48:57.832676 containerd[1572]: time="2025-09-10T00:48:57.832638878Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:57.834883 containerd[1572]: time="2025-09-10T00:48:57.834834216Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:48:57.835573 containerd[1572]: time="2025-09-10T00:48:57.835530478Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.723298987s" Sep 10 00:48:57.835573 containerd[1572]: time="2025-09-10T00:48:57.835562189Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 10 00:48:57.837573 containerd[1572]: time="2025-09-10T00:48:57.837544880Z" level=info msg="CreateContainer within sandbox \"859868409945866eb356ff71005cc98bfbc989b80ab37268781440d01cab9958\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 10 00:48:57.849682 containerd[1572]: time="2025-09-10T00:48:57.849629264Z" level=info msg="CreateContainer within sandbox \"859868409945866eb356ff71005cc98bfbc989b80ab37268781440d01cab9958\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fbd5694d6c5b41e71c9b07a6bed6293c5b5a5526a0375f0d7f957862b5c48a0d\"" Sep 10 00:48:57.850218 containerd[1572]: time="2025-09-10T00:48:57.850187017Z" level=info msg="StartContainer for \"fbd5694d6c5b41e71c9b07a6bed6293c5b5a5526a0375f0d7f957862b5c48a0d\"" Sep 10 00:48:57.916636 containerd[1572]: time="2025-09-10T00:48:57.916582771Z" level=info msg="StartContainer for \"fbd5694d6c5b41e71c9b07a6bed6293c5b5a5526a0375f0d7f957862b5c48a0d\" returns successfully" Sep 10 00:48:58.156529 kubelet[2656]: E0910 00:48:58.156380 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:58.156529 kubelet[2656]: E0910 00:48:58.156462 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:00.233615 kubelet[2656]: E0910 00:49:00.233559 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:00.266763 kubelet[2656]: I0910 00:49:00.264828 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-z52wg" podStartSLOduration=3.5400376639999998 
podStartE2EDuration="5.264809879s" podCreationTimestamp="2025-09-10 00:48:55 +0000 UTC" firstStartedPulling="2025-09-10 00:48:56.111627665 +0000 UTC m=+7.084517876" lastFinishedPulling="2025-09-10 00:48:57.83639987 +0000 UTC m=+8.809290091" observedRunningTime="2025-09-10 00:48:58.338792502 +0000 UTC m=+9.311682723" watchObservedRunningTime="2025-09-10 00:49:00.264809879 +0000 UTC m=+11.237700100" Sep 10 00:49:02.774126 kubelet[2656]: E0910 00:49:02.774079 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:03.515751 sudo[1765]: pam_unix(sudo:session): session closed for user root Sep 10 00:49:03.520541 sshd[1758]: pam_unix(sshd:session): session closed for user core Sep 10 00:49:03.533721 systemd[1]: sshd@6-10.0.0.156:22-10.0.0.1:45380.service: Deactivated successfully. Sep 10 00:49:03.533862 systemd-logind[1548]: Session 7 logged out. Waiting for processes to exit. Sep 10 00:49:03.536514 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 00:49:03.537742 systemd-logind[1548]: Removed session 7. Sep 10 00:49:08.627297 kubelet[2656]: I0910 00:49:08.627139 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a337c1a-2836-4dc7-90b0-5b25b03f6457-tigera-ca-bundle\") pod \"calico-typha-699ddbdfdc-m4l6d\" (UID: \"9a337c1a-2836-4dc7-90b0-5b25b03f6457\") " pod="calico-system/calico-typha-699ddbdfdc-m4l6d" Sep 10 00:49:08.627297 kubelet[2656]: I0910 00:49:08.627223 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9a337c1a-2836-4dc7-90b0-5b25b03f6457-typha-certs\") pod \"calico-typha-699ddbdfdc-m4l6d\" (UID: \"9a337c1a-2836-4dc7-90b0-5b25b03f6457\") " pod="calico-system/calico-typha-699ddbdfdc-m4l6d" Sep 10 00:49:08.627297 kubelet[2656]: I0910 00:49:08.627283 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvr7\" (UniqueName: \"kubernetes.io/projected/9a337c1a-2836-4dc7-90b0-5b25b03f6457-kube-api-access-9nvr7\") pod \"calico-typha-699ddbdfdc-m4l6d\" (UID: \"9a337c1a-2836-4dc7-90b0-5b25b03f6457\") " pod="calico-system/calico-typha-699ddbdfdc-m4l6d" Sep 10 00:49:08.860933 kubelet[2656]: E0910 00:49:08.860869 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:08.863694 containerd[1572]: time="2025-09-10T00:49:08.863634104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-699ddbdfdc-m4l6d,Uid:9a337c1a-2836-4dc7-90b0-5b25b03f6457,Namespace:calico-system,Attempt:0,}" Sep 10 00:49:09.060568 containerd[1572]: time="2025-09-10T00:49:09.059468009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:49:09.060568 containerd[1572]: time="2025-09-10T00:49:09.059567046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:49:09.060568 containerd[1572]: time="2025-09-10T00:49:09.059630358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:09.064317 containerd[1572]: time="2025-09-10T00:49:09.060667066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:09.129906 kubelet[2656]: I0910 00:49:09.129852 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/17fd38e9-bf4f-4d30-8363-d1539f8edd17-cni-net-dir\") pod \"calico-node-mpv6l\" (UID: \"17fd38e9-bf4f-4d30-8363-d1539f8edd17\") " pod="calico-system/calico-node-mpv6l" Sep 10 00:49:09.129906 kubelet[2656]: I0910 00:49:09.129898 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/17fd38e9-bf4f-4d30-8363-d1539f8edd17-node-certs\") pod \"calico-node-mpv6l\" (UID: \"17fd38e9-bf4f-4d30-8363-d1539f8edd17\") " pod="calico-system/calico-node-mpv6l" Sep 10 00:49:09.129906 kubelet[2656]: I0910 00:49:09.129914 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/17fd38e9-bf4f-4d30-8363-d1539f8edd17-cni-log-dir\") pod \"calico-node-mpv6l\" (UID: \"17fd38e9-bf4f-4d30-8363-d1539f8edd17\") " pod="calico-system/calico-node-mpv6l" Sep 10 00:49:09.130091 kubelet[2656]: I0910 00:49:09.129931 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/17fd38e9-bf4f-4d30-8363-d1539f8edd17-flexvol-driver-host\") pod \"calico-node-mpv6l\" (UID: \"17fd38e9-bf4f-4d30-8363-d1539f8edd17\") " pod="calico-system/calico-node-mpv6l" Sep 10 00:49:09.130091 kubelet[2656]: I0910 00:49:09.129949 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17fd38e9-bf4f-4d30-8363-d1539f8edd17-xtables-lock\") pod \"calico-node-mpv6l\" (UID: \"17fd38e9-bf4f-4d30-8363-d1539f8edd17\") " pod="calico-system/calico-node-mpv6l" Sep 10 00:49:09.130091 kubelet[2656]: I0910 00:49:09.129965 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/17fd38e9-bf4f-4d30-8363-d1539f8edd17-var-run-calico\") pod \"calico-node-mpv6l\" (UID: \"17fd38e9-bf4f-4d30-8363-d1539f8edd17\") " pod="calico-system/calico-node-mpv6l" Sep 10 00:49:09.130091 kubelet[2656]: I0910 00:49:09.129982 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs7dh\" (UniqueName: \"kubernetes.io/projected/17fd38e9-bf4f-4d30-8363-d1539f8edd17-kube-api-access-xs7dh\") pod \"calico-node-mpv6l\" (UID: \"17fd38e9-bf4f-4d30-8363-d1539f8edd17\") " pod="calico-system/calico-node-mpv6l" Sep 10 00:49:09.130091 kubelet[2656]: I0910 00:49:09.129997 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/17fd38e9-bf4f-4d30-8363-d1539f8edd17-var-lib-calico\") pod \"calico-node-mpv6l\" (UID: \"17fd38e9-bf4f-4d30-8363-d1539f8edd17\") " pod="calico-system/calico-node-mpv6l" Sep 10 00:49:09.130212 kubelet[2656]: I0910 00:49:09.130013 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/17fd38e9-bf4f-4d30-8363-d1539f8edd17-lib-modules\") pod \"calico-node-mpv6l\" (UID: \"17fd38e9-bf4f-4d30-8363-d1539f8edd17\") " pod="calico-system/calico-node-mpv6l" Sep 10 00:49:09.130212 kubelet[2656]: I0910 00:49:09.130028 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/17fd38e9-bf4f-4d30-8363-d1539f8edd17-policysync\") pod \"calico-node-mpv6l\" (UID: \"17fd38e9-bf4f-4d30-8363-d1539f8edd17\") " pod="calico-system/calico-node-mpv6l" Sep 10 00:49:09.130212 kubelet[2656]: I0910 00:49:09.130042 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17fd38e9-bf4f-4d30-8363-d1539f8edd17-tigera-ca-bundle\") pod \"calico-node-mpv6l\" (UID: \"17fd38e9-bf4f-4d30-8363-d1539f8edd17\") " pod="calico-system/calico-node-mpv6l" Sep 10 00:49:09.130212 kubelet[2656]: I0910 00:49:09.130057 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/17fd38e9-bf4f-4d30-8363-d1539f8edd17-cni-bin-dir\") pod \"calico-node-mpv6l\" (UID: \"17fd38e9-bf4f-4d30-8363-d1539f8edd17\") " pod="calico-system/calico-node-mpv6l" Sep 10 00:49:09.142509 containerd[1572]: time="2025-09-10T00:49:09.142448911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-699ddbdfdc-m4l6d,Uid:9a337c1a-2836-4dc7-90b0-5b25b03f6457,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f3e81475ace547c886392e9048e70802c5b36676b155f6df53cdb2733ae2dca\"" Sep 10 00:49:09.147272 kubelet[2656]: E0910 00:49:09.147200 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:09.153285 containerd[1572]: time="2025-09-10T00:49:09.153201026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 10 00:49:09.227018 kubelet[2656]: E0910 00:49:09.226963 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdgls" podUID="0dcf04d3-5b02-40dc-800c-0985ae063919" Sep 10 00:49:09.230512 kubelet[2656]: I0910 00:49:09.230482 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0dcf04d3-5b02-40dc-800c-0985ae063919-kubelet-dir\") pod \"csi-node-driver-wdgls\" (UID: \"0dcf04d3-5b02-40dc-800c-0985ae063919\") " pod="calico-system/csi-node-driver-wdgls" Sep 10 00:49:09.230656 kubelet[2656]: I0910 00:49:09.230530 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4nhz\" (UniqueName: \"kubernetes.io/projected/0dcf04d3-5b02-40dc-800c-0985ae063919-kube-api-access-f4nhz\") pod \"csi-node-driver-wdgls\" (UID: \"0dcf04d3-5b02-40dc-800c-0985ae063919\") " pod="calico-system/csi-node-driver-wdgls" Sep 10 00:49:09.231381 kubelet[2656]: I0910 00:49:09.230960 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0dcf04d3-5b02-40dc-800c-0985ae063919-registration-dir\") pod \"csi-node-driver-wdgls\" (UID: 
\"0dcf04d3-5b02-40dc-800c-0985ae063919\") " pod="calico-system/csi-node-driver-wdgls" Sep 10 00:49:09.231381 kubelet[2656]: I0910 00:49:09.230990 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0dcf04d3-5b02-40dc-800c-0985ae063919-socket-dir\") pod \"csi-node-driver-wdgls\" (UID: \"0dcf04d3-5b02-40dc-800c-0985ae063919\") " pod="calico-system/csi-node-driver-wdgls" Sep 10 00:49:09.231977 kubelet[2656]: E0910 00:49:09.231804 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.231977 kubelet[2656]: W0910 00:49:09.231825 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.231977 kubelet[2656]: E0910 00:49:09.231857 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.232578 kubelet[2656]: E0910 00:49:09.232402 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.232578 kubelet[2656]: W0910 00:49:09.232440 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.232578 kubelet[2656]: E0910 00:49:09.232532 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.235340 kubelet[2656]: E0910 00:49:09.234178 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.235340 kubelet[2656]: W0910 00:49:09.234211 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.235659 kubelet[2656]: E0910 00:49:09.235405 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.237497 kubelet[2656]: E0910 00:49:09.237459 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.237497 kubelet[2656]: W0910 00:49:09.237490 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.237648 kubelet[2656]: E0910 00:49:09.237550 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:49:09.239286 kubelet[2656]: E0910 00:49:09.238253 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.239286 kubelet[2656]: W0910 00:49:09.238271 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.239286 kubelet[2656]: E0910 00:49:09.238910 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.240708 kubelet[2656]: E0910 00:49:09.240685 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.240708 kubelet[2656]: W0910 00:49:09.240704 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.240919 kubelet[2656]: E0910 00:49:09.240876 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.241029 kubelet[2656]: E0910 00:49:09.241011 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.241029 kubelet[2656]: W0910 00:49:09.241023 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.241092 kubelet[2656]: E0910 00:49:09.241061 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.241314 kubelet[2656]: E0910 00:49:09.241288 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.241314 kubelet[2656]: W0910 00:49:09.241311 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.241521 kubelet[2656]: E0910 00:49:09.241449 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.242168 kubelet[2656]: E0910 00:49:09.241966 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.242168 kubelet[2656]: W0910 00:49:09.241980 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.242168 kubelet[2656]: E0910 00:49:09.242117 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:49:09.243079 kubelet[2656]: E0910 00:49:09.242410 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.243079 kubelet[2656]: W0910 00:49:09.242432 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.243079 kubelet[2656]: E0910 00:49:09.242721 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.244552 kubelet[2656]: E0910 00:49:09.244449 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.244552 kubelet[2656]: W0910 00:49:09.244474 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.244552 kubelet[2656]: E0910 00:49:09.244519 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.244819 kubelet[2656]: I0910 00:49:09.244609 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0dcf04d3-5b02-40dc-800c-0985ae063919-varrun\") pod \"csi-node-driver-wdgls\" (UID: \"0dcf04d3-5b02-40dc-800c-0985ae063919\") " pod="calico-system/csi-node-driver-wdgls" Sep 10 00:49:09.245180 kubelet[2656]: E0910 00:49:09.245162 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.245180 kubelet[2656]: W0910 00:49:09.245175 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.245302 kubelet[2656]: E0910 00:49:09.245211 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.245528 kubelet[2656]: E0910 00:49:09.245509 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.245602 kubelet[2656]: W0910 00:49:09.245527 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.245602 kubelet[2656]: E0910 00:49:09.245571 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:49:09.245909 kubelet[2656]: E0910 00:49:09.245891 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.245909 kubelet[2656]: W0910 00:49:09.245904 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.246006 kubelet[2656]: E0910 00:49:09.245927 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.246170 kubelet[2656]: E0910 00:49:09.246153 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.246170 kubelet[2656]: W0910 00:49:09.246168 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.246267 kubelet[2656]: E0910 00:49:09.246187 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.246524 kubelet[2656]: E0910 00:49:09.246496 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.246524 kubelet[2656]: W0910 00:49:09.246516 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.246614 kubelet[2656]: E0910 00:49:09.246535 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.247092 kubelet[2656]: E0910 00:49:09.246964 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.247092 kubelet[2656]: W0910 00:49:09.246980 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.247092 kubelet[2656]: E0910 00:49:09.247012 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.247308 kubelet[2656]: E0910 00:49:09.247288 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.247308 kubelet[2656]: W0910 00:49:09.247303 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.247534 kubelet[2656]: E0910 00:49:09.247412 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:49:09.247671 kubelet[2656]: E0910 00:49:09.247653 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.247671 kubelet[2656]: W0910 00:49:09.247668 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.247793 kubelet[2656]: E0910 00:49:09.247763 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.248108 kubelet[2656]: E0910 00:49:09.247975 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.248108 kubelet[2656]: W0910 00:49:09.247991 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.248108 kubelet[2656]: E0910 00:49:09.248040 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.248323 kubelet[2656]: E0910 00:49:09.248306 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.248412 kubelet[2656]: W0910 00:49:09.248397 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.248517 kubelet[2656]: E0910 00:49:09.248490 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.248786 kubelet[2656]: E0910 00:49:09.248770 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.248968 kubelet[2656]: W0910 00:49:09.248846 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.248968 kubelet[2656]: E0910 00:49:09.248878 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.249125 kubelet[2656]: E0910 00:49:09.249108 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.249199 kubelet[2656]: W0910 00:49:09.249184 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.249326 kubelet[2656]: E0910 00:49:09.249300 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:49:09.249639 kubelet[2656]: E0910 00:49:09.249613 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.249762 kubelet[2656]: W0910 00:49:09.249746 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.249943 kubelet[2656]: E0910 00:49:09.249866 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.250123 kubelet[2656]: E0910 00:49:09.250105 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.250123 kubelet[2656]: W0910 00:49:09.250121 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.250317 kubelet[2656]: E0910 00:49:09.250231 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.250440 kubelet[2656]: E0910 00:49:09.250425 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.250489 kubelet[2656]: W0910 00:49:09.250439 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.250489 kubelet[2656]: E0910 00:49:09.250468 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.250692 kubelet[2656]: E0910 00:49:09.250675 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.250692 kubelet[2656]: W0910 00:49:09.250689 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.250849 kubelet[2656]: E0910 00:49:09.250806 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.251010 kubelet[2656]: E0910 00:49:09.250928 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.251010 kubelet[2656]: W0910 00:49:09.250942 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.251083 kubelet[2656]: E0910 00:49:09.251034 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:49:09.251723 kubelet[2656]: E0910 00:49:09.251299 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.251723 kubelet[2656]: W0910 00:49:09.251314 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.251723 kubelet[2656]: E0910 00:49:09.251394 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.252541 kubelet[2656]: E0910 00:49:09.252522 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.252541 kubelet[2656]: W0910 00:49:09.252538 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.252679 kubelet[2656]: E0910 00:49:09.252661 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.252930 kubelet[2656]: E0910 00:49:09.252902 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.252930 kubelet[2656]: W0910 00:49:09.252920 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.253034 kubelet[2656]: E0910 00:49:09.252966 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.253520 kubelet[2656]: E0910 00:49:09.253205 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.253520 kubelet[2656]: W0910 00:49:09.253222 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.253520 kubelet[2656]: E0910 00:49:09.253281 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.253679 kubelet[2656]: E0910 00:49:09.253632 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.253679 kubelet[2656]: W0910 00:49:09.253649 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.254202 kubelet[2656]: E0910 00:49:09.253784 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:49:09.254202 kubelet[2656]: E0910 00:49:09.253917 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.254202 kubelet[2656]: W0910 00:49:09.253926 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.254202 kubelet[2656]: E0910 00:49:09.253963 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.254202 kubelet[2656]: E0910 00:49:09.254193 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.254202 kubelet[2656]: W0910 00:49:09.254205 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.254458 kubelet[2656]: E0910 00:49:09.254220 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.255438 kubelet[2656]: E0910 00:49:09.255370 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.255438 kubelet[2656]: W0910 00:49:09.255387 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.255438 kubelet[2656]: E0910 00:49:09.255400 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.259103 kubelet[2656]: E0910 00:49:09.259056 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.259103 kubelet[2656]: W0910 00:49:09.259075 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.259103 kubelet[2656]: E0910 00:49:09.259092 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.347203 kubelet[2656]: E0910 00:49:09.347066 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.347203 kubelet[2656]: W0910 00:49:09.347093 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.347203 kubelet[2656]: E0910 00:49:09.347117 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:49:09.348442 kubelet[2656]: E0910 00:49:09.348397 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.348442 kubelet[2656]: W0910 00:49:09.348430 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.348442 kubelet[2656]: E0910 00:49:09.348443 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.348846 kubelet[2656]: E0910 00:49:09.348682 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.348846 kubelet[2656]: W0910 00:49:09.348721 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.348846 kubelet[2656]: E0910 00:49:09.348732 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.349484 kubelet[2656]: E0910 00:49:09.349441 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.349484 kubelet[2656]: W0910 00:49:09.349457 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.349701 kubelet[2656]: E0910 00:49:09.349670 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.349992 kubelet[2656]: E0910 00:49:09.349971 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.350052 kubelet[2656]: W0910 00:49:09.349990 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.350336 kubelet[2656]: E0910 00:49:09.350132 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.350583 kubelet[2656]: E0910 00:49:09.350566 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.350583 kubelet[2656]: W0910 00:49:09.350582 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.350682 kubelet[2656]: E0910 00:49:09.350601 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:49:09.351189 kubelet[2656]: E0910 00:49:09.350929 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.351189 kubelet[2656]: W0910 00:49:09.350950 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.351189 kubelet[2656]: E0910 00:49:09.350991 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.353268 kubelet[2656]: E0910 00:49:09.353220 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.353324 kubelet[2656]: W0910 00:49:09.353290 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.353572 kubelet[2656]: E0910 00:49:09.353445 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.353640 kubelet[2656]: E0910 00:49:09.353603 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.353640 kubelet[2656]: W0910 00:49:09.353614 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.353773 kubelet[2656]: E0910 00:49:09.353727 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.354003 kubelet[2656]: E0910 00:49:09.353982 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.354070 kubelet[2656]: W0910 00:49:09.354006 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.354070 kubelet[2656]: E0910 00:49:09.354050 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.354365 kubelet[2656]: E0910 00:49:09.354335 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.354365 kubelet[2656]: W0910 00:49:09.354360 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.354474 kubelet[2656]: E0910 00:49:09.354419 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:49:09.354615 kubelet[2656]: E0910 00:49:09.354597 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.354615 kubelet[2656]: W0910 00:49:09.354611 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.354696 kubelet[2656]: E0910 00:49:09.354648 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.354913 kubelet[2656]: E0910 00:49:09.354879 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.354913 kubelet[2656]: W0910 00:49:09.354895 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.354990 kubelet[2656]: E0910 00:49:09.354977 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.355212 kubelet[2656]: E0910 00:49:09.355193 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.355212 kubelet[2656]: W0910 00:49:09.355206 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.355324 kubelet[2656]: E0910 00:49:09.355284 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.355539 kubelet[2656]: E0910 00:49:09.355516 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.355539 kubelet[2656]: W0910 00:49:09.355531 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.355612 kubelet[2656]: E0910 00:49:09.355567 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:49:09.355856 kubelet[2656]: E0910 00:49:09.355825 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:49:09.355856 kubelet[2656]: W0910 00:49:09.355845 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:49:09.356059 kubelet[2656]: E0910 00:49:09.356040 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 10 00:49:09.356206 kubelet[2656]: E0910 00:49:09.356186 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 00:49:09.356206 kubelet[2656]: W0910 00:49:09.356201 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 00:49:09.356366 kubelet[2656]: E0910 00:49:09.356292 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 00:49:09.357469 containerd[1572]: time="2025-09-10T00:49:09.357427884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mpv6l,Uid:17fd38e9-bf4f-4d30-8363-d1539f8edd17,Namespace:calico-system,Attempt:0,}"
Sep 10 00:49:09.418209 containerd[1572]: time="2025-09-10T00:49:09.417985906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:49:09.418209 containerd[1572]: time="2025-09-10T00:49:09.418049038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:49:09.418209 containerd[1572]: time="2025-09-10T00:49:09.418063498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:49:09.418638 containerd[1572]: time="2025-09-10T00:49:09.418570438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:49:09.456280 containerd[1572]: time="2025-09-10T00:49:09.456172927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mpv6l,Uid:17fd38e9-bf4f-4d30-8363-d1539f8edd17,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae3ae09167731d246ab503fe3a5159f25c8d0260bdc61561df13f29d2e9320d9\""
Sep 10 00:49:11.117421 kubelet[2656]: E0910 00:49:11.117346 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdgls" podUID="0dcf04d3-5b02-40dc-800c-0985ae063919"
Sep 10 00:49:12.430527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3508663584.mount: Deactivated successfully.
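The repeating driver-call.go/plugins.go triplet above is the kubelet probing the FlexVolume plugin directory nodeagent~uds: it execs the driver binary with the argument init and unmarshals stdout as JSON, so a missing executable produces empty output and the "unexpected end of JSON input" error. A minimal sketch of the handshake such a driver must implement (illustrative only, not the actual nodeagent~uds driver; the FlexVolume convention is that every call prints a JSON status object to stdout):

```go
// flexvolume_stub.go - hypothetical stand-in for
// /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
// The kubelet runs "<driver> init" and parses stdout as JSON; an empty
// stdout is exactly what produces "unexpected end of JSON input" above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the FlexVolume driver-call expects.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus, code int) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
	os.Exit(code)
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command"}, 1)
	}
	switch os.Args[1] {
	case "init":
		// Report success and opt out of controller attach/detach.
		reply(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		}, 0)
	default:
		// Unimplemented calls must still answer with valid JSON.
		reply(driverStatus{Status: "Not supported"}, 1)
	}
}
```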
Sep 10 00:49:12.945832 containerd[1572]: time="2025-09-10T00:49:12.945773146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:12.947076 containerd[1572]: time="2025-09-10T00:49:12.947030073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 10 00:49:12.948275 containerd[1572]: time="2025-09-10T00:49:12.948224874Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:12.950945 containerd[1572]: time="2025-09-10T00:49:12.950910044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:12.952008 containerd[1572]: time="2025-09-10T00:49:12.951950725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.798710838s" Sep 10 00:49:12.952008 containerd[1572]: time="2025-09-10T00:49:12.952003063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 10 00:49:12.955760 containerd[1572]: time="2025-09-10T00:49:12.955727903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 10 00:49:12.977175 containerd[1572]: time="2025-09-10T00:49:12.977108326Z" level=info msg="CreateContainer within sandbox \"7f3e81475ace547c886392e9048e70802c5b36676b155f6df53cdb2733ae2dca\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 10 00:49:12.992851 containerd[1572]: time="2025-09-10T00:49:12.992795442Z" level=info msg="CreateContainer within sandbox \"7f3e81475ace547c886392e9048e70802c5b36676b155f6df53cdb2733ae2dca\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b6d086ef256435b3433a9f5caad675f4ff45f6862a698ca7ad7b33dae5452644\"" Sep 10 00:49:12.997399 containerd[1572]: time="2025-09-10T00:49:12.997298329Z" level=info msg="StartContainer for \"b6d086ef256435b3433a9f5caad675f4ff45f6862a698ca7ad7b33dae5452644\"" Sep 10 00:49:13.425843 kubelet[2656]: E0910 00:49:13.425117 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdgls" podUID="0dcf04d3-5b02-40dc-800c-0985ae063919" Sep 10 00:49:13.428842 containerd[1572]: time="2025-09-10T00:49:13.428777521Z" level=info msg="StartContainer for \"b6d086ef256435b3433a9f5caad675f4ff45f6862a698ca7ad7b33dae5452644\" returns successfully" Sep 10 00:49:13.443712 kubelet[2656]: E0910 00:49:13.443675 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:13.449790 kubelet[2656]: E0910 00:49:13.449749 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input
Sep 10 00:49:13.449790 kubelet[2656]: W0910 00:49:13.449784 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 00:49:13.450061 kubelet[2656]: E0910 00:49:13.449811 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
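The interleaved dns.go:153 "Nameserver limits exceeded" entries are a separate warning: the node's resolv.conf lists more nameservers than the kubelet will propagate to pods, so it applies only the first three (the classic glibc resolver limit) and logs the rest as omitted. A stand-alone sketch of that truncation, assuming the three-nameserver cap; this is not the kubelet's actual code:

```go
// resolv_cap.go - hypothetical re-implementation of the nameserver cap
// behind the "Nameserver limits exceeded" log entries above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; assumed to match the kubelet's cap

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("omitting %d nameserver(s)\n", len(servers)-maxNameservers)
		servers = servers[:maxNameservers]
	}
	// With the resolv.conf from the log this prints:
	// applied nameserver line: 1.1.1.1 1.0.0.1 8.8.8.8
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```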
Sep 10 00:49:13.718078 kubelet[2656]: I0910 00:49:13.717927 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-699ddbdfdc-m4l6d" podStartSLOduration=1.9105821939999998 podStartE2EDuration="5.717909508s" podCreationTimestamp="2025-09-10 00:49:08 +0000 UTC" firstStartedPulling="2025-09-10 00:49:09.148259246 +0000 UTC m=+20.121149467" lastFinishedPulling="2025-09-10 00:49:12.95558656 +0000 UTC m=+23.928476781" observedRunningTime="2025-09-10 00:49:13.714416572 +0000 UTC m=+24.687306793" watchObservedRunningTime="2025-09-10 00:49:13.717909508 +0000 UTC m=+24.690799729"
Sep 10 00:49:14.445070 kubelet[2656]: I0910 00:49:14.445035 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 00:49:14.445725 kubelet[2656]: E0910 00:49:14.445434 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:49:14.465256 kubelet[2656]: E0910 00:49:14.465170 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 00:49:14.465256 kubelet[2656]: W0910 00:49:14.465197 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 00:49:14.465256 kubelet[2656]: E0910 00:49:14.465231 2656 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
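The pod_startup_latency_tracker entry above for calico-typha-699ddbdfdc-m4l6d is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (00:49:13.717909508 minus 00:49:08 = 5.717909508s), and podStartSLOduration subtracts the image-pull window, 5.717909508s minus (12.95558656 minus 9.148259246) = 1.910582194s, which the kubelet prints as the float 1.9105821939999998. A quick check with the timestamps copied from that entry:

```go
// startup_latency_check.go - recomputes the calico-typha startup
// durations from the timestamps in the log entry above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Same layout the kubelet prints: "2025-09-10 00:49:08 +0000 UTC".
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-10 00:49:08 +0000 UTC")
	firstPull := mustParse("2025-09-10 00:49:09.148259246 +0000 UTC")
	lastPull := mustParse("2025-09-10 00:49:12.95558656 +0000 UTC")
	running := mustParse("2025-09-10 00:49:13.717909508 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration: 5.717909508s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 1.910582194s
	fmt.Println("e2e:", e2e, "slo:", slo)
}
```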
Sep 10 00:49:14.747162 containerd[1572]: time="2025-09-10T00:49:14.746991035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:49:14.748806 containerd[1572]: time="2025-09-10T00:49:14.748599148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 10 00:49:14.751328 containerd[1572]: time="2025-09-10T00:49:14.751228959Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:49:14.754535 containerd[1572]: time="2025-09-10T00:49:14.753657518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:49:14.754535 containerd[1572]: time="2025-09-10T00:49:14.754395303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.79845465s"
Sep 10 00:49:14.754535 containerd[1572]: time="2025-09-10T00:49:14.754427007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 10 00:49:14.756932 containerd[1572]: time="2025-09-10T00:49:14.756900749Z" level=info msg="CreateContainer within sandbox \"ae3ae09167731d246ab503fe3a5159f25c8d0260bdc61561df13f29d2e9320d9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 10 00:49:14.772960 containerd[1572]: time="2025-09-10T00:49:14.772898937Z" level=info msg="CreateContainer within sandbox \"ae3ae09167731d246ab503fe3a5159f25c8d0260bdc61561df13f29d2e9320d9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e234bada8149a6698c13640ca39a09e3b6dc957fe2236a060b022f7a6e8dbd5b\""
Sep 10 00:49:14.773598 containerd[1572]: time="2025-09-10T00:49:14.773562829Z" level=info msg="StartContainer for \"e234bada8149a6698c13640ca39a09e3b6dc957fe2236a060b022f7a6e8dbd5b\""
Sep 10 00:49:14.859318 containerd[1572]: time="2025-09-10T00:49:14.859217772Z" level=info msg="StartContainer for \"e234bada8149a6698c13640ca39a09e3b6dc957fe2236a060b022f7a6e8dbd5b\" returns successfully"
Sep 10 00:49:14.911490 containerd[1572]: time="2025-09-10T00:49:14.909614589Z" level=info msg="shim disconnected" id=e234bada8149a6698c13640ca39a09e3b6dc957fe2236a060b022f7a6e8dbd5b namespace=k8s.io
Sep 10 00:49:14.911490 containerd[1572]: time="2025-09-10T00:49:14.911485232Z" level=warning msg="cleaning up after shim disconnected" id=e234bada8149a6698c13640ca39a09e3b6dc957fe2236a060b022f7a6e8dbd5b namespace=k8s.io
Sep 10 00:49:14.911490 containerd[1572]: time="2025-09-10T00:49:14.911498589Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:49:15.116847 kubelet[2656]: E0910 00:49:15.116783 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdgls" podUID="0dcf04d3-5b02-40dc-800c-0985ae063919"
Sep 10 00:49:15.448678 kubelet[2656]: E0910 00:49:15.447953 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:49:15.454301 containerd[1572]: time="2025-09-10T00:49:15.453326475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 10 00:49:15.769695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e234bada8149a6698c13640ca39a09e3b6dc957fe2236a060b022f7a6e8dbd5b-rootfs.mount: Deactivated successfully.
Sep 10 00:49:16.449937 kubelet[2656]: E0910 00:49:16.449886 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:49:17.117977 kubelet[2656]: E0910 00:49:17.117908 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdgls" podUID="0dcf04d3-5b02-40dc-800c-0985ae063919"
Sep 10 00:49:18.400733 containerd[1572]: time="2025-09-10T00:49:18.400677991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:49:18.401562 containerd[1572]: time="2025-09-10T00:49:18.401466874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 10 00:49:18.402802 containerd[1572]: time="2025-09-10T00:49:18.402730923Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:49:18.405132 containerd[1572]: time="2025-09-10T00:49:18.405086830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:49:18.405678 containerd[1572]: time="2025-09-10T00:49:18.405645275Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.952271693s"
Sep 10 00:49:18.405716 containerd[1572]: time="2025-09-10T00:49:18.405677420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 10 00:49:18.407702 containerd[1572]: time="2025-09-10T00:49:18.407594946Z" level=info msg="CreateContainer within sandbox \"ae3ae09167731d246ab503fe3a5159f25c8d0260bdc61561df13f29d2e9320d9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 10 00:49:18.425926 containerd[1572]: time="2025-09-10T00:49:18.425863941Z" level=info msg="CreateContainer within sandbox \"ae3ae09167731d246ab503fe3a5159f25c8d0260bdc61561df13f29d2e9320d9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b93f7209e32d3ad133d90221cd262b908f03acde10a6ed2cb7c22279d074b8a3\""
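For reference, the PullImage/ImageCreate sequence above can be reproduced by hand against the same daemon with containerd's 1.x Go client. A sketch, assuming the stock socket path and the CRI "k8s.io" namespace that the shim messages above report:

```go
// pull_cni.go - issues the same image pull shown in the log via the
// containerd Go client (assumed containerd 1.x API).
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the daemon logging as containerd[1572] above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the image the kubelet requested above.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.30.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, _ := image.Size(ctx)
	fmt.Println("pulled", image.Name(), "size", size)
}
```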
Sep 10 00:49:18.426619 containerd[1572]: time="2025-09-10T00:49:18.426549404Z" level=info msg="StartContainer for \"b93f7209e32d3ad133d90221cd262b908f03acde10a6ed2cb7c22279d074b8a3\"" Sep 10 00:49:19.931498 kubelet[2656]: E0910 00:49:19.931196 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wdgls" podUID="0dcf04d3-5b02-40dc-800c-0985ae063919" Sep 10 00:49:20.494817 containerd[1572]: time="2025-09-10T00:49:20.494760320Z" level=info msg="StartContainer for \"b93f7209e32d3ad133d90221cd262b908f03acde10a6ed2cb7c22279d074b8a3\" returns successfully" Sep 10 00:49:20.495367 kubelet[2656]: E0910 00:49:20.495156 2656 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.378s" Sep 10 00:49:20.888351 systemd-journald[1158]: Under memory pressure, flushing caches. Sep 10 00:49:20.866364 systemd-resolved[1461]: Under memory pressure, flushing caches. Sep 10 00:49:20.866412 systemd-resolved[1461]: Flushed all caches. Sep 10 00:49:21.564798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b93f7209e32d3ad133d90221cd262b908f03acde10a6ed2cb7c22279d074b8a3-rootfs.mount: Deactivated successfully. Sep 10 00:49:21.568625 containerd[1572]: time="2025-09-10T00:49:21.568560970Z" level=info msg="shim disconnected" id=b93f7209e32d3ad133d90221cd262b908f03acde10a6ed2cb7c22279d074b8a3 namespace=k8s.io Sep 10 00:49:21.569010 containerd[1572]: time="2025-09-10T00:49:21.568624939Z" level=warning msg="cleaning up after shim disconnected" id=b93f7209e32d3ad133d90221cd262b908f03acde10a6ed2cb7c22279d074b8a3 namespace=k8s.io Sep 10 00:49:21.569010 containerd[1572]: time="2025-09-10T00:49:21.568636363Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:49:21.623748 kubelet[2656]: I0910 00:49:21.623697 2656 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 10 00:49:21.837761 kubelet[2656]: I0910 00:49:21.837591 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05560b5f-0c2f-48f7-b5ef-fa1beda53e1d-tigera-ca-bundle\") pod \"calico-kube-controllers-7b4cc87785-mvnvv\" (UID: \"05560b5f-0c2f-48f7-b5ef-fa1beda53e1d\") " pod="calico-system/calico-kube-controllers-7b4cc87785-mvnvv" Sep 10 00:49:21.837761 kubelet[2656]: I0910 00:49:21.837653 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/27842233-86b6-4b4f-9f34-c14a660239b3-goldmane-key-pair\") pod \"goldmane-7988f88666-cvrzx\" (UID: \"27842233-86b6-4b4f-9f34-c14a660239b3\") " pod="calico-system/goldmane-7988f88666-cvrzx" Sep 10 00:49:21.837761 kubelet[2656]: I0910 00:49:21.837669 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phdds\" (UniqueName: \"kubernetes.io/projected/a6d2bfa2-402e-4e85-8163-cc4a99a0ed75-kube-api-access-phdds\") pod \"coredns-7c65d6cfc9-mmgft\" (UID: \"a6d2bfa2-402e-4e85-8163-cc4a99a0ed75\") " pod="kube-system/coredns-7c65d6cfc9-mmgft" Sep 10 00:49:21.837761 kubelet[2656]: I0910 00:49:21.837685 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v9w5\" (UniqueName: 
\"kubernetes.io/projected/d173b8ad-d1e2-4ab1-adf4-074e01bc5a59-kube-api-access-4v9w5\") pod \"calico-apiserver-6d4bb6c97d-crcn2\" (UID: \"d173b8ad-d1e2-4ab1-adf4-074e01bc5a59\") " pod="calico-apiserver/calico-apiserver-6d4bb6c97d-crcn2" Sep 10 00:49:21.837761 kubelet[2656]: I0910 00:49:21.837703 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mq57\" (UniqueName: \"kubernetes.io/projected/05560b5f-0c2f-48f7-b5ef-fa1beda53e1d-kube-api-access-5mq57\") pod \"calico-kube-controllers-7b4cc87785-mvnvv\" (UID: \"05560b5f-0c2f-48f7-b5ef-fa1beda53e1d\") " pod="calico-system/calico-kube-controllers-7b4cc87785-mvnvv" Sep 10 00:49:21.838198 kubelet[2656]: I0910 00:49:21.837719 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kvnh\" (UniqueName: \"kubernetes.io/projected/27842233-86b6-4b4f-9f34-c14a660239b3-kube-api-access-6kvnh\") pod \"goldmane-7988f88666-cvrzx\" (UID: \"27842233-86b6-4b4f-9f34-c14a660239b3\") " pod="calico-system/goldmane-7988f88666-cvrzx" Sep 10 00:49:21.838198 kubelet[2656]: I0910 00:49:21.837806 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6d2bfa2-402e-4e85-8163-cc4a99a0ed75-config-volume\") pod \"coredns-7c65d6cfc9-mmgft\" (UID: \"a6d2bfa2-402e-4e85-8163-cc4a99a0ed75\") " pod="kube-system/coredns-7c65d6cfc9-mmgft" Sep 10 00:49:21.838198 kubelet[2656]: I0910 00:49:21.837894 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27842233-86b6-4b4f-9f34-c14a660239b3-goldmane-ca-bundle\") pod \"goldmane-7988f88666-cvrzx\" (UID: \"27842233-86b6-4b4f-9f34-c14a660239b3\") " pod="calico-system/goldmane-7988f88666-cvrzx" Sep 10 00:49:21.838198 kubelet[2656]: I0910 00:49:21.837920 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4b15be0-0be9-443e-b54d-5992872c55b9-whisker-backend-key-pair\") pod \"whisker-57d545fcc-n7lfj\" (UID: \"c4b15be0-0be9-443e-b54d-5992872c55b9\") " pod="calico-system/whisker-57d545fcc-n7lfj" Sep 10 00:49:21.838198 kubelet[2656]: I0910 00:49:21.837939 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b9d2\" (UniqueName: \"kubernetes.io/projected/c4b15be0-0be9-443e-b54d-5992872c55b9-kube-api-access-2b9d2\") pod \"whisker-57d545fcc-n7lfj\" (UID: \"c4b15be0-0be9-443e-b54d-5992872c55b9\") " pod="calico-system/whisker-57d545fcc-n7lfj" Sep 10 00:49:21.838382 kubelet[2656]: I0910 00:49:21.837961 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/153fa74c-658f-4fb6-a953-ee6d15f43e30-config-volume\") pod \"coredns-7c65d6cfc9-7hl4n\" (UID: \"153fa74c-658f-4fb6-a953-ee6d15f43e30\") " pod="kube-system/coredns-7c65d6cfc9-7hl4n" Sep 10 00:49:21.838382 kubelet[2656]: I0910 00:49:21.837984 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d173b8ad-d1e2-4ab1-adf4-074e01bc5a59-calico-apiserver-certs\") pod \"calico-apiserver-6d4bb6c97d-crcn2\" (UID: \"d173b8ad-d1e2-4ab1-adf4-074e01bc5a59\") " 
pod="calico-apiserver/calico-apiserver-6d4bb6c97d-crcn2" Sep 10 00:49:21.838382 kubelet[2656]: I0910 00:49:21.838007 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4b15be0-0be9-443e-b54d-5992872c55b9-whisker-ca-bundle\") pod \"whisker-57d545fcc-n7lfj\" (UID: \"c4b15be0-0be9-443e-b54d-5992872c55b9\") " pod="calico-system/whisker-57d545fcc-n7lfj" Sep 10 00:49:21.838382 kubelet[2656]: I0910 00:49:21.838028 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf6jn\" (UniqueName: \"kubernetes.io/projected/84107d46-758d-49e3-9857-f23097a54b8f-kube-api-access-mf6jn\") pod \"calico-apiserver-6d4bb6c97d-rwxfm\" (UID: \"84107d46-758d-49e3-9857-f23097a54b8f\") " pod="calico-apiserver/calico-apiserver-6d4bb6c97d-rwxfm" Sep 10 00:49:21.838382 kubelet[2656]: I0910 00:49:21.838051 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84107d46-758d-49e3-9857-f23097a54b8f-calico-apiserver-certs\") pod \"calico-apiserver-6d4bb6c97d-rwxfm\" (UID: \"84107d46-758d-49e3-9857-f23097a54b8f\") " pod="calico-apiserver/calico-apiserver-6d4bb6c97d-rwxfm" Sep 10 00:49:21.838508 kubelet[2656]: I0910 00:49:21.838097 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27842233-86b6-4b4f-9f34-c14a660239b3-config\") pod \"goldmane-7988f88666-cvrzx\" (UID: \"27842233-86b6-4b4f-9f34-c14a660239b3\") " pod="calico-system/goldmane-7988f88666-cvrzx" Sep 10 00:49:21.838508 kubelet[2656]: I0910 00:49:21.838118 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m8lm\" (UniqueName: \"kubernetes.io/projected/153fa74c-658f-4fb6-a953-ee6d15f43e30-kube-api-access-5m8lm\") pod \"coredns-7c65d6cfc9-7hl4n\" (UID: \"153fa74c-658f-4fb6-a953-ee6d15f43e30\") " pod="kube-system/coredns-7c65d6cfc9-7hl4n" Sep 10 00:49:21.967050 containerd[1572]: time="2025-09-10T00:49:21.964822961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d545fcc-n7lfj,Uid:c4b15be0-0be9-443e-b54d-5992872c55b9,Namespace:calico-system,Attempt:0,}" Sep 10 00:49:21.967050 containerd[1572]: time="2025-09-10T00:49:21.965980238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mmgft,Uid:a6d2bfa2-402e-4e85-8163-cc4a99a0ed75,Namespace:kube-system,Attempt:0,}" Sep 10 00:49:21.971370 kubelet[2656]: E0910 00:49:21.965395 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:21.971661 containerd[1572]: time="2025-09-10T00:49:21.971576406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4bb6c97d-crcn2,Uid:d173b8ad-d1e2-4ab1-adf4-074e01bc5a59,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:49:21.971946 containerd[1572]: time="2025-09-10T00:49:21.971918939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4cc87785-mvnvv,Uid:05560b5f-0c2f-48f7-b5ef-fa1beda53e1d,Namespace:calico-system,Attempt:0,}" Sep 10 00:49:22.125290 containerd[1572]: time="2025-09-10T00:49:22.124193263Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-wdgls,Uid:0dcf04d3-5b02-40dc-800c-0985ae063919,Namespace:calico-system,Attempt:0,}" Sep 10 00:49:22.144807 containerd[1572]: time="2025-09-10T00:49:22.144757336Z" level=error msg="Failed to destroy network for sandbox \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.145521 containerd[1572]: time="2025-09-10T00:49:22.145469411Z" level=error msg="encountered an error cleaning up failed sandbox \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.145521 containerd[1572]: time="2025-09-10T00:49:22.145526816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d545fcc-n7lfj,Uid:c4b15be0-0be9-443e-b54d-5992872c55b9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.146064 containerd[1572]: time="2025-09-10T00:49:22.146018127Z" level=error msg="Failed to destroy network for sandbox \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.147710 containerd[1572]: time="2025-09-10T00:49:22.147636308Z" level=error msg="encountered an error cleaning up failed sandbox \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.147789 containerd[1572]: time="2025-09-10T00:49:22.147728704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mmgft,Uid:a6d2bfa2-402e-4e85-8163-cc4a99a0ed75,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.147910 containerd[1572]: time="2025-09-10T00:49:22.147882764Z" level=error msg="Failed to destroy network for sandbox \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.148262 containerd[1572]: time="2025-09-10T00:49:22.148193140Z" level=error msg="encountered an error cleaning up failed sandbox \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.148262 containerd[1572]: time="2025-09-10T00:49:22.148253892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4bb6c97d-crcn2,Uid:d173b8ad-d1e2-4ab1-adf4-074e01bc5a59,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.153941 containerd[1572]: time="2025-09-10T00:49:22.153870111Z" level=error msg="Failed to destroy network for sandbox \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.154592 containerd[1572]: time="2025-09-10T00:49:22.154559279Z" level=error msg="encountered an error cleaning up failed sandbox \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.154681 containerd[1572]: time="2025-09-10T00:49:22.154650403Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4cc87785-mvnvv,Uid:05560b5f-0c2f-48f7-b5ef-fa1beda53e1d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.160058 kubelet[2656]: E0910 00:49:22.159959 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.160058 kubelet[2656]: E0910 00:49:22.159997 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.160058 kubelet[2656]: E0910 00:49:22.159981 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.160058 kubelet[2656]: E0910 00:49:22.160038 2656 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b4cc87785-mvnvv" Sep 10 00:49:22.160280 kubelet[2656]: E0910 00:49:22.160061 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b4cc87785-mvnvv" Sep 10 00:49:22.160280 kubelet[2656]: E0910 00:49:22.160076 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d545fcc-n7lfj" Sep 10 00:49:22.160280 kubelet[2656]: E0910 00:49:22.160102 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d545fcc-n7lfj" Sep 10 00:49:22.160429 kubelet[2656]: E0910 00:49:22.160107 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b4cc87785-mvnvv_calico-system(05560b5f-0c2f-48f7-b5ef-fa1beda53e1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b4cc87785-mvnvv_calico-system(05560b5f-0c2f-48f7-b5ef-fa1beda53e1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b4cc87785-mvnvv" podUID="05560b5f-0c2f-48f7-b5ef-fa1beda53e1d" Sep 10 00:49:22.160429 kubelet[2656]: E0910 00:49:22.160154 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57d545fcc-n7lfj_calico-system(c4b15be0-0be9-443e-b54d-5992872c55b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57d545fcc-n7lfj_calico-system(c4b15be0-0be9-443e-b54d-5992872c55b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57d545fcc-n7lfj" podUID="c4b15be0-0be9-443e-b54d-5992872c55b9" 
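[Editor's note] Every sandbox failure above bottoms out in the same stat: the Calico CNI plugin reads /var/lib/calico/nodename, a file that calico/node writes once it has started, and refuses both "add" and "delete" operations while it is absent — which is why sandbox creation and cleanup fail identically here. A minimal sketch of that guard, illustrative only and not Calico's actual source:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// nodenameFile is written by calico/node at startup; until it exists the
// Calico CNI plugin bails out of every operation with the error seen above.
const nodenameFile = "/var/lib/calico/nodename"

// calicoNodeName mirrors, illustratively, the check the plugin performs.
func calicoNodeName() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// The same failure mode as the log: stat /var/lib/calico/nodename:
		// no such file or directory.
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodeName()
	if err != nil {
		log.Fatal(err) // the state this node is stuck in above
	}
	log.Printf("calico node name: %s", name)
}
```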
Sep 10 00:49:22.160429 kubelet[2656]: E0910 00:49:22.159959 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.160559 kubelet[2656]: E0910 00:49:22.160217 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mmgft" Sep 10 00:49:22.160559 kubelet[2656]: E0910 00:49:22.160235 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mmgft" Sep 10 00:49:22.160559 kubelet[2656]: E0910 00:49:22.160285 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mmgft_kube-system(a6d2bfa2-402e-4e85-8163-cc4a99a0ed75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mmgft_kube-system(a6d2bfa2-402e-4e85-8163-cc4a99a0ed75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mmgft" podUID="a6d2bfa2-402e-4e85-8163-cc4a99a0ed75" Sep 10 00:49:22.160650 kubelet[2656]: E0910 00:49:22.160038 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4bb6c97d-crcn2" Sep 10 00:49:22.160650 kubelet[2656]: E0910 00:49:22.160319 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4bb6c97d-crcn2" Sep 10 00:49:22.160650 kubelet[2656]: E0910 00:49:22.160378 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d4bb6c97d-crcn2_calico-apiserver(d173b8ad-d1e2-4ab1-adf4-074e01bc5a59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6d4bb6c97d-crcn2_calico-apiserver(d173b8ad-d1e2-4ab1-adf4-074e01bc5a59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4bb6c97d-crcn2" podUID="d173b8ad-d1e2-4ab1-adf4-074e01bc5a59" Sep 10 00:49:22.199483 containerd[1572]: time="2025-09-10T00:49:22.199408444Z" level=error msg="Failed to destroy network for sandbox \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.199895 containerd[1572]: time="2025-09-10T00:49:22.199861928Z" level=error msg="encountered an error cleaning up failed sandbox \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.199930 containerd[1572]: time="2025-09-10T00:49:22.199913021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wdgls,Uid:0dcf04d3-5b02-40dc-800c-0985ae063919,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.200218 kubelet[2656]: E0910 00:49:22.200161 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.200297 kubelet[2656]: E0910 00:49:22.200261 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wdgls" Sep 10 00:49:22.200297 kubelet[2656]: E0910 00:49:22.200288 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wdgls" Sep 10 00:49:22.200358 kubelet[2656]: E0910 00:49:22.200333 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-wdgls_calico-system(0dcf04d3-5b02-40dc-800c-0985ae063919)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wdgls_calico-system(0dcf04d3-5b02-40dc-800c-0985ae063919)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wdgls" podUID="0dcf04d3-5b02-40dc-800c-0985ae063919" Sep 10 00:49:22.254153 kubelet[2656]: E0910 00:49:22.254113 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:22.254883 containerd[1572]: time="2025-09-10T00:49:22.254734532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7hl4n,Uid:153fa74c-658f-4fb6-a953-ee6d15f43e30,Namespace:kube-system,Attempt:0,}" Sep 10 00:49:22.258884 containerd[1572]: time="2025-09-10T00:49:22.258853072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4bb6c97d-rwxfm,Uid:84107d46-758d-49e3-9857-f23097a54b8f,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:49:22.259331 containerd[1572]: time="2025-09-10T00:49:22.259284662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-cvrzx,Uid:27842233-86b6-4b4f-9f34-c14a660239b3,Namespace:calico-system,Attempt:0,}" Sep 10 00:49:22.345757 containerd[1572]: time="2025-09-10T00:49:22.345607437Z" level=error msg="Failed to destroy network for sandbox \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.346085 containerd[1572]: time="2025-09-10T00:49:22.345997344Z" level=error msg="encountered an error cleaning up failed sandbox \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.346085 containerd[1572]: time="2025-09-10T00:49:22.346047174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4bb6c97d-rwxfm,Uid:84107d46-758d-49e3-9857-f23097a54b8f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.346327 kubelet[2656]: E0910 00:49:22.346281 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.346404 kubelet[2656]: E0910 00:49:22.346350 2656 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4bb6c97d-rwxfm" Sep 10 00:49:22.346404 kubelet[2656]: E0910 00:49:22.346371 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4bb6c97d-rwxfm" Sep 10 00:49:22.347261 kubelet[2656]: E0910 00:49:22.346597 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d4bb6c97d-rwxfm_calico-apiserver(84107d46-758d-49e3-9857-f23097a54b8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d4bb6c97d-rwxfm_calico-apiserver(84107d46-758d-49e3-9857-f23097a54b8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4bb6c97d-rwxfm" podUID="84107d46-758d-49e3-9857-f23097a54b8f" Sep 10 00:49:22.348305 containerd[1572]: time="2025-09-10T00:49:22.348262308Z" level=error msg="Failed to destroy network for sandbox \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.348926 containerd[1572]: time="2025-09-10T00:49:22.348892207Z" level=error msg="encountered an error cleaning up failed sandbox \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.348972 containerd[1572]: time="2025-09-10T00:49:22.348949894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-cvrzx,Uid:27842233-86b6-4b4f-9f34-c14a660239b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.349217 kubelet[2656]: E0910 00:49:22.349173 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 10 00:49:22.349324 kubelet[2656]: E0910 00:49:22.349232 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-cvrzx" Sep 10 00:49:22.349324 kubelet[2656]: E0910 00:49:22.349310 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-cvrzx" Sep 10 00:49:22.349426 kubelet[2656]: E0910 00:49:22.349391 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-cvrzx_calico-system(27842233-86b6-4b4f-9f34-c14a660239b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-cvrzx_calico-system(27842233-86b6-4b4f-9f34-c14a660239b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-cvrzx" podUID="27842233-86b6-4b4f-9f34-c14a660239b3" Sep 10 00:49:22.359036 containerd[1572]: time="2025-09-10T00:49:22.358992632Z" level=error msg="Failed to destroy network for sandbox \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.359446 containerd[1572]: time="2025-09-10T00:49:22.359412648Z" level=error msg="encountered an error cleaning up failed sandbox \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.359486 containerd[1572]: time="2025-09-10T00:49:22.359461818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7hl4n,Uid:153fa74c-658f-4fb6-a953-ee6d15f43e30,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.359714 kubelet[2656]: E0910 00:49:22.359678 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.359762 kubelet[2656]: E0910 00:49:22.359746 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7hl4n" Sep 10 00:49:22.359790 kubelet[2656]: E0910 00:49:22.359769 2656 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7hl4n" Sep 10 00:49:22.359838 kubelet[2656]: E0910 00:49:22.359815 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-7hl4n_kube-system(153fa74c-658f-4fb6-a953-ee6d15f43e30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-7hl4n_kube-system(153fa74c-658f-4fb6-a953-ee6d15f43e30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7hl4n" podUID="153fa74c-658f-4fb6-a953-ee6d15f43e30" Sep 10 00:49:22.507737 kubelet[2656]: I0910 00:49:22.507608 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" Sep 10 00:49:22.511621 kubelet[2656]: I0910 00:49:22.511573 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" Sep 10 00:49:22.512683 containerd[1572]: time="2025-09-10T00:49:22.512648334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 00:49:22.512995 kubelet[2656]: I0910 00:49:22.512960 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Sep 10 00:49:22.537282 containerd[1572]: time="2025-09-10T00:49:22.536372104Z" level=info msg="StopPodSandbox for \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\"" Sep 10 00:49:22.537282 containerd[1572]: time="2025-09-10T00:49:22.536484169Z" level=info msg="StopPodSandbox for \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\"" Sep 10 00:49:22.537282 containerd[1572]: time="2025-09-10T00:49:22.537204571Z" level=info msg="StopPodSandbox for \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\"" Sep 10 00:49:22.537487 kubelet[2656]: I0910 00:49:22.536771 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Sep 10 00:49:22.538271 containerd[1572]: time="2025-09-10T00:49:22.538213565Z" level=info msg="Ensure that sandbox 
572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67 in task-service has been cleanup successfully" Sep 10 00:49:22.538336 containerd[1572]: time="2025-09-10T00:49:22.538236741Z" level=info msg="Ensure that sandbox cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49 in task-service has been cleanup successfully" Sep 10 00:49:22.538370 containerd[1572]: time="2025-09-10T00:49:22.538224787Z" level=info msg="Ensure that sandbox 0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2 in task-service has been cleanup successfully" Sep 10 00:49:22.538394 containerd[1572]: time="2025-09-10T00:49:22.538376453Z" level=info msg="StopPodSandbox for \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\"" Sep 10 00:49:22.538558 containerd[1572]: time="2025-09-10T00:49:22.538516966Z" level=info msg="Ensure that sandbox 43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd in task-service has been cleanup successfully" Sep 10 00:49:22.548306 kubelet[2656]: I0910 00:49:22.546378 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" Sep 10 00:49:22.549096 containerd[1572]: time="2025-09-10T00:49:22.549057599Z" level=info msg="StopPodSandbox for \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\"" Sep 10 00:49:22.550007 containerd[1572]: time="2025-09-10T00:49:22.549263834Z" level=info msg="Ensure that sandbox 4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee in task-service has been cleanup successfully" Sep 10 00:49:22.552081 kubelet[2656]: I0910 00:49:22.551478 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Sep 10 00:49:22.552631 containerd[1572]: time="2025-09-10T00:49:22.552504765Z" level=info msg="StopPodSandbox for \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\"" Sep 10 00:49:22.552734 containerd[1572]: time="2025-09-10T00:49:22.552707735Z" level=info msg="Ensure that sandbox 29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c in task-service has been cleanup successfully" Sep 10 00:49:22.553461 kubelet[2656]: I0910 00:49:22.553420 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Sep 10 00:49:22.554112 containerd[1572]: time="2025-09-10T00:49:22.554081122Z" level=info msg="StopPodSandbox for \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\"" Sep 10 00:49:22.555136 containerd[1572]: time="2025-09-10T00:49:22.555103383Z" level=info msg="Ensure that sandbox 7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12 in task-service has been cleanup successfully" Sep 10 00:49:22.557808 kubelet[2656]: I0910 00:49:22.557772 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Sep 10 00:49:22.559345 containerd[1572]: time="2025-09-10T00:49:22.559165649Z" level=info msg="StopPodSandbox for \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\"" Sep 10 00:49:22.559857 containerd[1572]: time="2025-09-10T00:49:22.559791641Z" level=info msg="Ensure that sandbox d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503 in task-service has been cleanup successfully" Sep 10 00:49:22.618617 containerd[1572]: 
time="2025-09-10T00:49:22.618409262Z" level=error msg="StopPodSandbox for \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\" failed" error="failed to destroy network for sandbox \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.620182 kubelet[2656]: E0910 00:49:22.619975 2656 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Sep 10 00:49:22.620182 kubelet[2656]: E0910 00:49:22.620053 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503"} Sep 10 00:49:22.620182 kubelet[2656]: E0910 00:49:22.620130 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"153fa74c-658f-4fb6-a953-ee6d15f43e30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:49:22.620182 kubelet[2656]: E0910 00:49:22.620162 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"153fa74c-658f-4fb6-a953-ee6d15f43e30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7hl4n" podUID="153fa74c-658f-4fb6-a953-ee6d15f43e30" Sep 10 00:49:22.635017 containerd[1572]: time="2025-09-10T00:49:22.634551223Z" level=error msg="StopPodSandbox for \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\" failed" error="failed to destroy network for sandbox \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.635501 kubelet[2656]: E0910 00:49:22.635146 2656 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Sep 10 00:49:22.635501 kubelet[2656]: E0910 00:49:22.635234 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd"} Sep 10 00:49:22.635501 kubelet[2656]: E0910 00:49:22.635310 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"84107d46-758d-49e3-9857-f23097a54b8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:49:22.635501 kubelet[2656]: E0910 00:49:22.635345 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"84107d46-758d-49e3-9857-f23097a54b8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4bb6c97d-rwxfm" podUID="84107d46-758d-49e3-9857-f23097a54b8f" Sep 10 00:49:22.636162 containerd[1572]: time="2025-09-10T00:49:22.635366786Z" level=error msg="StopPodSandbox for \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\" failed" error="failed to destroy network for sandbox \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.636209 kubelet[2656]: E0910 00:49:22.635520 2656 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" Sep 10 00:49:22.636209 kubelet[2656]: E0910 00:49:22.635550 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee"} Sep 10 00:49:22.636209 kubelet[2656]: E0910 00:49:22.635576 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4b15be0-0be9-443e-b54d-5992872c55b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:49:22.636209 kubelet[2656]: E0910 00:49:22.635602 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4b15be0-0be9-443e-b54d-5992872c55b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57d545fcc-n7lfj" podUID="c4b15be0-0be9-443e-b54d-5992872c55b9" Sep 10 00:49:22.637144 containerd[1572]: time="2025-09-10T00:49:22.637053035Z" level=error msg="StopPodSandbox for \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\" failed" error="failed to destroy network for sandbox \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.637333 containerd[1572]: time="2025-09-10T00:49:22.637096864Z" level=error msg="StopPodSandbox for \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\" failed" error="failed to destroy network for sandbox \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.637461 kubelet[2656]: E0910 00:49:22.637422 2656 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" Sep 10 00:49:22.637520 kubelet[2656]: E0910 00:49:22.637463 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49"} Sep 10 00:49:22.637520 kubelet[2656]: E0910 00:49:22.637494 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d173b8ad-d1e2-4ab1-adf4-074e01bc5a59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:49:22.637621 kubelet[2656]: E0910 00:49:22.637521 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d173b8ad-d1e2-4ab1-adf4-074e01bc5a59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4bb6c97d-crcn2" podUID="d173b8ad-d1e2-4ab1-adf4-074e01bc5a59" Sep 10 00:49:22.637765 kubelet[2656]: E0910 00:49:22.637736 2656 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" Sep 10 00:49:22.637818 kubelet[2656]: E0910 00:49:22.637766 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2"} Sep 10 00:49:22.637818 kubelet[2656]: E0910 00:49:22.637796 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0dcf04d3-5b02-40dc-800c-0985ae063919\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:49:22.637912 kubelet[2656]: E0910 00:49:22.637818 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0dcf04d3-5b02-40dc-800c-0985ae063919\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wdgls" podUID="0dcf04d3-5b02-40dc-800c-0985ae063919" Sep 10 00:49:22.639609 containerd[1572]: time="2025-09-10T00:49:22.639571350Z" level=error msg="StopPodSandbox for \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\" failed" error="failed to destroy network for sandbox \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.639811 kubelet[2656]: E0910 00:49:22.639777 2656 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Sep 10 00:49:22.639883 kubelet[2656]: E0910 00:49:22.639817 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67"} Sep 10 00:49:22.639883 kubelet[2656]: E0910 00:49:22.639847 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6d2bfa2-402e-4e85-8163-cc4a99a0ed75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:49:22.639883 kubelet[2656]: E0910 00:49:22.639874 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"a6d2bfa2-402e-4e85-8163-cc4a99a0ed75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mmgft" podUID="a6d2bfa2-402e-4e85-8163-cc4a99a0ed75" Sep 10 00:49:22.640030 containerd[1572]: time="2025-09-10T00:49:22.639990375Z" level=error msg="StopPodSandbox for \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\" failed" error="failed to destroy network for sandbox \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.640208 kubelet[2656]: E0910 00:49:22.640161 2656 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Sep 10 00:49:22.640208 kubelet[2656]: E0910 00:49:22.640202 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12"} Sep 10 00:49:22.640319 kubelet[2656]: E0910 00:49:22.640236 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"05560b5f-0c2f-48f7-b5ef-fa1beda53e1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:49:22.640387 kubelet[2656]: E0910 00:49:22.640323 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"05560b5f-0c2f-48f7-b5ef-fa1beda53e1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b4cc87785-mvnvv" podUID="05560b5f-0c2f-48f7-b5ef-fa1beda53e1d" Sep 10 00:49:22.642267 containerd[1572]: time="2025-09-10T00:49:22.642195148Z" level=error msg="StopPodSandbox for \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\" failed" error="failed to destroy network for sandbox \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:49:22.642398 kubelet[2656]: E0910 00:49:22.642356 2656 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Sep 10 00:49:22.642448 kubelet[2656]: E0910 00:49:22.642400 2656 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c"} Sep 10 00:49:22.642448 kubelet[2656]: E0910 00:49:22.642424 2656 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"27842233-86b6-4b4f-9f34-c14a660239b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:49:22.642542 kubelet[2656]: E0910 00:49:22.642447 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"27842233-86b6-4b4f-9f34-c14a660239b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-cvrzx" podUID="27842233-86b6-4b4f-9f34-c14a660239b3" Sep 10 00:49:22.915533 systemd-resolved[1461]: Under memory pressure, flushing caches. Sep 10 00:49:22.915555 systemd-resolved[1461]: Flushed all caches. Sep 10 00:49:22.916277 systemd-journald[1158]: Under memory pressure, flushing caches. Sep 10 00:49:26.647899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295738092.mount: Deactivated successfully. 
Sep 10 00:49:28.622353 containerd[1572]: time="2025-09-10T00:49:28.622288327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:28.660256 containerd[1572]: time="2025-09-10T00:49:28.660166799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 10 00:49:28.693311 containerd[1572]: time="2025-09-10T00:49:28.693275517Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:28.759795 containerd[1572]: time="2025-09-10T00:49:28.759725365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:28.760746 containerd[1572]: time="2025-09-10T00:49:28.760714891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.248019303s" Sep 10 00:49:28.760812 containerd[1572]: time="2025-09-10T00:49:28.760755171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 10 00:49:28.770431 containerd[1572]: time="2025-09-10T00:49:28.770386465Z" level=info msg="CreateContainer within sandbox \"ae3ae09167731d246ab503fe3a5159f25c8d0260bdc61561df13f29d2e9320d9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 00:49:29.041721 containerd[1572]: time="2025-09-10T00:49:29.041605167Z" level=info msg="CreateContainer within sandbox \"ae3ae09167731d246ab503fe3a5159f25c8d0260bdc61561df13f29d2e9320d9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6d5ab0c5a531a9a6effb778c7b84a05f34a5c5318bbfe9d0b14cb8754c2d0520\"" Sep 10 00:49:29.042384 containerd[1572]: time="2025-09-10T00:49:29.042277127Z" level=info msg="StartContainer for \"6d5ab0c5a531a9a6effb778c7b84a05f34a5c5318bbfe9d0b14cb8754c2d0520\"" Sep 10 00:49:29.124929 containerd[1572]: time="2025-09-10T00:49:29.124881115Z" level=info msg="StartContainer for \"6d5ab0c5a531a9a6effb778c7b84a05f34a5c5318bbfe9d0b14cb8754c2d0520\" returns successfully" Sep 10 00:49:29.208074 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 00:49:29.208197 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 10 00:49:29.523739 containerd[1572]: time="2025-09-10T00:49:29.523674789Z" level=info msg="StopPodSandbox for \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\"" Sep 10 00:49:29.710345 containerd[1572]: 2025-09-10 00:49:29.601 [INFO][3993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" Sep 10 00:49:29.710345 containerd[1572]: 2025-09-10 00:49:29.602 [INFO][3993] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" iface="eth0" netns="/var/run/netns/cni-e46bd0dc-55f2-2dd2-9ad3-7f96b2958646" Sep 10 00:49:29.710345 containerd[1572]: 2025-09-10 00:49:29.602 [INFO][3993] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" iface="eth0" netns="/var/run/netns/cni-e46bd0dc-55f2-2dd2-9ad3-7f96b2958646" Sep 10 00:49:29.710345 containerd[1572]: 2025-09-10 00:49:29.603 [INFO][3993] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" iface="eth0" netns="/var/run/netns/cni-e46bd0dc-55f2-2dd2-9ad3-7f96b2958646" Sep 10 00:49:29.710345 containerd[1572]: 2025-09-10 00:49:29.603 [INFO][3993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" Sep 10 00:49:29.710345 containerd[1572]: 2025-09-10 00:49:29.603 [INFO][3993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" Sep 10 00:49:29.710345 containerd[1572]: 2025-09-10 00:49:29.690 [INFO][4003] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" HandleID="k8s-pod-network.4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" Workload="localhost-k8s-whisker--57d545fcc--n7lfj-eth0" Sep 10 00:49:29.710345 containerd[1572]: 2025-09-10 00:49:29.691 [INFO][4003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:29.710345 containerd[1572]: 2025-09-10 00:49:29.691 [INFO][4003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:29.710345 containerd[1572]: 2025-09-10 00:49:29.702 [WARNING][4003] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" HandleID="k8s-pod-network.4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" Workload="localhost-k8s-whisker--57d545fcc--n7lfj-eth0" Sep 10 00:49:29.710345 containerd[1572]: 2025-09-10 00:49:29.702 [INFO][4003] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" HandleID="k8s-pod-network.4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" Workload="localhost-k8s-whisker--57d545fcc--n7lfj-eth0" Sep 10 00:49:29.710345 containerd[1572]: 2025-09-10 00:49:29.703 [INFO][4003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:29.710345 containerd[1572]: 2025-09-10 00:49:29.706 [INFO][3993] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" Sep 10 00:49:29.710345 containerd[1572]: time="2025-09-10T00:49:29.710217688Z" level=info msg="TearDown network for sandbox \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\" successfully" Sep 10 00:49:29.710345 containerd[1572]: time="2025-09-10T00:49:29.710260363Z" level=info msg="StopPodSandbox for \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\" returns successfully" Sep 10 00:49:29.767815 systemd[1]: run-netns-cni\x2de46bd0dc\x2d55f2\x2d2dd2\x2d9ad3\x2d7f96b2958646.mount: Deactivated successfully. 
Sep 10 00:49:29.887452 kubelet[2656]: I0910 00:49:29.887407 2656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b9d2\" (UniqueName: \"kubernetes.io/projected/c4b15be0-0be9-443e-b54d-5992872c55b9-kube-api-access-2b9d2\") pod \"c4b15be0-0be9-443e-b54d-5992872c55b9\" (UID: \"c4b15be0-0be9-443e-b54d-5992872c55b9\") " Sep 10 00:49:29.887452 kubelet[2656]: I0910 00:49:29.887470 2656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4b15be0-0be9-443e-b54d-5992872c55b9-whisker-ca-bundle\") pod \"c4b15be0-0be9-443e-b54d-5992872c55b9\" (UID: \"c4b15be0-0be9-443e-b54d-5992872c55b9\") " Sep 10 00:49:29.888073 kubelet[2656]: I0910 00:49:29.887497 2656 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4b15be0-0be9-443e-b54d-5992872c55b9-whisker-backend-key-pair\") pod \"c4b15be0-0be9-443e-b54d-5992872c55b9\" (UID: \"c4b15be0-0be9-443e-b54d-5992872c55b9\") " Sep 10 00:49:29.889444 kubelet[2656]: I0910 00:49:29.889405 2656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4b15be0-0be9-443e-b54d-5992872c55b9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c4b15be0-0be9-443e-b54d-5992872c55b9" (UID: "c4b15be0-0be9-443e-b54d-5992872c55b9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:49:29.892613 kubelet[2656]: I0910 00:49:29.892565 2656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b15be0-0be9-443e-b54d-5992872c55b9-kube-api-access-2b9d2" (OuterVolumeSpecName: "kube-api-access-2b9d2") pod "c4b15be0-0be9-443e-b54d-5992872c55b9" (UID: "c4b15be0-0be9-443e-b54d-5992872c55b9"). InnerVolumeSpecName "kube-api-access-2b9d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:49:29.894295 kubelet[2656]: I0910 00:49:29.894261 2656 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b15be0-0be9-443e-b54d-5992872c55b9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c4b15be0-0be9-443e-b54d-5992872c55b9" (UID: "c4b15be0-0be9-443e-b54d-5992872c55b9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:49:29.895054 systemd[1]: var-lib-kubelet-pods-c4b15be0\x2d0be9\x2d443e\x2db54d\x2d5992872c55b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2b9d2.mount: Deactivated successfully. Sep 10 00:49:29.899392 systemd[1]: var-lib-kubelet-pods-c4b15be0\x2d0be9\x2d443e\x2db54d\x2d5992872c55b9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 10 00:49:29.987899 kubelet[2656]: I0910 00:49:29.987803 2656 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4b15be0-0be9-443e-b54d-5992872c55b9-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 00:49:29.987899 kubelet[2656]: I0910 00:49:29.987850 2656 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4b15be0-0be9-443e-b54d-5992872c55b9-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 00:49:29.987899 kubelet[2656]: I0910 00:49:29.987860 2656 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b9d2\" (UniqueName: \"kubernetes.io/projected/c4b15be0-0be9-443e-b54d-5992872c55b9-kube-api-access-2b9d2\") on node \"localhost\" DevicePath \"\"" Sep 10 00:49:30.099818 systemd[1]: Started sshd@7-10.0.0.156:22-10.0.0.1:42400.service - OpenSSH per-connection server daemon (10.0.0.1:42400). Sep 10 00:49:30.196786 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 42400 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:49:30.198775 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:49:30.204508 systemd-logind[1548]: New session 8 of user core. Sep 10 00:49:30.216729 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 00:49:30.359042 sshd[4048]: pam_unix(sshd:session): session closed for user core Sep 10 00:49:30.363833 systemd[1]: sshd@7-10.0.0.156:22-10.0.0.1:42400.service: Deactivated successfully. Sep 10 00:49:30.366856 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 00:49:30.367594 systemd-logind[1548]: Session 8 logged out. Waiting for processes to exit. Sep 10 00:49:30.368597 systemd-logind[1548]: Removed session 8. 
Sep 10 00:49:30.604110 kubelet[2656]: I0910 00:49:30.604045 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mpv6l" podStartSLOduration=2.299805377 podStartE2EDuration="21.604013362s" podCreationTimestamp="2025-09-10 00:49:09 +0000 UTC" firstStartedPulling="2025-09-10 00:49:09.457380832 +0000 UTC m=+20.430271053" lastFinishedPulling="2025-09-10 00:49:28.761588817 +0000 UTC m=+39.734479038" observedRunningTime="2025-09-10 00:49:29.606669556 +0000 UTC m=+40.579559777" watchObservedRunningTime="2025-09-10 00:49:30.604013362 +0000 UTC m=+41.576903583" Sep 10 00:49:30.794175 kubelet[2656]: I0910 00:49:30.794092 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdmg4\" (UniqueName: \"kubernetes.io/projected/38d900de-ca59-4de0-a58b-a39a52e4a897-kube-api-access-wdmg4\") pod \"whisker-5dc5785d88-w84ht\" (UID: \"38d900de-ca59-4de0-a58b-a39a52e4a897\") " pod="calico-system/whisker-5dc5785d88-w84ht" Sep 10 00:49:30.794175 kubelet[2656]: I0910 00:49:30.794175 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/38d900de-ca59-4de0-a58b-a39a52e4a897-whisker-backend-key-pair\") pod \"whisker-5dc5785d88-w84ht\" (UID: \"38d900de-ca59-4de0-a58b-a39a52e4a897\") " pod="calico-system/whisker-5dc5785d88-w84ht" Sep 10 00:49:30.794357 kubelet[2656]: I0910 00:49:30.794194 2656 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38d900de-ca59-4de0-a58b-a39a52e4a897-whisker-ca-bundle\") pod \"whisker-5dc5785d88-w84ht\" (UID: \"38d900de-ca59-4de0-a58b-a39a52e4a897\") " pod="calico-system/whisker-5dc5785d88-w84ht" Sep 10 00:49:30.942913 containerd[1572]: time="2025-09-10T00:49:30.942515984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dc5785d88-w84ht,Uid:38d900de-ca59-4de0-a58b-a39a52e4a897,Namespace:calico-system,Attempt:0,}" Sep 10 00:49:30.976296 kernel: bpftool[4220]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 10 00:49:31.119770 kubelet[2656]: I0910 00:49:31.119719 2656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b15be0-0be9-443e-b54d-5992872c55b9" path="/var/lib/kubelet/pods/c4b15be0-0be9-443e-b54d-5992872c55b9/volumes" Sep 10 00:49:31.305950 systemd-networkd[1243]: vxlan.calico: Link UP Sep 10 00:49:31.306443 systemd-networkd[1243]: vxlan.calico: Gained carrier Sep 10 00:49:31.495449 systemd-networkd[1243]: califb229733626: Link UP Sep 10 00:49:31.495886 systemd-networkd[1243]: califb229733626: Gained carrier Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.435 [INFO][4260] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5dc5785d88--w84ht-eth0 whisker-5dc5785d88- calico-system 38d900de-ca59-4de0-a58b-a39a52e4a897 989 0 2025-09-10 00:49:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5dc5785d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5dc5785d88-w84ht eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califb229733626 [] [] }} ContainerID="f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" Namespace="calico-system" 
Pod="whisker-5dc5785d88-w84ht" WorkloadEndpoint="localhost-k8s-whisker--5dc5785d88--w84ht-" Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.435 [INFO][4260] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" Namespace="calico-system" Pod="whisker-5dc5785d88-w84ht" WorkloadEndpoint="localhost-k8s-whisker--5dc5785d88--w84ht-eth0" Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.461 [INFO][4274] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" HandleID="k8s-pod-network.f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" Workload="localhost-k8s-whisker--5dc5785d88--w84ht-eth0" Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.461 [INFO][4274] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" HandleID="k8s-pod-network.f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" Workload="localhost-k8s-whisker--5dc5785d88--w84ht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e670), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5dc5785d88-w84ht", "timestamp":"2025-09-10 00:49:31.461640616 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.461 [INFO][4274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.461 [INFO][4274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.461 [INFO][4274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.468 [INFO][4274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" host="localhost" Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.473 [INFO][4274] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.476 [INFO][4274] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.478 [INFO][4274] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.479 [INFO][4274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.479 [INFO][4274] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" host="localhost" Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.480 [INFO][4274] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057 Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.484 [INFO][4274] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" host="localhost" Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.489 [INFO][4274] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" host="localhost" Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.489 [INFO][4274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" host="localhost" Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.490 [INFO][4274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:49:31.511415 containerd[1572]: 2025-09-10 00:49:31.490 [INFO][4274] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" HandleID="k8s-pod-network.f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" Workload="localhost-k8s-whisker--5dc5785d88--w84ht-eth0" Sep 10 00:49:31.511976 containerd[1572]: 2025-09-10 00:49:31.493 [INFO][4260] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" Namespace="calico-system" Pod="whisker-5dc5785d88-w84ht" WorkloadEndpoint="localhost-k8s-whisker--5dc5785d88--w84ht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5dc5785d88--w84ht-eth0", GenerateName:"whisker-5dc5785d88-", Namespace:"calico-system", SelfLink:"", UID:"38d900de-ca59-4de0-a58b-a39a52e4a897", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dc5785d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5dc5785d88-w84ht", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califb229733626", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:31.511976 containerd[1572]: 2025-09-10 00:49:31.493 [INFO][4260] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" Namespace="calico-system" Pod="whisker-5dc5785d88-w84ht" WorkloadEndpoint="localhost-k8s-whisker--5dc5785d88--w84ht-eth0" Sep 10 00:49:31.511976 containerd[1572]: 2025-09-10 00:49:31.493 [INFO][4260] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb229733626 ContainerID="f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" Namespace="calico-system" Pod="whisker-5dc5785d88-w84ht" WorkloadEndpoint="localhost-k8s-whisker--5dc5785d88--w84ht-eth0" Sep 10 00:49:31.511976 containerd[1572]: 2025-09-10 00:49:31.496 [INFO][4260] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" Namespace="calico-system" Pod="whisker-5dc5785d88-w84ht" WorkloadEndpoint="localhost-k8s-whisker--5dc5785d88--w84ht-eth0" Sep 10 00:49:31.511976 containerd[1572]: 2025-09-10 00:49:31.496 [INFO][4260] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" Namespace="calico-system" Pod="whisker-5dc5785d88-w84ht" WorkloadEndpoint="localhost-k8s-whisker--5dc5785d88--w84ht-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5dc5785d88--w84ht-eth0", GenerateName:"whisker-5dc5785d88-", Namespace:"calico-system", SelfLink:"", UID:"38d900de-ca59-4de0-a58b-a39a52e4a897", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dc5785d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057", Pod:"whisker-5dc5785d88-w84ht", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califb229733626", MAC:"0e:f2:52:9f:03:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:31.511976 containerd[1572]: 2025-09-10 00:49:31.504 [INFO][4260] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057" Namespace="calico-system" Pod="whisker-5dc5785d88-w84ht" WorkloadEndpoint="localhost-k8s-whisker--5dc5785d88--w84ht-eth0" Sep 10 00:49:31.541275 containerd[1572]: time="2025-09-10T00:49:31.540469056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:49:31.541563 containerd[1572]: time="2025-09-10T00:49:31.541236384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:49:31.541563 containerd[1572]: time="2025-09-10T00:49:31.541301573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:31.541563 containerd[1572]: time="2025-09-10T00:49:31.541478425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:31.565923 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:49:31.600799 containerd[1572]: time="2025-09-10T00:49:31.600754895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dc5785d88-w84ht,Uid:38d900de-ca59-4de0-a58b-a39a52e4a897,Namespace:calico-system,Attempt:0,} returns sandbox id \"f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057\"" Sep 10 00:49:31.603148 containerd[1572]: time="2025-09-10T00:49:31.603119791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 00:49:32.962480 systemd-networkd[1243]: califb229733626: Gained IPv6LL Sep 10 00:49:33.026553 systemd-networkd[1243]: vxlan.calico: Gained IPv6LL Sep 10 00:49:33.118364 containerd[1572]: time="2025-09-10T00:49:33.118011800Z" level=info msg="StopPodSandbox for \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\"" Sep 10 00:49:33.210233 containerd[1572]: 2025-09-10 00:49:33.165 [INFO][4382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" Sep 10 00:49:33.210233 containerd[1572]: 2025-09-10 00:49:33.166 [INFO][4382] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" iface="eth0" netns="/var/run/netns/cni-cc882d94-9b50-5adf-963b-890803c6639f" Sep 10 00:49:33.210233 containerd[1572]: 2025-09-10 00:49:33.166 [INFO][4382] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" iface="eth0" netns="/var/run/netns/cni-cc882d94-9b50-5adf-963b-890803c6639f" Sep 10 00:49:33.210233 containerd[1572]: 2025-09-10 00:49:33.167 [INFO][4382] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" iface="eth0" netns="/var/run/netns/cni-cc882d94-9b50-5adf-963b-890803c6639f" Sep 10 00:49:33.210233 containerd[1572]: 2025-09-10 00:49:33.167 [INFO][4382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" Sep 10 00:49:33.210233 containerd[1572]: 2025-09-10 00:49:33.167 [INFO][4382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" Sep 10 00:49:33.210233 containerd[1572]: 2025-09-10 00:49:33.196 [INFO][4396] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" HandleID="k8s-pod-network.0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" Workload="localhost-k8s-csi--node--driver--wdgls-eth0" Sep 10 00:49:33.210233 containerd[1572]: 2025-09-10 00:49:33.196 [INFO][4396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:33.210233 containerd[1572]: 2025-09-10 00:49:33.196 [INFO][4396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:33.210233 containerd[1572]: 2025-09-10 00:49:33.202 [WARNING][4396] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" HandleID="k8s-pod-network.0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" Workload="localhost-k8s-csi--node--driver--wdgls-eth0" Sep 10 00:49:33.210233 containerd[1572]: 2025-09-10 00:49:33.202 [INFO][4396] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" HandleID="k8s-pod-network.0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" Workload="localhost-k8s-csi--node--driver--wdgls-eth0" Sep 10 00:49:33.210233 containerd[1572]: 2025-09-10 00:49:33.203 [INFO][4396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:33.210233 containerd[1572]: 2025-09-10 00:49:33.206 [INFO][4382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" Sep 10 00:49:33.214545 containerd[1572]: time="2025-09-10T00:49:33.214273274Z" level=info msg="TearDown network for sandbox \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\" successfully" Sep 10 00:49:33.214545 containerd[1572]: time="2025-09-10T00:49:33.214327431Z" level=info msg="StopPodSandbox for \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\" returns successfully" Sep 10 00:49:33.215713 systemd[1]: run-netns-cni\x2dcc882d94\x2d9b50\x2d5adf\x2d963b\x2d890803c6639f.mount: Deactivated successfully. Sep 10 00:49:33.216164 containerd[1572]: time="2025-09-10T00:49:33.215878452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wdgls,Uid:0dcf04d3-5b02-40dc-800c-0985ae063919,Namespace:calico-system,Attempt:1,}" Sep 10 00:49:33.256518 containerd[1572]: time="2025-09-10T00:49:33.256455129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:33.257343 containerd[1572]: time="2025-09-10T00:49:33.257289877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 10 00:49:33.261099 containerd[1572]: time="2025-09-10T00:49:33.261052661Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:33.265451 containerd[1572]: time="2025-09-10T00:49:33.264492523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:33.265631 containerd[1572]: time="2025-09-10T00:49:33.265596667Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.662288261s" Sep 10 00:49:33.265704 containerd[1572]: time="2025-09-10T00:49:33.265687537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 10 00:49:33.269236 containerd[1572]: time="2025-09-10T00:49:33.269162840Z" level=info msg="CreateContainer within sandbox 
\"f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 00:49:33.286473 containerd[1572]: time="2025-09-10T00:49:33.286429777Z" level=info msg="CreateContainer within sandbox \"f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1ac7bc01a991513934e6431c92efe9aeee274ca80176a817437deeacc4393793\"" Sep 10 00:49:33.287524 containerd[1572]: time="2025-09-10T00:49:33.287476616Z" level=info msg="StartContainer for \"1ac7bc01a991513934e6431c92efe9aeee274ca80176a817437deeacc4393793\"" Sep 10 00:49:33.334471 systemd-networkd[1243]: calia587709d6e8: Link UP Sep 10 00:49:33.334696 systemd-networkd[1243]: calia587709d6e8: Gained carrier Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.265 [INFO][4403] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wdgls-eth0 csi-node-driver- calico-system 0dcf04d3-5b02-40dc-800c-0985ae063919 1006 0 2025-09-10 00:49:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wdgls eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia587709d6e8 [] [] }} ContainerID="886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" Namespace="calico-system" Pod="csi-node-driver-wdgls" WorkloadEndpoint="localhost-k8s-csi--node--driver--wdgls-" Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.265 [INFO][4403] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" Namespace="calico-system" Pod="csi-node-driver-wdgls" WorkloadEndpoint="localhost-k8s-csi--node--driver--wdgls-eth0" Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.296 [INFO][4418] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" HandleID="k8s-pod-network.886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" Workload="localhost-k8s-csi--node--driver--wdgls-eth0" Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.296 [INFO][4418] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" HandleID="k8s-pod-network.886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" Workload="localhost-k8s-csi--node--driver--wdgls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wdgls", "timestamp":"2025-09-10 00:49:33.296215935 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.296 [INFO][4418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.296 [INFO][4418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.296 [INFO][4418] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.303 [INFO][4418] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" host="localhost" Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.307 [INFO][4418] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.310 [INFO][4418] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.312 [INFO][4418] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.313 [INFO][4418] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.313 [INFO][4418] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" host="localhost" Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.315 [INFO][4418] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781 Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.322 [INFO][4418] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" host="localhost" Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.326 [INFO][4418] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" host="localhost" Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.326 [INFO][4418] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" host="localhost" Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.326 [INFO][4418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:49:33.355543 containerd[1572]: 2025-09-10 00:49:33.327 [INFO][4418] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" HandleID="k8s-pod-network.886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" Workload="localhost-k8s-csi--node--driver--wdgls-eth0" Sep 10 00:49:33.356133 containerd[1572]: 2025-09-10 00:49:33.331 [INFO][4403] cni-plugin/k8s.go 418: Populated endpoint ContainerID="886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" Namespace="calico-system" Pod="csi-node-driver-wdgls" WorkloadEndpoint="localhost-k8s-csi--node--driver--wdgls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wdgls-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0dcf04d3-5b02-40dc-800c-0985ae063919", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wdgls", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia587709d6e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:33.356133 containerd[1572]: 2025-09-10 00:49:33.331 [INFO][4403] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" Namespace="calico-system" Pod="csi-node-driver-wdgls" WorkloadEndpoint="localhost-k8s-csi--node--driver--wdgls-eth0" Sep 10 00:49:33.356133 containerd[1572]: 2025-09-10 00:49:33.331 [INFO][4403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia587709d6e8 ContainerID="886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" Namespace="calico-system" Pod="csi-node-driver-wdgls" WorkloadEndpoint="localhost-k8s-csi--node--driver--wdgls-eth0" Sep 10 00:49:33.356133 containerd[1572]: 2025-09-10 00:49:33.333 [INFO][4403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" Namespace="calico-system" Pod="csi-node-driver-wdgls" WorkloadEndpoint="localhost-k8s-csi--node--driver--wdgls-eth0" Sep 10 00:49:33.356133 containerd[1572]: 2025-09-10 00:49:33.335 [INFO][4403] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" Namespace="calico-system" Pod="csi-node-driver-wdgls" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--wdgls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wdgls-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0dcf04d3-5b02-40dc-800c-0985ae063919", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781", Pod:"csi-node-driver-wdgls", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia587709d6e8", MAC:"62:83:ed:4f:04:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:33.356133 containerd[1572]: 2025-09-10 00:49:33.349 [INFO][4403] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781" Namespace="calico-system" Pod="csi-node-driver-wdgls" WorkloadEndpoint="localhost-k8s-csi--node--driver--wdgls-eth0" Sep 10 00:49:33.375237 containerd[1572]: time="2025-09-10T00:49:33.375187194Z" level=info msg="StartContainer for \"1ac7bc01a991513934e6431c92efe9aeee274ca80176a817437deeacc4393793\" returns successfully" Sep 10 00:49:33.378689 containerd[1572]: time="2025-09-10T00:49:33.378646306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 00:49:33.384866 containerd[1572]: time="2025-09-10T00:49:33.384563308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:49:33.384866 containerd[1572]: time="2025-09-10T00:49:33.384628739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:49:33.384866 containerd[1572]: time="2025-09-10T00:49:33.384643587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:33.384866 containerd[1572]: time="2025-09-10T00:49:33.384752924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:33.414961 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:49:33.430859 containerd[1572]: time="2025-09-10T00:49:33.430820117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wdgls,Uid:0dcf04d3-5b02-40dc-800c-0985ae063919,Namespace:calico-system,Attempt:1,} returns sandbox id \"886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781\"" Sep 10 00:49:34.498478 systemd-networkd[1243]: calia587709d6e8: Gained IPv6LL Sep 10 00:49:35.117907 containerd[1572]: time="2025-09-10T00:49:35.117481742Z" level=info msg="StopPodSandbox for \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\"" Sep 10 00:49:35.206593 containerd[1572]: 2025-09-10 00:49:35.164 [INFO][4525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Sep 10 00:49:35.206593 containerd[1572]: 2025-09-10 00:49:35.164 [INFO][4525] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" iface="eth0" netns="/var/run/netns/cni-6775e585-f7f8-80c3-16d8-6579a561a304" Sep 10 00:49:35.206593 containerd[1572]: 2025-09-10 00:49:35.165 [INFO][4525] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" iface="eth0" netns="/var/run/netns/cni-6775e585-f7f8-80c3-16d8-6579a561a304" Sep 10 00:49:35.206593 containerd[1572]: 2025-09-10 00:49:35.166 [INFO][4525] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" iface="eth0" netns="/var/run/netns/cni-6775e585-f7f8-80c3-16d8-6579a561a304" Sep 10 00:49:35.206593 containerd[1572]: 2025-09-10 00:49:35.166 [INFO][4525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Sep 10 00:49:35.206593 containerd[1572]: 2025-09-10 00:49:35.166 [INFO][4525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Sep 10 00:49:35.206593 containerd[1572]: 2025-09-10 00:49:35.195 [INFO][4538] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" HandleID="k8s-pod-network.43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:35.206593 containerd[1572]: 2025-09-10 00:49:35.195 [INFO][4538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:35.206593 containerd[1572]: 2025-09-10 00:49:35.195 [INFO][4538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:35.206593 containerd[1572]: 2025-09-10 00:49:35.199 [WARNING][4538] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" HandleID="k8s-pod-network.43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:35.206593 containerd[1572]: 2025-09-10 00:49:35.199 [INFO][4538] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" HandleID="k8s-pod-network.43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:35.206593 containerd[1572]: 2025-09-10 00:49:35.200 [INFO][4538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:35.206593 containerd[1572]: 2025-09-10 00:49:35.203 [INFO][4525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Sep 10 00:49:35.207105 containerd[1572]: time="2025-09-10T00:49:35.206997893Z" level=info msg="TearDown network for sandbox \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\" successfully" Sep 10 00:49:35.207105 containerd[1572]: time="2025-09-10T00:49:35.207035177Z" level=info msg="StopPodSandbox for \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\" returns successfully" Sep 10 00:49:35.208365 containerd[1572]: time="2025-09-10T00:49:35.208333592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4bb6c97d-rwxfm,Uid:84107d46-758d-49e3-9857-f23097a54b8f,Namespace:calico-apiserver,Attempt:1,}" Sep 10 00:49:35.210660 systemd[1]: run-netns-cni\x2d6775e585\x2df7f8\x2d80c3\x2d16d8\x2d6579a561a304.mount: Deactivated successfully. Sep 10 00:49:35.367507 systemd[1]: Started sshd@8-10.0.0.156:22-10.0.0.1:42404.service - OpenSSH per-connection server daemon (10.0.0.1:42404). 
Sep 10 00:49:35.392852 systemd-networkd[1243]: cali241eea38a0d: Link UP Sep 10 00:49:35.393341 systemd-networkd[1243]: cali241eea38a0d: Gained carrier Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.265 [INFO][4545] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0 calico-apiserver-6d4bb6c97d- calico-apiserver 84107d46-758d-49e3-9857-f23097a54b8f 1025 0 2025-09-10 00:49:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d4bb6c97d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d4bb6c97d-rwxfm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali241eea38a0d [] [] }} ContainerID="186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-rwxfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-" Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.265 [INFO][4545] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-rwxfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.300 [INFO][4559] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" HandleID="k8s-pod-network.186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.300 [INFO][4559] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" HandleID="k8s-pod-network.186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d4bb6c97d-rwxfm", "timestamp":"2025-09-10 00:49:35.300485489 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.300 [INFO][4559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.300 [INFO][4559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.300 [INFO][4559] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.307 [INFO][4559] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" host="localhost" Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.312 [INFO][4559] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.317 [INFO][4559] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.321 [INFO][4559] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.323 [INFO][4559] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.323 [INFO][4559] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" host="localhost" Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.324 [INFO][4559] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896 Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.344 [INFO][4559] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" host="localhost" Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.383 [INFO][4559] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" host="localhost" Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.383 [INFO][4559] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" host="localhost" Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.383 [INFO][4559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:49:35.410338 containerd[1572]: 2025-09-10 00:49:35.383 [INFO][4559] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" HandleID="k8s-pod-network.186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:35.411104 containerd[1572]: 2025-09-10 00:49:35.387 [INFO][4545] cni-plugin/k8s.go 418: Populated endpoint ContainerID="186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-rwxfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0", GenerateName:"calico-apiserver-6d4bb6c97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"84107d46-758d-49e3-9857-f23097a54b8f", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4bb6c97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d4bb6c97d-rwxfm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali241eea38a0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:35.411104 containerd[1572]: 2025-09-10 00:49:35.388 [INFO][4545] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-rwxfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:35.411104 containerd[1572]: 2025-09-10 00:49:35.388 [INFO][4545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali241eea38a0d ContainerID="186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-rwxfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:35.411104 containerd[1572]: 2025-09-10 00:49:35.390 [INFO][4545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-rwxfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:35.411104 containerd[1572]: 2025-09-10 00:49:35.394 [INFO][4545] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-rwxfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0", GenerateName:"calico-apiserver-6d4bb6c97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"84107d46-758d-49e3-9857-f23097a54b8f", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4bb6c97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896", Pod:"calico-apiserver-6d4bb6c97d-rwxfm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali241eea38a0d", MAC:"be:f6:28:7a:58:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:35.411104 containerd[1572]: 2025-09-10 00:49:35.406 [INFO][4545] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-rwxfm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:35.431339 sshd[4568]: Accepted publickey for core from 10.0.0.1 port 42404 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:49:35.432307 sshd[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:49:35.438462 systemd-logind[1548]: New session 9 of user core. Sep 10 00:49:35.440982 containerd[1572]: time="2025-09-10T00:49:35.440861859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:49:35.441295 containerd[1572]: time="2025-09-10T00:49:35.441063910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:49:35.441295 containerd[1572]: time="2025-09-10T00:49:35.441083329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:35.441295 containerd[1572]: time="2025-09-10T00:49:35.441193988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:35.446631 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 10 00:49:35.478633 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:49:35.534001 containerd[1572]: time="2025-09-10T00:49:35.533956215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4bb6c97d-rwxfm,Uid:84107d46-758d-49e3-9857-f23097a54b8f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896\"" Sep 10 00:49:35.607858 sshd[4568]: pam_unix(sshd:session): session closed for user core Sep 10 00:49:35.612291 systemd[1]: sshd@8-10.0.0.156:22-10.0.0.1:42404.service: Deactivated successfully. Sep 10 00:49:35.616070 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 00:49:35.617359 systemd-logind[1548]: Session 9 logged out. Waiting for processes to exit. Sep 10 00:49:35.618641 systemd-logind[1548]: Removed session 9. Sep 10 00:49:35.653812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1652511941.mount: Deactivated successfully. Sep 10 00:49:36.117443 containerd[1572]: time="2025-09-10T00:49:36.117376804Z" level=info msg="StopPodSandbox for \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\"" Sep 10 00:49:36.118198 containerd[1572]: time="2025-09-10T00:49:36.117873278Z" level=info msg="StopPodSandbox for \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\"" Sep 10 00:49:36.118198 containerd[1572]: time="2025-09-10T00:49:36.117957034Z" level=info msg="StopPodSandbox for \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\"" Sep 10 00:49:36.994502 systemd-networkd[1243]: cali241eea38a0d: Gained IPv6LL Sep 10 00:49:37.118161 containerd[1572]: time="2025-09-10T00:49:37.118105446Z" level=info msg="StopPodSandbox for \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\"" Sep 10 00:49:37.332510 containerd[1572]: time="2025-09-10T00:49:37.332468134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:37.410036 containerd[1572]: time="2025-09-10T00:49:37.409962766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 10 00:49:37.463621 containerd[1572]: time="2025-09-10T00:49:37.463547970Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:37.480410 containerd[1572]: time="2025-09-10T00:49:37.480326950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:37.483894 containerd[1572]: time="2025-09-10T00:49:37.483817421Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.10512319s" Sep 10 00:49:37.483894 containerd[1572]: time="2025-09-10T00:49:37.483871497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference 
\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 10 00:49:37.491219 containerd[1572]: time="2025-09-10T00:49:37.491168045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 10 00:49:37.492358 containerd[1572]: time="2025-09-10T00:49:37.492312880Z" level=info msg="CreateContainer within sandbox \"f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 00:49:37.565849 containerd[1572]: 2025-09-10 00:49:37.473 [INFO][4682] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" Sep 10 00:49:37.565849 containerd[1572]: 2025-09-10 00:49:37.474 [INFO][4682] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" iface="eth0" netns="/var/run/netns/cni-a3437859-d229-7731-e8f6-dd3044921ad3" Sep 10 00:49:37.565849 containerd[1572]: 2025-09-10 00:49:37.474 [INFO][4682] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" iface="eth0" netns="/var/run/netns/cni-a3437859-d229-7731-e8f6-dd3044921ad3" Sep 10 00:49:37.565849 containerd[1572]: 2025-09-10 00:49:37.475 [INFO][4682] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" iface="eth0" netns="/var/run/netns/cni-a3437859-d229-7731-e8f6-dd3044921ad3" Sep 10 00:49:37.565849 containerd[1572]: 2025-09-10 00:49:37.475 [INFO][4682] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" Sep 10 00:49:37.565849 containerd[1572]: 2025-09-10 00:49:37.475 [INFO][4682] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" Sep 10 00:49:37.565849 containerd[1572]: 2025-09-10 00:49:37.509 [INFO][4724] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" HandleID="k8s-pod-network.cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0" Sep 10 00:49:37.565849 containerd[1572]: 2025-09-10 00:49:37.509 [INFO][4724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:37.565849 containerd[1572]: 2025-09-10 00:49:37.509 [INFO][4724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:37.565849 containerd[1572]: 2025-09-10 00:49:37.555 [WARNING][4724] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" HandleID="k8s-pod-network.cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0" Sep 10 00:49:37.565849 containerd[1572]: 2025-09-10 00:49:37.555 [INFO][4724] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" HandleID="k8s-pod-network.cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0" Sep 10 00:49:37.565849 containerd[1572]: 2025-09-10 00:49:37.557 [INFO][4724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:37.565849 containerd[1572]: 2025-09-10 00:49:37.562 [INFO][4682] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" Sep 10 00:49:37.570264 containerd[1572]: time="2025-09-10T00:49:37.568374475Z" level=info msg="TearDown network for sandbox \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\" successfully" Sep 10 00:49:37.570264 containerd[1572]: time="2025-09-10T00:49:37.568412410Z" level=info msg="StopPodSandbox for \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\" returns successfully" Sep 10 00:49:37.571166 containerd[1572]: time="2025-09-10T00:49:37.570932490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4bb6c97d-crcn2,Uid:d173b8ad-d1e2-4ab1-adf4-074e01bc5a59,Namespace:calico-apiserver,Attempt:1,}" Sep 10 00:49:37.572423 systemd[1]: run-netns-cni\x2da3437859\x2dd229\x2d7731\x2de8f6\x2ddd3044921ad3.mount: Deactivated successfully. Sep 10 00:49:37.661444 containerd[1572]: 2025-09-10 00:49:37.474 [INFO][4674] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Sep 10 00:49:37.661444 containerd[1572]: 2025-09-10 00:49:37.474 [INFO][4674] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" iface="eth0" netns="/var/run/netns/cni-9f9edd02-a419-9c73-3844-4c2b202f2a4e" Sep 10 00:49:37.661444 containerd[1572]: 2025-09-10 00:49:37.475 [INFO][4674] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" iface="eth0" netns="/var/run/netns/cni-9f9edd02-a419-9c73-3844-4c2b202f2a4e" Sep 10 00:49:37.661444 containerd[1572]: 2025-09-10 00:49:37.477 [INFO][4674] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" iface="eth0" netns="/var/run/netns/cni-9f9edd02-a419-9c73-3844-4c2b202f2a4e" Sep 10 00:49:37.661444 containerd[1572]: 2025-09-10 00:49:37.477 [INFO][4674] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Sep 10 00:49:37.661444 containerd[1572]: 2025-09-10 00:49:37.477 [INFO][4674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Sep 10 00:49:37.661444 containerd[1572]: 2025-09-10 00:49:37.510 [INFO][4727] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" HandleID="k8s-pod-network.29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Workload="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:37.661444 containerd[1572]: 2025-09-10 00:49:37.511 [INFO][4727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:37.661444 containerd[1572]: 2025-09-10 00:49:37.558 [INFO][4727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:37.661444 containerd[1572]: 2025-09-10 00:49:37.563 [WARNING][4727] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" HandleID="k8s-pod-network.29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Workload="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:37.661444 containerd[1572]: 2025-09-10 00:49:37.563 [INFO][4727] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" HandleID="k8s-pod-network.29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Workload="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:37.661444 containerd[1572]: 2025-09-10 00:49:37.653 [INFO][4727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:37.661444 containerd[1572]: 2025-09-10 00:49:37.656 [INFO][4674] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Sep 10 00:49:37.662000 containerd[1572]: time="2025-09-10T00:49:37.661522189Z" level=info msg="TearDown network for sandbox \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\" successfully" Sep 10 00:49:37.662000 containerd[1572]: time="2025-09-10T00:49:37.661553661Z" level=info msg="StopPodSandbox for \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\" returns successfully" Sep 10 00:49:37.664312 containerd[1572]: time="2025-09-10T00:49:37.664263757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-cvrzx,Uid:27842233-86b6-4b4f-9f34-c14a660239b3,Namespace:calico-system,Attempt:1,}" Sep 10 00:49:37.669169 systemd[1]: run-netns-cni\x2d9f9edd02\x2da419\x2d9c73\x2d3844\x2d4c2b202f2a4e.mount: Deactivated successfully. Sep 10 00:49:37.676694 containerd[1572]: 2025-09-10 00:49:37.474 [INFO][4673] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Sep 10 00:49:37.676694 containerd[1572]: 2025-09-10 00:49:37.474 [INFO][4673] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" iface="eth0" netns="/var/run/netns/cni-27e12c3f-8297-f3ab-86b2-6376f43b0b1a" Sep 10 00:49:37.676694 containerd[1572]: 2025-09-10 00:49:37.475 [INFO][4673] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" iface="eth0" netns="/var/run/netns/cni-27e12c3f-8297-f3ab-86b2-6376f43b0b1a" Sep 10 00:49:37.676694 containerd[1572]: 2025-09-10 00:49:37.475 [INFO][4673] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" iface="eth0" netns="/var/run/netns/cni-27e12c3f-8297-f3ab-86b2-6376f43b0b1a" Sep 10 00:49:37.676694 containerd[1572]: 2025-09-10 00:49:37.475 [INFO][4673] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Sep 10 00:49:37.676694 containerd[1572]: 2025-09-10 00:49:37.475 [INFO][4673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Sep 10 00:49:37.676694 containerd[1572]: 2025-09-10 00:49:37.524 [INFO][4725] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" HandleID="k8s-pod-network.d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Workload="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:37.676694 containerd[1572]: 2025-09-10 00:49:37.524 [INFO][4725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:37.676694 containerd[1572]: 2025-09-10 00:49:37.653 [INFO][4725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:37.676694 containerd[1572]: 2025-09-10 00:49:37.662 [WARNING][4725] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" HandleID="k8s-pod-network.d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Workload="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:37.676694 containerd[1572]: 2025-09-10 00:49:37.662 [INFO][4725] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" HandleID="k8s-pod-network.d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Workload="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:37.676694 containerd[1572]: 2025-09-10 00:49:37.666 [INFO][4725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:37.676694 containerd[1572]: 2025-09-10 00:49:37.672 [INFO][4673] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Sep 10 00:49:37.676694 containerd[1572]: time="2025-09-10T00:49:37.675606174Z" level=info msg="TearDown network for sandbox \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\" successfully" Sep 10 00:49:37.676694 containerd[1572]: time="2025-09-10T00:49:37.675648969Z" level=info msg="StopPodSandbox for \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\" returns successfully" Sep 10 00:49:37.677312 kubelet[2656]: E0910 00:49:37.676019 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:37.677726 containerd[1572]: time="2025-09-10T00:49:37.676802592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7hl4n,Uid:153fa74c-658f-4fb6-a953-ee6d15f43e30,Namespace:kube-system,Attempt:1,}" Sep 10 00:49:37.680536 systemd[1]: run-netns-cni\x2d27e12c3f\x2d8297\x2df3ab\x2d86b2\x2d6376f43b0b1a.mount: Deactivated successfully. Sep 10 00:49:37.686276 containerd[1572]: 2025-09-10 00:49:37.554 [INFO][4712] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Sep 10 00:49:37.686276 containerd[1572]: 2025-09-10 00:49:37.555 [INFO][4712] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" iface="eth0" netns="/var/run/netns/cni-cd6bcb4c-ad54-44e7-4b89-0a3e0bade9dd" Sep 10 00:49:37.686276 containerd[1572]: 2025-09-10 00:49:37.555 [INFO][4712] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" iface="eth0" netns="/var/run/netns/cni-cd6bcb4c-ad54-44e7-4b89-0a3e0bade9dd" Sep 10 00:49:37.686276 containerd[1572]: 2025-09-10 00:49:37.555 [INFO][4712] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" iface="eth0" netns="/var/run/netns/cni-cd6bcb4c-ad54-44e7-4b89-0a3e0bade9dd" Sep 10 00:49:37.686276 containerd[1572]: 2025-09-10 00:49:37.555 [INFO][4712] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Sep 10 00:49:37.686276 containerd[1572]: 2025-09-10 00:49:37.555 [INFO][4712] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Sep 10 00:49:37.686276 containerd[1572]: 2025-09-10 00:49:37.582 [INFO][4750] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" HandleID="k8s-pod-network.572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Workload="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" Sep 10 00:49:37.686276 containerd[1572]: 2025-09-10 00:49:37.582 [INFO][4750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:37.686276 containerd[1572]: 2025-09-10 00:49:37.666 [INFO][4750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:37.686276 containerd[1572]: 2025-09-10 00:49:37.672 [WARNING][4750] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" HandleID="k8s-pod-network.572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Workload="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" Sep 10 00:49:37.686276 containerd[1572]: 2025-09-10 00:49:37.672 [INFO][4750] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" HandleID="k8s-pod-network.572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Workload="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" Sep 10 00:49:37.686276 containerd[1572]: 2025-09-10 00:49:37.673 [INFO][4750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:37.686276 containerd[1572]: 2025-09-10 00:49:37.679 [INFO][4712] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Sep 10 00:49:37.688451 containerd[1572]: time="2025-09-10T00:49:37.688378201Z" level=info msg="TearDown network for sandbox \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\" successfully" Sep 10 00:49:37.688451 containerd[1572]: time="2025-09-10T00:49:37.688421787Z" level=info msg="StopPodSandbox for \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\" returns successfully" Sep 10 00:49:37.689372 kubelet[2656]: E0910 00:49:37.689341 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:37.690477 containerd[1572]: time="2025-09-10T00:49:37.690099790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mmgft,Uid:a6d2bfa2-402e-4e85-8163-cc4a99a0ed75,Namespace:kube-system,Attempt:1,}" Sep 10 00:49:37.698128 containerd[1572]: time="2025-09-10T00:49:37.698081142Z" level=info msg="CreateContainer within sandbox \"f697da88c729462e9272ab23fb004b8c0a8de84d945464344c6cc29a69f42057\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9fe7611c1bd8b560a506936ce02247e19a1fa7e853a6fa7150866c6bde72b6df\"" Sep 10 00:49:37.700563 containerd[1572]: time="2025-09-10T00:49:37.700519199Z" level=info msg="StartContainer for \"9fe7611c1bd8b560a506936ce02247e19a1fa7e853a6fa7150866c6bde72b6df\"" Sep 10 00:49:37.885526 containerd[1572]: time="2025-09-10T00:49:37.885393873Z" level=info msg="StartContainer for \"9fe7611c1bd8b560a506936ce02247e19a1fa7e853a6fa7150866c6bde72b6df\" returns successfully" Sep 10 00:49:37.896258 systemd-networkd[1243]: cali050ff5cd593: Link UP Sep 10 00:49:37.897354 systemd-networkd[1243]: cali050ff5cd593: Gained carrier Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.759 [INFO][4771] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--cvrzx-eth0 goldmane-7988f88666- calico-system 27842233-86b6-4b4f-9f34-c14a660239b3 1042 0 2025-09-10 00:49:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-cvrzx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali050ff5cd593 [] [] }} ContainerID="84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" Namespace="calico-system" Pod="goldmane-7988f88666-cvrzx" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cvrzx-" Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.759 [INFO][4771] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" Namespace="calico-system" Pod="goldmane-7988f88666-cvrzx" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.823 [INFO][4830] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" HandleID="k8s-pod-network.84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" Workload="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.825 [INFO][4830] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" HandleID="k8s-pod-network.84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" Workload="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000538190), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-cvrzx", "timestamp":"2025-09-10 00:49:37.823876013 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.825 [INFO][4830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.825 [INFO][4830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.825 [INFO][4830] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.835 [INFO][4830] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" host="localhost" Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.841 [INFO][4830] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.850 [INFO][4830] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.852 [INFO][4830] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.855 [INFO][4830] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.855 [INFO][4830] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" host="localhost" Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.857 [INFO][4830] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8 Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.868 [INFO][4830] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" host="localhost" Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.875 [INFO][4830] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" host="localhost" Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.875 [INFO][4830] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" host="localhost" Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.875 [INFO][4830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:49:37.932055 containerd[1572]: 2025-09-10 00:49:37.875 [INFO][4830] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" HandleID="k8s-pod-network.84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" Workload="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:37.933396 containerd[1572]: 2025-09-10 00:49:37.892 [INFO][4771] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" Namespace="calico-system" Pod="goldmane-7988f88666-cvrzx" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--cvrzx-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"27842233-86b6-4b4f-9f34-c14a660239b3", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-cvrzx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali050ff5cd593", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:37.933396 containerd[1572]: 2025-09-10 00:49:37.892 [INFO][4771] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" Namespace="calico-system" Pod="goldmane-7988f88666-cvrzx" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:37.933396 containerd[1572]: 2025-09-10 00:49:37.892 [INFO][4771] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali050ff5cd593 ContainerID="84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" Namespace="calico-system" Pod="goldmane-7988f88666-cvrzx" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:37.933396 containerd[1572]: 2025-09-10 00:49:37.897 [INFO][4771] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" Namespace="calico-system" Pod="goldmane-7988f88666-cvrzx" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:37.933396 containerd[1572]: 2025-09-10 00:49:37.899 [INFO][4771] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" Namespace="calico-system" Pod="goldmane-7988f88666-cvrzx" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--cvrzx-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"27842233-86b6-4b4f-9f34-c14a660239b3", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8", Pod:"goldmane-7988f88666-cvrzx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali050ff5cd593", MAC:"62:f7:cb:88:39:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:37.933396 containerd[1572]: 2025-09-10 00:49:37.917 [INFO][4771] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8" Namespace="calico-system" Pod="goldmane-7988f88666-cvrzx" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:37.974013 containerd[1572]: time="2025-09-10T00:49:37.973396950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:49:37.974013 containerd[1572]: time="2025-09-10T00:49:37.973459775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:49:37.974013 containerd[1572]: time="2025-09-10T00:49:37.973491247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:37.974013 containerd[1572]: time="2025-09-10T00:49:37.973599001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:37.991306 systemd-networkd[1243]: calia53281ce7ec: Link UP Sep 10 00:49:37.993313 systemd-networkd[1243]: calia53281ce7ec: Gained carrier Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.775 [INFO][4759] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0 calico-apiserver-6d4bb6c97d- calico-apiserver d173b8ad-d1e2-4ab1-adf4-074e01bc5a59 1041 0 2025-09-10 00:49:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d4bb6c97d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d4bb6c97d-crcn2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia53281ce7ec [] [] }} ContainerID="5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-crcn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-" Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.775 [INFO][4759] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-crcn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0" Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.829 [INFO][4846] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" HandleID="k8s-pod-network.5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0" Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.830 [INFO][4846] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" HandleID="k8s-pod-network.5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a2f90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d4bb6c97d-crcn2", "timestamp":"2025-09-10 00:49:37.829831575 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.831 [INFO][4846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.876 [INFO][4846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.876 [INFO][4846] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.935 [INFO][4846] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" host="localhost" Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.941 [INFO][4846] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.948 [INFO][4846] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.953 [INFO][4846] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.959 [INFO][4846] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.959 [INFO][4846] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" host="localhost" Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.964 [INFO][4846] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77 Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.969 [INFO][4846] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" host="localhost" Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.979 [INFO][4846] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" host="localhost" Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.979 [INFO][4846] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" host="localhost" Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.979 [INFO][4846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:49:38.016552 containerd[1572]: 2025-09-10 00:49:37.979 [INFO][4846] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" HandleID="k8s-pod-network.5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0" Sep 10 00:49:38.017114 containerd[1572]: 2025-09-10 00:49:37.984 [INFO][4759] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-crcn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0", GenerateName:"calico-apiserver-6d4bb6c97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d173b8ad-d1e2-4ab1-adf4-074e01bc5a59", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4bb6c97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d4bb6c97d-crcn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia53281ce7ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:38.017114 containerd[1572]: 2025-09-10 00:49:37.984 [INFO][4759] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-crcn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0" Sep 10 00:49:38.017114 containerd[1572]: 2025-09-10 00:49:37.984 [INFO][4759] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia53281ce7ec ContainerID="5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-crcn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0" Sep 10 00:49:38.017114 containerd[1572]: 2025-09-10 00:49:37.998 [INFO][4759] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-crcn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0" Sep 10 00:49:38.017114 containerd[1572]: 2025-09-10 00:49:37.998 [INFO][4759] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-crcn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0", GenerateName:"calico-apiserver-6d4bb6c97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d173b8ad-d1e2-4ab1-adf4-074e01bc5a59", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4bb6c97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77", Pod:"calico-apiserver-6d4bb6c97d-crcn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia53281ce7ec", MAC:"6e:45:4b:b9:3a:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:38.017114 containerd[1572]: 2025-09-10 00:49:38.012 [INFO][4759] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77" Namespace="calico-apiserver" Pod="calico-apiserver-6d4bb6c97d-crcn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0" Sep 10 00:49:38.027303 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:49:38.040052 containerd[1572]: time="2025-09-10T00:49:38.039935657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:49:38.040052 containerd[1572]: time="2025-09-10T00:49:38.039990847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:49:38.040052 containerd[1572]: time="2025-09-10T00:49:38.040016897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:38.040298 containerd[1572]: time="2025-09-10T00:49:38.040141704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:38.087284 containerd[1572]: time="2025-09-10T00:49:38.087216023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-cvrzx,Uid:27842233-86b6-4b4f-9f34-c14a660239b3,Namespace:calico-system,Attempt:1,} returns sandbox id \"84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8\"" Sep 10 00:49:38.087444 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:49:38.089457 systemd-networkd[1243]: cali4a7338993f5: Link UP Sep 10 00:49:38.091518 systemd-networkd[1243]: cali4a7338993f5: Gained carrier Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:37.816 [INFO][4793] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0 coredns-7c65d6cfc9- kube-system 153fa74c-658f-4fb6-a953-ee6d15f43e30 1043 0 2025-09-10 00:48:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-7hl4n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4a7338993f5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7hl4n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7hl4n-" Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:37.817 [INFO][4793] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7hl4n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:37.886 [INFO][4859] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" HandleID="k8s-pod-network.4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" Workload="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:37.886 [INFO][4859] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" HandleID="k8s-pod-network.4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" Workload="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fcd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-7hl4n", "timestamp":"2025-09-10 00:49:37.886377691 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:37.886 [INFO][4859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:37.979 [INFO][4859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:37.980 [INFO][4859] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:38.036 [INFO][4859] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" host="localhost" Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:38.041 [INFO][4859] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:38.050 [INFO][4859] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:38.052 [INFO][4859] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:38.054 [INFO][4859] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:38.054 [INFO][4859] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" host="localhost" Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:38.057 [INFO][4859] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265 Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:38.061 [INFO][4859] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" host="localhost" Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:38.069 [INFO][4859] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" host="localhost" Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:38.069 [INFO][4859] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" host="localhost" Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:38.069 [INFO][4859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:49:38.116897 containerd[1572]: 2025-09-10 00:49:38.069 [INFO][4859] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" HandleID="k8s-pod-network.4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" Workload="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:38.118173 containerd[1572]: 2025-09-10 00:49:38.083 [INFO][4793] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7hl4n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"153fa74c-658f-4fb6-a953-ee6d15f43e30", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-7hl4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a7338993f5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:38.118173 containerd[1572]: 2025-09-10 00:49:38.083 [INFO][4793] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7hl4n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:38.118173 containerd[1572]: 2025-09-10 00:49:38.084 [INFO][4793] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a7338993f5 ContainerID="4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7hl4n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:38.118173 containerd[1572]: 2025-09-10 00:49:38.089 [INFO][4793] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7hl4n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:38.118173 
containerd[1572]: 2025-09-10 00:49:38.091 [INFO][4793] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7hl4n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"153fa74c-658f-4fb6-a953-ee6d15f43e30", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265", Pod:"coredns-7c65d6cfc9-7hl4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a7338993f5", MAC:"3a:e7:39:b9:59:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:38.118173 containerd[1572]: 2025-09-10 00:49:38.109 [INFO][4793] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7hl4n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:38.119128 containerd[1572]: time="2025-09-10T00:49:38.118773842Z" level=info msg="StopPodSandbox for \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\"" Sep 10 00:49:38.136626 containerd[1572]: time="2025-09-10T00:49:38.136503111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4bb6c97d-crcn2,Uid:d173b8ad-d1e2-4ab1-adf4-074e01bc5a59,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77\"" Sep 10 00:49:38.149710 containerd[1572]: time="2025-09-10T00:49:38.149569864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:49:38.150014 containerd[1572]: time="2025-09-10T00:49:38.149660343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:49:38.150122 containerd[1572]: time="2025-09-10T00:49:38.149793887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:38.150603 containerd[1572]: time="2025-09-10T00:49:38.150442190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:38.186294 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:49:38.191626 systemd-networkd[1243]: cali9b2184f2fba: Link UP Sep 10 00:49:38.196388 systemd-networkd[1243]: cali9b2184f2fba: Gained carrier Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:37.823 [INFO][4799] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0 coredns-7c65d6cfc9- kube-system a6d2bfa2-402e-4e85-8163-cc4a99a0ed75 1047 0 2025-09-10 00:48:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-mmgft eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9b2184f2fba [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mmgft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mmgft-" Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:37.823 [INFO][4799] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mmgft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:37.898 [INFO][4866] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" HandleID="k8s-pod-network.f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" Workload="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:37.898 [INFO][4866] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" HandleID="k8s-pod-network.f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" Workload="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000510910), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-mmgft", "timestamp":"2025-09-10 00:49:37.898464792 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:37.901 [INFO][4866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:38.069 [INFO][4866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:38.070 [INFO][4866] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:38.138 [INFO][4866] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" host="localhost" Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:38.145 [INFO][4866] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:38.149 [INFO][4866] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:38.151 [INFO][4866] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:38.153 [INFO][4866] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:38.153 [INFO][4866] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" host="localhost" Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:38.155 [INFO][4866] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727 Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:38.160 [INFO][4866] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" host="localhost" Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:38.170 [INFO][4866] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" host="localhost" Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:38.170 [INFO][4866] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" host="localhost" Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:38.170 [INFO][4866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:49:38.220462 containerd[1572]: 2025-09-10 00:49:38.170 [INFO][4866] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" HandleID="k8s-pod-network.f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" Workload="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" Sep 10 00:49:38.221180 containerd[1572]: 2025-09-10 00:49:38.175 [INFO][4799] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mmgft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a6d2bfa2-402e-4e85-8163-cc4a99a0ed75", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-mmgft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b2184f2fba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:38.221180 containerd[1572]: 2025-09-10 00:49:38.175 [INFO][4799] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mmgft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" Sep 10 00:49:38.221180 containerd[1572]: 2025-09-10 00:49:38.175 [INFO][4799] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b2184f2fba ContainerID="f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mmgft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" Sep 10 00:49:38.221180 containerd[1572]: 2025-09-10 00:49:38.199 [INFO][4799] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mmgft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" Sep 10 00:49:38.221180 
containerd[1572]: 2025-09-10 00:49:38.201 [INFO][4799] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mmgft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a6d2bfa2-402e-4e85-8163-cc4a99a0ed75", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727", Pod:"coredns-7c65d6cfc9-mmgft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b2184f2fba", MAC:"06:6e:be:e2:fd:28", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:38.221180 containerd[1572]: 2025-09-10 00:49:38.215 [INFO][4799] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mmgft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" Sep 10 00:49:38.222800 containerd[1572]: time="2025-09-10T00:49:38.222740926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7hl4n,Uid:153fa74c-658f-4fb6-a953-ee6d15f43e30,Namespace:kube-system,Attempt:1,} returns sandbox id \"4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265\"" Sep 10 00:49:38.225314 kubelet[2656]: E0910 00:49:38.224551 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:38.229081 containerd[1572]: time="2025-09-10T00:49:38.228806306Z" level=info msg="CreateContainer within sandbox \"4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:49:38.251039 containerd[1572]: time="2025-09-10T00:49:38.250450561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:49:38.251039 containerd[1572]: time="2025-09-10T00:49:38.250548995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:49:38.251039 containerd[1572]: time="2025-09-10T00:49:38.250564056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:38.251039 containerd[1572]: time="2025-09-10T00:49:38.250851074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:38.253855 containerd[1572]: 2025-09-10 00:49:38.184 [INFO][5018] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Sep 10 00:49:38.253855 containerd[1572]: 2025-09-10 00:49:38.184 [INFO][5018] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" iface="eth0" netns="/var/run/netns/cni-e7d12e37-56be-b2bd-17bb-b10c61835cea" Sep 10 00:49:38.253855 containerd[1572]: 2025-09-10 00:49:38.187 [INFO][5018] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" iface="eth0" netns="/var/run/netns/cni-e7d12e37-56be-b2bd-17bb-b10c61835cea" Sep 10 00:49:38.253855 containerd[1572]: 2025-09-10 00:49:38.190 [INFO][5018] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" iface="eth0" netns="/var/run/netns/cni-e7d12e37-56be-b2bd-17bb-b10c61835cea" Sep 10 00:49:38.253855 containerd[1572]: 2025-09-10 00:49:38.190 [INFO][5018] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Sep 10 00:49:38.253855 containerd[1572]: 2025-09-10 00:49:38.190 [INFO][5018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Sep 10 00:49:38.253855 containerd[1572]: 2025-09-10 00:49:38.234 [INFO][5058] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" HandleID="k8s-pod-network.7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Workload="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:38.253855 containerd[1572]: 2025-09-10 00:49:38.235 [INFO][5058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:38.253855 containerd[1572]: 2025-09-10 00:49:38.235 [INFO][5058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:38.253855 containerd[1572]: 2025-09-10 00:49:38.243 [WARNING][5058] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" HandleID="k8s-pod-network.7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Workload="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:38.253855 containerd[1572]: 2025-09-10 00:49:38.243 [INFO][5058] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" HandleID="k8s-pod-network.7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Workload="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:38.253855 containerd[1572]: 2025-09-10 00:49:38.244 [INFO][5058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:38.253855 containerd[1572]: 2025-09-10 00:49:38.249 [INFO][5018] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Sep 10 00:49:38.255487 containerd[1572]: time="2025-09-10T00:49:38.254024913Z" level=info msg="TearDown network for sandbox \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\" successfully" Sep 10 00:49:38.255487 containerd[1572]: time="2025-09-10T00:49:38.254048740Z" level=info msg="StopPodSandbox for \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\" returns successfully" Sep 10 00:49:38.255487 containerd[1572]: time="2025-09-10T00:49:38.254815467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4cc87785-mvnvv,Uid:05560b5f-0c2f-48f7-b5ef-fa1beda53e1d,Namespace:calico-system,Attempt:1,}" Sep 10 00:49:38.257291 containerd[1572]: time="2025-09-10T00:49:38.257225576Z" level=info msg="CreateContainer within sandbox \"4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c78482c50d5d4c764cb6468b199f581cf9ea33a30f037f701b14ce3ebd981e61\"" Sep 10 00:49:38.258211 containerd[1572]: time="2025-09-10T00:49:38.258174604Z" level=info msg="StartContainer for \"c78482c50d5d4c764cb6468b199f581cf9ea33a30f037f701b14ce3ebd981e61\"" Sep 10 00:49:38.289057 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:49:38.332674 containerd[1572]: time="2025-09-10T00:49:38.332610869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mmgft,Uid:a6d2bfa2-402e-4e85-8163-cc4a99a0ed75,Namespace:kube-system,Attempt:1,} returns sandbox id \"f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727\"" Sep 10 00:49:38.334191 kubelet[2656]: E0910 00:49:38.334068 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:38.337351 containerd[1572]: time="2025-09-10T00:49:38.337287192Z" level=info msg="CreateContainer within sandbox \"f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:49:38.347285 containerd[1572]: time="2025-09-10T00:49:38.347011832Z" level=info msg="StartContainer for \"c78482c50d5d4c764cb6468b199f581cf9ea33a30f037f701b14ce3ebd981e61\" returns successfully" Sep 10 00:49:38.401807 containerd[1572]: time="2025-09-10T00:49:38.401743341Z" level=info msg="CreateContainer within sandbox \"f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b5e466e6b7f775c59e1b1600788d5fd43fde277276a9fef7130c0044f2b507c\"" Sep 10 00:49:38.402584 containerd[1572]: time="2025-09-10T00:49:38.402554566Z" level=info msg="StartContainer for \"4b5e466e6b7f775c59e1b1600788d5fd43fde277276a9fef7130c0044f2b507c\"" Sep 10 00:49:38.437073 systemd-networkd[1243]: cali611ec5823ae: Link UP Sep 10 00:49:38.438235 systemd-networkd[1243]: cali611ec5823ae: Gained carrier Sep 10 00:49:38.507817 containerd[1572]: time="2025-09-10T00:49:38.507748187Z" level=info msg="StartContainer for \"4b5e466e6b7f775c59e1b1600788d5fd43fde277276a9fef7130c0044f2b507c\" returns successfully" Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.325 [INFO][5121] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0 calico-kube-controllers-7b4cc87785- calico-system 05560b5f-0c2f-48f7-b5ef-fa1beda53e1d 1073 0 2025-09-10 00:49:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b4cc87785 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7b4cc87785-mvnvv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali611ec5823ae [] [] }} ContainerID="480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" Namespace="calico-system" Pod="calico-kube-controllers-7b4cc87785-mvnvv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-" Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.326 [INFO][5121] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" Namespace="calico-system" Pod="calico-kube-controllers-7b4cc87785-mvnvv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.363 [INFO][5170] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" HandleID="k8s-pod-network.480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" Workload="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.363 [INFO][5170] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" HandleID="k8s-pod-network.480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" Workload="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7b4cc87785-mvnvv", "timestamp":"2025-09-10 00:49:38.363042088 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.363 [INFO][5170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.363 [INFO][5170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.363 [INFO][5170] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.383 [INFO][5170] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" host="localhost" Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.388 [INFO][5170] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.392 [INFO][5170] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.394 [INFO][5170] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.397 [INFO][5170] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.397 [INFO][5170] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" host="localhost" Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.398 [INFO][5170] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.409 [INFO][5170] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" host="localhost" Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.425 [INFO][5170] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" host="localhost" Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.425 [INFO][5170] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" host="localhost" Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.425 [INFO][5170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:49:38.521951 containerd[1572]: 2025-09-10 00:49:38.425 [INFO][5170] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" HandleID="k8s-pod-network.480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" Workload="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:38.522522 containerd[1572]: 2025-09-10 00:49:38.434 [INFO][5121] cni-plugin/k8s.go 418: Populated endpoint ContainerID="480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" Namespace="calico-system" Pod="calico-kube-controllers-7b4cc87785-mvnvv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0", GenerateName:"calico-kube-controllers-7b4cc87785-", Namespace:"calico-system", SelfLink:"", UID:"05560b5f-0c2f-48f7-b5ef-fa1beda53e1d", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4cc87785", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7b4cc87785-mvnvv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali611ec5823ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:38.522522 containerd[1572]: 2025-09-10 00:49:38.434 [INFO][5121] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" Namespace="calico-system" Pod="calico-kube-controllers-7b4cc87785-mvnvv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:38.522522 containerd[1572]: 2025-09-10 00:49:38.434 [INFO][5121] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali611ec5823ae ContainerID="480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" Namespace="calico-system" Pod="calico-kube-controllers-7b4cc87785-mvnvv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:38.522522 containerd[1572]: 2025-09-10 00:49:38.438 [INFO][5121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" Namespace="calico-system" Pod="calico-kube-controllers-7b4cc87785-mvnvv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:38.522522 containerd[1572]: 2025-09-10 00:49:38.439 [INFO][5121] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" Namespace="calico-system" Pod="calico-kube-controllers-7b4cc87785-mvnvv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0", GenerateName:"calico-kube-controllers-7b4cc87785-", Namespace:"calico-system", SelfLink:"", UID:"05560b5f-0c2f-48f7-b5ef-fa1beda53e1d", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4cc87785", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac", Pod:"calico-kube-controllers-7b4cc87785-mvnvv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali611ec5823ae", MAC:"82:a3:a0:59:5a:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:38.522522 containerd[1572]: 2025-09-10 00:49:38.518 [INFO][5121] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac" Namespace="calico-system" Pod="calico-kube-controllers-7b4cc87785-mvnvv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:38.581531 systemd[1]: run-netns-cni\x2dcd6bcb4c\x2dad54\x2d44e7\x2d4b89\x2d0a3e0bade9dd.mount: Deactivated successfully. Sep 10 00:49:38.582163 systemd[1]: run-netns-cni\x2de7d12e37\x2d56be\x2db2bd\x2d17bb\x2db10c61835cea.mount: Deactivated successfully. Sep 10 00:49:38.585579 containerd[1572]: time="2025-09-10T00:49:38.585459061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:49:38.585666 containerd[1572]: time="2025-09-10T00:49:38.585581023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:49:38.585666 containerd[1572]: time="2025-09-10T00:49:38.585601183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:38.586504 containerd[1572]: time="2025-09-10T00:49:38.586398560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:49:38.617423 kubelet[2656]: E0910 00:49:38.617374 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:38.620507 kubelet[2656]: E0910 00:49:38.620175 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:38.627405 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:49:38.659212 containerd[1572]: time="2025-09-10T00:49:38.659149803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b4cc87785-mvnvv,Uid:05560b5f-0c2f-48f7-b5ef-fa1beda53e1d,Namespace:calico-system,Attempt:1,} returns sandbox id \"480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac\"" Sep 10 00:49:38.732722 kubelet[2656]: I0910 00:49:38.731040 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7hl4n" podStartSLOduration=43.731015292 podStartE2EDuration="43.731015292s" podCreationTimestamp="2025-09-10 00:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:49:38.690515721 +0000 UTC m=+49.663405942" watchObservedRunningTime="2025-09-10 00:49:38.731015292 +0000 UTC m=+49.703905513" Sep 10 00:49:39.133000 kubelet[2656]: I0910 00:49:39.132930 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5dc5785d88-w84ht" podStartSLOduration=3.2449903669999998 podStartE2EDuration="9.132906254s" podCreationTimestamp="2025-09-10 00:49:30 +0000 UTC" firstStartedPulling="2025-09-10 00:49:31.602518074 +0000 UTC m=+42.575408295" lastFinishedPulling="2025-09-10 00:49:37.490433961 +0000 UTC m=+48.463324182" observedRunningTime="2025-09-10 00:49:38.733894748 +0000 UTC m=+49.706784970" watchObservedRunningTime="2025-09-10 00:49:39.132906254 +0000 UTC m=+50.105796476" Sep 10 00:49:39.298461 systemd-networkd[1243]: calia53281ce7ec: Gained IPv6LL Sep 10 00:49:39.554450 systemd-networkd[1243]: cali4a7338993f5: Gained IPv6LL Sep 10 00:49:39.624016 kubelet[2656]: E0910 00:49:39.623616 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:39.624486 kubelet[2656]: E0910 00:49:39.624276 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:39.648859 kubelet[2656]: I0910 00:49:39.648703 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mmgft" podStartSLOduration=44.648683951 podStartE2EDuration="44.648683951s" podCreationTimestamp="2025-09-10 00:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:49:39.133352748 +0000 UTC m=+50.106242969" watchObservedRunningTime="2025-09-10 00:49:39.648683951 +0000 UTC m=+50.621574172" Sep 10 00:49:39.749224 systemd-networkd[1243]: cali050ff5cd593: Gained IPv6LL Sep 10 00:49:39.802175 containerd[1572]: 
time="2025-09-10T00:49:39.802097311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:39.802808 containerd[1572]: time="2025-09-10T00:49:39.802736695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 10 00:49:39.803953 containerd[1572]: time="2025-09-10T00:49:39.803915626Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:39.806834 containerd[1572]: time="2025-09-10T00:49:39.806730372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:39.807459 containerd[1572]: time="2025-09-10T00:49:39.807402271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.316186161s" Sep 10 00:49:39.807459 containerd[1572]: time="2025-09-10T00:49:39.807435296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 10 00:49:39.809435 containerd[1572]: time="2025-09-10T00:49:39.809272308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 00:49:39.809745 containerd[1572]: time="2025-09-10T00:49:39.809715465Z" level=info msg="CreateContainer within sandbox \"886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 10 00:49:39.828068 containerd[1572]: time="2025-09-10T00:49:39.828013795Z" level=info msg="CreateContainer within sandbox \"886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ce604945d32a9d731caa373f83ad8919e7583421392812114abfa8e96961616d\"" Sep 10 00:49:39.828959 containerd[1572]: time="2025-09-10T00:49:39.828916049Z" level=info msg="StartContainer for \"ce604945d32a9d731caa373f83ad8919e7583421392812114abfa8e96961616d\"" Sep 10 00:49:40.061545 containerd[1572]: time="2025-09-10T00:49:40.061190418Z" level=info msg="StartContainer for \"ce604945d32a9d731caa373f83ad8919e7583421392812114abfa8e96961616d\" returns successfully" Sep 10 00:49:40.066465 systemd-networkd[1243]: cali9b2184f2fba: Gained IPv6LL Sep 10 00:49:40.386523 systemd-networkd[1243]: cali611ec5823ae: Gained IPv6LL Sep 10 00:49:40.621589 systemd[1]: Started sshd@9-10.0.0.156:22-10.0.0.1:45526.service - OpenSSH per-connection server daemon (10.0.0.1:45526). 
Sep 10 00:49:40.627641 kubelet[2656]: E0910 00:49:40.627601 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:40.628477 kubelet[2656]: E0910 00:49:40.628233 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:49:40.662227 sshd[5326]: Accepted publickey for core from 10.0.0.1 port 45526 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:49:40.664674 sshd[5326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:49:40.670568 systemd-logind[1548]: New session 10 of user core. Sep 10 00:49:40.675731 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 10 00:49:40.895185 sshd[5326]: pam_unix(sshd:session): session closed for user core Sep 10 00:49:40.900519 systemd[1]: sshd@9-10.0.0.156:22-10.0.0.1:45526.service: Deactivated successfully. Sep 10 00:49:40.903911 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 00:49:40.904320 systemd-logind[1548]: Session 10 logged out. Waiting for processes to exit. Sep 10 00:49:40.905327 systemd-logind[1548]: Removed session 10. Sep 10 00:49:42.466325 containerd[1572]: time="2025-09-10T00:49:42.466262159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:42.467033 containerd[1572]: time="2025-09-10T00:49:42.466975828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 10 00:49:42.468400 containerd[1572]: time="2025-09-10T00:49:42.468336784Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:42.471164 containerd[1572]: time="2025-09-10T00:49:42.471112424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:42.471736 containerd[1572]: time="2025-09-10T00:49:42.471699071Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.662388417s" Sep 10 00:49:42.471736 containerd[1572]: time="2025-09-10T00:49:42.471731937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 10 00:49:42.473146 containerd[1572]: time="2025-09-10T00:49:42.472980110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 00:49:42.474162 containerd[1572]: time="2025-09-10T00:49:42.474125170Z" level=info msg="CreateContainer within sandbox \"186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 00:49:42.488634 containerd[1572]: time="2025-09-10T00:49:42.488584332Z" level=info msg="CreateContainer within sandbox 
\"186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6435ced70ba6b1da7eef70d3314855b6f16bd815cd408e28a26de85f9a9d6960\"" Sep 10 00:49:42.489502 containerd[1572]: time="2025-09-10T00:49:42.489466573Z" level=info msg="StartContainer for \"6435ced70ba6b1da7eef70d3314855b6f16bd815cd408e28a26de85f9a9d6960\"" Sep 10 00:49:42.570829 containerd[1572]: time="2025-09-10T00:49:42.570772324Z" level=info msg="StartContainer for \"6435ced70ba6b1da7eef70d3314855b6f16bd815cd408e28a26de85f9a9d6960\" returns successfully" Sep 10 00:49:43.636275 kubelet[2656]: I0910 00:49:43.636220 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:49:45.908461 systemd[1]: Started sshd@10-10.0.0.156:22-10.0.0.1:45536.service - OpenSSH per-connection server daemon (10.0.0.1:45536). Sep 10 00:49:45.978816 sshd[5400]: Accepted publickey for core from 10.0.0.1 port 45536 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:49:45.980496 sshd[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:49:45.984750 systemd-logind[1548]: New session 11 of user core. Sep 10 00:49:45.993527 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 00:49:46.279949 sshd[5400]: pam_unix(sshd:session): session closed for user core Sep 10 00:49:46.284084 systemd[1]: sshd@10-10.0.0.156:22-10.0.0.1:45536.service: Deactivated successfully. Sep 10 00:49:46.286673 systemd-logind[1548]: Session 11 logged out. Waiting for processes to exit. Sep 10 00:49:46.286807 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 00:49:46.288210 systemd-logind[1548]: Removed session 11. Sep 10 00:49:46.666456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3740517657.mount: Deactivated successfully. 
Sep 10 00:49:47.883890 containerd[1572]: time="2025-09-10T00:49:47.883829848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:47.885008 containerd[1572]: time="2025-09-10T00:49:47.884905371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 10 00:49:47.886065 containerd[1572]: time="2025-09-10T00:49:47.886025962Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:47.889155 containerd[1572]: time="2025-09-10T00:49:47.889096786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:47.890055 containerd[1572]: time="2025-09-10T00:49:47.890007367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.41697789s" Sep 10 00:49:47.890109 containerd[1572]: time="2025-09-10T00:49:47.890056233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 10 00:49:47.891446 containerd[1572]: time="2025-09-10T00:49:47.890944690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 00:49:47.892549 containerd[1572]: time="2025-09-10T00:49:47.892472406Z" level=info msg="CreateContainer within sandbox \"84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 10 00:49:47.908330 containerd[1572]: time="2025-09-10T00:49:47.908259284Z" level=info msg="CreateContainer within sandbox \"84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ccdd9c5e9875cd3e77e7c5bfe507398222d9ecd32c6318de8bf9a10aea127821\"" Sep 10 00:49:47.909532 containerd[1572]: time="2025-09-10T00:49:47.909503848Z" level=info msg="StartContainer for \"ccdd9c5e9875cd3e77e7c5bfe507398222d9ecd32c6318de8bf9a10aea127821\"" Sep 10 00:49:47.985938 containerd[1572]: time="2025-09-10T00:49:47.985852500Z" level=info msg="StartContainer for \"ccdd9c5e9875cd3e77e7c5bfe507398222d9ecd32c6318de8bf9a10aea127821\" returns successfully" Sep 10 00:49:48.683163 containerd[1572]: time="2025-09-10T00:49:48.683108391Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:49:48.685143 containerd[1572]: time="2025-09-10T00:49:48.684975998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 10 00:49:48.689649 containerd[1572]: time="2025-09-10T00:49:48.689601650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 798.628556ms" Sep 10 00:49:48.689649 containerd[1572]: time="2025-09-10T00:49:48.689646492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 10 00:49:48.692589 containerd[1572]: time="2025-09-10T00:49:48.692561865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 00:49:48.693337 containerd[1572]: time="2025-09-10T00:49:48.693297069Z" level=info msg="CreateContainer within sandbox \"5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 00:49:48.697111 kubelet[2656]: I0910 00:49:48.696678 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d4bb6c97d-rwxfm" podStartSLOduration=37.760327354 podStartE2EDuration="44.696634344s" podCreationTimestamp="2025-09-10 00:49:04 +0000 UTC" firstStartedPulling="2025-09-10 00:49:35.536381144 +0000 UTC m=+46.509271365" lastFinishedPulling="2025-09-10 00:49:42.472688134 +0000 UTC m=+53.445578355" observedRunningTime="2025-09-10 00:49:42.647858627 +0000 UTC m=+53.620748848" watchObservedRunningTime="2025-09-10 00:49:48.696634344 +0000 UTC m=+59.669524565" Sep 10 00:49:48.698618 kubelet[2656]: I0910 00:49:48.697529 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-cvrzx" podStartSLOduration=30.895850831 podStartE2EDuration="40.697521417s" podCreationTimestamp="2025-09-10 00:49:08 +0000 UTC" firstStartedPulling="2025-09-10 00:49:38.08917515 +0000 UTC m=+49.062065361" lastFinishedPulling="2025-09-10 00:49:47.890845726 +0000 UTC m=+58.863735947" observedRunningTime="2025-09-10 00:49:48.696312636 +0000 UTC m=+59.669202857" watchObservedRunningTime="2025-09-10 00:49:48.697521417 +0000 UTC m=+59.670411638" Sep 10 00:49:48.722043 containerd[1572]: time="2025-09-10T00:49:48.721888772Z" level=info msg="CreateContainer within sandbox \"5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7c0b8cbbe49b8065125e710a348fcf2801ae4d2483a4f150d64a12e464970304\"" Sep 10 00:49:48.726282 containerd[1572]: time="2025-09-10T00:49:48.724050637Z" level=info msg="StartContainer for \"7c0b8cbbe49b8065125e710a348fcf2801ae4d2483a4f150d64a12e464970304\"" Sep 10 00:49:48.806769 containerd[1572]: time="2025-09-10T00:49:48.806719104Z" level=info msg="StartContainer for \"7c0b8cbbe49b8065125e710a348fcf2801ae4d2483a4f150d64a12e464970304\" returns successfully" Sep 10 00:49:48.906535 systemd[1]: run-containerd-runc-k8s.io-ccdd9c5e9875cd3e77e7c5bfe507398222d9ecd32c6318de8bf9a10aea127821-runc.9QPRyJ.mount: Deactivated successfully. Sep 10 00:49:49.108030 containerd[1572]: time="2025-09-10T00:49:49.107974355Z" level=info msg="StopPodSandbox for \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\"" Sep 10 00:49:49.202842 containerd[1572]: 2025-09-10 00:49:49.153 [WARNING][5540] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"153fa74c-658f-4fb6-a953-ee6d15f43e30", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265", Pod:"coredns-7c65d6cfc9-7hl4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a7338993f5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:49.202842 containerd[1572]: 2025-09-10 00:49:49.153 [INFO][5540] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Sep 10 00:49:49.202842 containerd[1572]: 2025-09-10 00:49:49.153 [INFO][5540] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" iface="eth0" netns="" Sep 10 00:49:49.202842 containerd[1572]: 2025-09-10 00:49:49.153 [INFO][5540] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Sep 10 00:49:49.202842 containerd[1572]: 2025-09-10 00:49:49.153 [INFO][5540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Sep 10 00:49:49.202842 containerd[1572]: 2025-09-10 00:49:49.184 [INFO][5550] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" HandleID="k8s-pod-network.d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Workload="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:49.202842 containerd[1572]: 2025-09-10 00:49:49.184 [INFO][5550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:49.202842 containerd[1572]: 2025-09-10 00:49:49.184 [INFO][5550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:49:49.202842 containerd[1572]: 2025-09-10 00:49:49.192 [WARNING][5550] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" HandleID="k8s-pod-network.d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Workload="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:49.202842 containerd[1572]: 2025-09-10 00:49:49.192 [INFO][5550] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" HandleID="k8s-pod-network.d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Workload="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:49.202842 containerd[1572]: 2025-09-10 00:49:49.193 [INFO][5550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:49.202842 containerd[1572]: 2025-09-10 00:49:49.197 [INFO][5540] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Sep 10 00:49:49.203336 containerd[1572]: time="2025-09-10T00:49:49.202892410Z" level=info msg="TearDown network for sandbox \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\" successfully" Sep 10 00:49:49.203336 containerd[1572]: time="2025-09-10T00:49:49.202924850Z" level=info msg="StopPodSandbox for \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\" returns successfully" Sep 10 00:49:49.222259 containerd[1572]: time="2025-09-10T00:49:49.222136793Z" level=info msg="RemovePodSandbox for \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\"" Sep 10 00:49:49.225298 containerd[1572]: time="2025-09-10T00:49:49.225236294Z" level=info msg="Forcibly stopping sandbox \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\"" Sep 10 00:49:49.314258 containerd[1572]: 2025-09-10 00:49:49.266 [WARNING][5568] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"153fa74c-658f-4fb6-a953-ee6d15f43e30", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f06245453d196935eaf8c35bad7f16dc6eea0746a000cb47631205e8103c265", Pod:"coredns-7c65d6cfc9-7hl4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a7338993f5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:49.314258 containerd[1572]: 2025-09-10 00:49:49.267 [INFO][5568] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Sep 10 00:49:49.314258 containerd[1572]: 2025-09-10 00:49:49.268 [INFO][5568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" iface="eth0" netns="" Sep 10 00:49:49.314258 containerd[1572]: 2025-09-10 00:49:49.268 [INFO][5568] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Sep 10 00:49:49.314258 containerd[1572]: 2025-09-10 00:49:49.268 [INFO][5568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Sep 10 00:49:49.314258 containerd[1572]: 2025-09-10 00:49:49.299 [INFO][5578] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" HandleID="k8s-pod-network.d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Workload="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:49.314258 containerd[1572]: 2025-09-10 00:49:49.299 [INFO][5578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:49.314258 containerd[1572]: 2025-09-10 00:49:49.299 [INFO][5578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:49:49.314258 containerd[1572]: 2025-09-10 00:49:49.305 [WARNING][5578] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" HandleID="k8s-pod-network.d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Workload="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:49.314258 containerd[1572]: 2025-09-10 00:49:49.306 [INFO][5578] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" HandleID="k8s-pod-network.d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Workload="localhost-k8s-coredns--7c65d6cfc9--7hl4n-eth0" Sep 10 00:49:49.314258 containerd[1572]: 2025-09-10 00:49:49.307 [INFO][5578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:49.314258 containerd[1572]: 2025-09-10 00:49:49.311 [INFO][5568] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503" Sep 10 00:49:49.316679 containerd[1572]: time="2025-09-10T00:49:49.314871736Z" level=info msg="TearDown network for sandbox \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\" successfully" Sep 10 00:49:49.327793 containerd[1572]: time="2025-09-10T00:49:49.327740172Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:49:49.327855 containerd[1572]: time="2025-09-10T00:49:49.327831180Z" level=info msg="RemovePodSandbox \"d88c2cc25f27edfc0d0b0da4f41367b935af9b213457e4fa2fd911777c8a2503\" returns successfully" Sep 10 00:49:49.332595 containerd[1572]: time="2025-09-10T00:49:49.332531686Z" level=info msg="StopPodSandbox for \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\"" Sep 10 00:49:49.425141 containerd[1572]: 2025-09-10 00:49:49.380 [WARNING][5596] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0", GenerateName:"calico-kube-controllers-7b4cc87785-", Namespace:"calico-system", SelfLink:"", UID:"05560b5f-0c2f-48f7-b5ef-fa1beda53e1d", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4cc87785", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac", Pod:"calico-kube-controllers-7b4cc87785-mvnvv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali611ec5823ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:49.425141 containerd[1572]: 2025-09-10 00:49:49.380 [INFO][5596] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Sep 10 00:49:49.425141 containerd[1572]: 2025-09-10 00:49:49.380 [INFO][5596] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" iface="eth0" netns="" Sep 10 00:49:49.425141 containerd[1572]: 2025-09-10 00:49:49.380 [INFO][5596] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Sep 10 00:49:49.425141 containerd[1572]: 2025-09-10 00:49:49.380 [INFO][5596] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Sep 10 00:49:49.425141 containerd[1572]: 2025-09-10 00:49:49.408 [INFO][5604] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" HandleID="k8s-pod-network.7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Workload="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:49.425141 containerd[1572]: 2025-09-10 00:49:49.408 [INFO][5604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:49.425141 containerd[1572]: 2025-09-10 00:49:49.408 [INFO][5604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:49.425141 containerd[1572]: 2025-09-10 00:49:49.416 [WARNING][5604] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" HandleID="k8s-pod-network.7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Workload="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:49.425141 containerd[1572]: 2025-09-10 00:49:49.416 [INFO][5604] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" HandleID="k8s-pod-network.7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Workload="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:49.425141 containerd[1572]: 2025-09-10 00:49:49.418 [INFO][5604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:49.425141 containerd[1572]: 2025-09-10 00:49:49.421 [INFO][5596] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Sep 10 00:49:49.425141 containerd[1572]: time="2025-09-10T00:49:49.425103669Z" level=info msg="TearDown network for sandbox \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\" successfully" Sep 10 00:49:49.425141 containerd[1572]: time="2025-09-10T00:49:49.425140718Z" level=info msg="StopPodSandbox for \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\" returns successfully" Sep 10 00:49:49.428120 containerd[1572]: time="2025-09-10T00:49:49.425839729Z" level=info msg="RemovePodSandbox for \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\"" Sep 10 00:49:49.428120 containerd[1572]: time="2025-09-10T00:49:49.425890332Z" level=info msg="Forcibly stopping sandbox \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\"" Sep 10 00:49:49.507080 containerd[1572]: 2025-09-10 00:49:49.467 [WARNING][5622] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0", GenerateName:"calico-kube-controllers-7b4cc87785-", Namespace:"calico-system", SelfLink:"", UID:"05560b5f-0c2f-48f7-b5ef-fa1beda53e1d", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b4cc87785", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac", Pod:"calico-kube-controllers-7b4cc87785-mvnvv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali611ec5823ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:49.507080 containerd[1572]: 2025-09-10 00:49:49.467 [INFO][5622] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Sep 10 00:49:49.507080 containerd[1572]: 2025-09-10 00:49:49.467 [INFO][5622] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" iface="eth0" netns="" Sep 10 00:49:49.507080 containerd[1572]: 2025-09-10 00:49:49.467 [INFO][5622] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Sep 10 00:49:49.507080 containerd[1572]: 2025-09-10 00:49:49.467 [INFO][5622] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Sep 10 00:49:49.507080 containerd[1572]: 2025-09-10 00:49:49.493 [INFO][5631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" HandleID="k8s-pod-network.7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Workload="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:49.507080 containerd[1572]: 2025-09-10 00:49:49.493 [INFO][5631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:49.507080 containerd[1572]: 2025-09-10 00:49:49.493 [INFO][5631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:49.507080 containerd[1572]: 2025-09-10 00:49:49.499 [WARNING][5631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" HandleID="k8s-pod-network.7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Workload="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:49.507080 containerd[1572]: 2025-09-10 00:49:49.499 [INFO][5631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" HandleID="k8s-pod-network.7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Workload="localhost-k8s-calico--kube--controllers--7b4cc87785--mvnvv-eth0" Sep 10 00:49:49.507080 containerd[1572]: 2025-09-10 00:49:49.500 [INFO][5631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:49.507080 containerd[1572]: 2025-09-10 00:49:49.503 [INFO][5622] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12" Sep 10 00:49:49.507568 containerd[1572]: time="2025-09-10T00:49:49.507127481Z" level=info msg="TearDown network for sandbox \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\" successfully" Sep 10 00:49:49.512185 containerd[1572]: time="2025-09-10T00:49:49.512147462Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:49:49.512288 containerd[1572]: time="2025-09-10T00:49:49.512234290Z" level=info msg="RemovePodSandbox \"7f597644739e0375f77716ad41300c1dd2f8b6327f715df737a53ad053e20c12\" returns successfully" Sep 10 00:49:49.512804 containerd[1572]: time="2025-09-10T00:49:49.512771625Z" level=info msg="StopPodSandbox for \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\"" Sep 10 00:49:49.693517 containerd[1572]: 2025-09-10 00:49:49.553 [WARNING][5649] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--cvrzx-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"27842233-86b6-4b4f-9f34-c14a660239b3", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8", Pod:"goldmane-7988f88666-cvrzx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali050ff5cd593", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:49.693517 containerd[1572]: 2025-09-10 00:49:49.553 [INFO][5649] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Sep 10 00:49:49.693517 containerd[1572]: 2025-09-10 00:49:49.554 [INFO][5649] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" iface="eth0" netns="" Sep 10 00:49:49.693517 containerd[1572]: 2025-09-10 00:49:49.554 [INFO][5649] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Sep 10 00:49:49.693517 containerd[1572]: 2025-09-10 00:49:49.554 [INFO][5649] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Sep 10 00:49:49.693517 containerd[1572]: 2025-09-10 00:49:49.575 [INFO][5658] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" HandleID="k8s-pod-network.29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Workload="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:49.693517 containerd[1572]: 2025-09-10 00:49:49.575 [INFO][5658] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:49.693517 containerd[1572]: 2025-09-10 00:49:49.575 [INFO][5658] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:49.693517 containerd[1572]: 2025-09-10 00:49:49.680 [WARNING][5658] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" HandleID="k8s-pod-network.29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Workload="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:49.693517 containerd[1572]: 2025-09-10 00:49:49.681 [INFO][5658] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" HandleID="k8s-pod-network.29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Workload="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:49.693517 containerd[1572]: 2025-09-10 00:49:49.685 [INFO][5658] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:49.693517 containerd[1572]: 2025-09-10 00:49:49.690 [INFO][5649] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Sep 10 00:49:49.694213 containerd[1572]: time="2025-09-10T00:49:49.694161894Z" level=info msg="TearDown network for sandbox \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\" successfully" Sep 10 00:49:49.694388 containerd[1572]: time="2025-09-10T00:49:49.694328149Z" level=info msg="StopPodSandbox for \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\" returns successfully" Sep 10 00:49:49.694918 containerd[1572]: time="2025-09-10T00:49:49.694895509Z" level=info msg="RemovePodSandbox for \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\"" Sep 10 00:49:49.726039 containerd[1572]: time="2025-09-10T00:49:49.694924272Z" level=info msg="Forcibly stopping sandbox \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\"" Sep 10 00:49:50.012470 kubelet[2656]: I0910 00:49:50.012281 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d4bb6c97d-crcn2" podStartSLOduration=35.45934138 podStartE2EDuration="46.012260843s" podCreationTimestamp="2025-09-10 00:49:04 +0000 UTC" firstStartedPulling="2025-09-10 00:49:38.138836467 +0000 UTC m=+49.111726688" lastFinishedPulling="2025-09-10 00:49:48.69175593 +0000 UTC m=+59.664646151" observedRunningTime="2025-09-10 00:49:50.01222035 +0000 UTC m=+60.985110581" watchObservedRunningTime="2025-09-10 00:49:50.012260843 +0000 UTC m=+60.985151065" Sep 10 00:49:50.364283 containerd[1572]: 2025-09-10 00:49:49.945 [WARNING][5685] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--cvrzx-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"27842233-86b6-4b4f-9f34-c14a660239b3", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84be4d3fc8e8156f0ea6fa535528c26c8f1a363c08bc993032e2289d9dfbe4f8", Pod:"goldmane-7988f88666-cvrzx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali050ff5cd593", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:50.364283 containerd[1572]: 2025-09-10 00:49:49.945 [INFO][5685] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Sep 10 00:49:50.364283 containerd[1572]: 2025-09-10 00:49:49.945 [INFO][5685] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" iface="eth0" netns="" Sep 10 00:49:50.364283 containerd[1572]: 2025-09-10 00:49:49.945 [INFO][5685] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Sep 10 00:49:50.364283 containerd[1572]: 2025-09-10 00:49:49.945 [INFO][5685] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Sep 10 00:49:50.364283 containerd[1572]: 2025-09-10 00:49:49.970 [INFO][5705] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" HandleID="k8s-pod-network.29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Workload="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:50.364283 containerd[1572]: 2025-09-10 00:49:49.970 [INFO][5705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:50.364283 containerd[1572]: 2025-09-10 00:49:49.970 [INFO][5705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:50.364283 containerd[1572]: 2025-09-10 00:49:50.009 [WARNING][5705] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" HandleID="k8s-pod-network.29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Workload="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:50.364283 containerd[1572]: 2025-09-10 00:49:50.009 [INFO][5705] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" HandleID="k8s-pod-network.29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Workload="localhost-k8s-goldmane--7988f88666--cvrzx-eth0" Sep 10 00:49:50.364283 containerd[1572]: 2025-09-10 00:49:50.355 [INFO][5705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:50.364283 containerd[1572]: 2025-09-10 00:49:50.358 [INFO][5685] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c" Sep 10 00:49:50.364283 containerd[1572]: time="2025-09-10T00:49:50.361986557Z" level=info msg="TearDown network for sandbox \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\" successfully" Sep 10 00:49:50.690880 kubelet[2656]: I0910 00:49:50.690286 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:49:51.291492 systemd[1]: Started sshd@11-10.0.0.156:22-10.0.0.1:49048.service - OpenSSH per-connection server daemon (10.0.0.1:49048). Sep 10 00:49:51.331386 sshd[5760]: Accepted publickey for core from 10.0.0.1 port 49048 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:49:51.333393 sshd[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:49:51.337862 containerd[1572]: time="2025-09-10T00:49:51.337819210Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:49:51.337960 containerd[1572]: time="2025-09-10T00:49:51.337918422Z" level=info msg="RemovePodSandbox \"29e89b7d59309122dd0e6c1ea8f627650fe7f4df19ece9b4ddf31ea227eb425c\" returns successfully" Sep 10 00:49:51.338617 systemd-logind[1548]: New session 12 of user core. Sep 10 00:49:51.338934 containerd[1572]: time="2025-09-10T00:49:51.338609604Z" level=info msg="StopPodSandbox for \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\"" Sep 10 00:49:51.345701 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 10 00:49:51.739208 containerd[1572]: 2025-09-10 00:49:51.692 [WARNING][5772] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0", GenerateName:"calico-apiserver-6d4bb6c97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"84107d46-758d-49e3-9857-f23097a54b8f", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4bb6c97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896", Pod:"calico-apiserver-6d4bb6c97d-rwxfm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali241eea38a0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:51.739208 containerd[1572]: 2025-09-10 00:49:51.693 [INFO][5772] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Sep 10 00:49:51.739208 containerd[1572]: 2025-09-10 00:49:51.693 [INFO][5772] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" iface="eth0" netns="" Sep 10 00:49:51.739208 containerd[1572]: 2025-09-10 00:49:51.693 [INFO][5772] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Sep 10 00:49:51.739208 containerd[1572]: 2025-09-10 00:49:51.693 [INFO][5772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Sep 10 00:49:51.739208 containerd[1572]: 2025-09-10 00:49:51.721 [INFO][5791] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" HandleID="k8s-pod-network.43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:51.739208 containerd[1572]: 2025-09-10 00:49:51.722 [INFO][5791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:51.739208 containerd[1572]: 2025-09-10 00:49:51.722 [INFO][5791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:51.739208 containerd[1572]: 2025-09-10 00:49:51.730 [WARNING][5791] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" HandleID="k8s-pod-network.43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:51.739208 containerd[1572]: 2025-09-10 00:49:51.730 [INFO][5791] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" HandleID="k8s-pod-network.43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:51.739208 containerd[1572]: 2025-09-10 00:49:51.732 [INFO][5791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:51.739208 containerd[1572]: 2025-09-10 00:49:51.736 [INFO][5772] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Sep 10 00:49:51.742343 containerd[1572]: time="2025-09-10T00:49:51.739281610Z" level=info msg="TearDown network for sandbox \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\" successfully" Sep 10 00:49:51.742343 containerd[1572]: time="2025-09-10T00:49:51.739315332Z" level=info msg="StopPodSandbox for \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\" returns successfully" Sep 10 00:49:51.742343 containerd[1572]: time="2025-09-10T00:49:51.739919162Z" level=info msg="RemovePodSandbox for \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\"" Sep 10 00:49:51.742343 containerd[1572]: time="2025-09-10T00:49:51.739963564Z" level=info msg="Forcibly stopping sandbox \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\"" Sep 10 00:49:51.745503 sshd[5760]: pam_unix(sshd:session): session closed for user core Sep 10 00:49:51.757618 systemd[1]: Started sshd@12-10.0.0.156:22-10.0.0.1:49058.service - OpenSSH per-connection server daemon (10.0.0.1:49058). Sep 10 00:49:51.758127 systemd[1]: sshd@11-10.0.0.156:22-10.0.0.1:49048.service: Deactivated successfully. Sep 10 00:49:51.762450 systemd-logind[1548]: Session 12 logged out. Waiting for processes to exit. Sep 10 00:49:51.762873 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 00:49:51.765231 systemd-logind[1548]: Removed session 12. Sep 10 00:49:51.788276 sshd[5822]: Accepted publickey for core from 10.0.0.1 port 49058 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:49:51.790035 sshd[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:49:51.794783 systemd-logind[1548]: New session 13 of user core. Sep 10 00:49:51.804527 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 10 00:49:52.498137 sshd[5822]: pam_unix(sshd:session): session closed for user core Sep 10 00:49:52.505463 systemd[1]: Started sshd@13-10.0.0.156:22-10.0.0.1:49066.service - OpenSSH per-connection server daemon (10.0.0.1:49066). Sep 10 00:49:52.505990 systemd[1]: sshd@12-10.0.0.156:22-10.0.0.1:49058.service: Deactivated successfully. Sep 10 00:49:52.510819 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 00:49:52.511218 systemd-logind[1548]: Session 13 logged out. Waiting for processes to exit. Sep 10 00:49:52.513685 systemd-logind[1548]: Removed session 13. 
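The kubelet pod_startup_latency_tracker entries above can be checked arithmetically: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Go sketch, reusing the timestamps logged for calico-apiserver-6d4bb6c97d-crcn2 with the m=+... monotonic suffixes dropped, reproduces both logged values (46.012260843s and 35.45934138s):

    // latency.go: rederive the pod startup durations from the logged timestamps.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching Go's default time.Time formatting used in the kubelet log.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        created := parse("2025-09-10 00:49:04 +0000 UTC")             // podCreationTimestamp
        firstPull := parse("2025-09-10 00:49:38.138836467 +0000 UTC") // firstStartedPulling
        lastPull := parse("2025-09-10 00:49:48.69175593 +0000 UTC")   // lastFinishedPulling
        running := parse("2025-09-10 00:49:50.012260843 +0000 UTC")   // observedRunningTime

        e2e := running.Sub(created)          // 46.012260843s, the podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // 35.45934138s, the podStartSLOduration
        fmt.Println(e2e, slo)
    }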
Sep 10 00:49:52.536781 sshd[5871]: Accepted publickey for core from 10.0.0.1 port 49066 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:49:52.538115 sshd[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:49:52.542979 systemd-logind[1548]: New session 14 of user core. Sep 10 00:49:52.550939 containerd[1572]: 2025-09-10 00:49:52.177 [WARNING][5819] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0", GenerateName:"calico-apiserver-6d4bb6c97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"84107d46-758d-49e3-9857-f23097a54b8f", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4bb6c97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"186d343086a6054ed2ad0c47852f433025b6e14a77ef0f07c1f7d6ac10d1c896", Pod:"calico-apiserver-6d4bb6c97d-rwxfm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali241eea38a0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:52.550939 containerd[1572]: 2025-09-10 00:49:52.178 [INFO][5819] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Sep 10 00:49:52.550939 containerd[1572]: 2025-09-10 00:49:52.178 [INFO][5819] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" iface="eth0" netns="" Sep 10 00:49:52.550939 containerd[1572]: 2025-09-10 00:49:52.178 [INFO][5819] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Sep 10 00:49:52.550939 containerd[1572]: 2025-09-10 00:49:52.178 [INFO][5819] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Sep 10 00:49:52.550939 containerd[1572]: 2025-09-10 00:49:52.200 [INFO][5842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" HandleID="k8s-pod-network.43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:52.550939 containerd[1572]: 2025-09-10 00:49:52.201 [INFO][5842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:52.550939 containerd[1572]: 2025-09-10 00:49:52.201 [INFO][5842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:52.550939 containerd[1572]: 2025-09-10 00:49:52.311 [WARNING][5842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" HandleID="k8s-pod-network.43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:52.550939 containerd[1572]: 2025-09-10 00:49:52.312 [INFO][5842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" HandleID="k8s-pod-network.43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--rwxfm-eth0" Sep 10 00:49:52.550939 containerd[1572]: 2025-09-10 00:49:52.521 [INFO][5842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:52.550939 containerd[1572]: 2025-09-10 00:49:52.527 [INFO][5819] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd" Sep 10 00:49:52.550939 containerd[1572]: time="2025-09-10T00:49:52.545381521Z" level=info msg="TearDown network for sandbox \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\" successfully" Sep 10 00:49:52.548675 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 10 00:49:52.673792 sshd[5871]: pam_unix(sshd:session): session closed for user core Sep 10 00:49:52.678138 systemd[1]: sshd@13-10.0.0.156:22-10.0.0.1:49066.service: Deactivated successfully. Sep 10 00:49:52.680954 systemd-logind[1548]: Session 14 logged out. Waiting for processes to exit. Sep 10 00:49:52.681118 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 00:49:52.682309 systemd-logind[1548]: Removed session 14. Sep 10 00:49:52.716959 containerd[1572]: time="2025-09-10T00:49:52.716874590Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 10 00:49:52.717228 containerd[1572]: time="2025-09-10T00:49:52.717032541Z" level=info msg="RemovePodSandbox \"43858d3f2e9ac00f66a69f0cc1deda2383b7074d0cfab40510f1a2f059c29bcd\" returns successfully" Sep 10 00:49:52.718665 containerd[1572]: time="2025-09-10T00:49:52.718254743Z" level=info msg="StopPodSandbox for \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\"" Sep 10 00:49:52.866389 systemd-resolved[1461]: Under memory pressure, flushing caches. Sep 10 00:49:52.868688 systemd-journald[1158]: Under memory pressure, flushing caches. Sep 10 00:49:52.866422 systemd-resolved[1461]: Flushed all caches. Sep 10 00:49:52.972746 containerd[1572]: 2025-09-10 00:49:52.877 [WARNING][5900] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a6d2bfa2-402e-4e85-8163-cc4a99a0ed75", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727", Pod:"coredns-7c65d6cfc9-mmgft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b2184f2fba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:52.972746 containerd[1572]: 2025-09-10 00:49:52.877 [INFO][5900] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Sep 10 00:49:52.972746 containerd[1572]: 2025-09-10 00:49:52.877 [INFO][5900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" iface="eth0" netns="" Sep 10 00:49:52.972746 containerd[1572]: 2025-09-10 00:49:52.877 [INFO][5900] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Sep 10 00:49:52.972746 containerd[1572]: 2025-09-10 00:49:52.877 [INFO][5900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Sep 10 00:49:52.972746 containerd[1572]: 2025-09-10 00:49:52.952 [INFO][5909] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" HandleID="k8s-pod-network.572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Workload="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" Sep 10 00:49:52.972746 containerd[1572]: 2025-09-10 00:49:52.952 [INFO][5909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:52.972746 containerd[1572]: 2025-09-10 00:49:52.952 [INFO][5909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:49:52.972746 containerd[1572]: 2025-09-10 00:49:52.961 [WARNING][5909] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" HandleID="k8s-pod-network.572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Workload="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" Sep 10 00:49:52.972746 containerd[1572]: 2025-09-10 00:49:52.961 [INFO][5909] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" HandleID="k8s-pod-network.572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Workload="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" Sep 10 00:49:52.972746 containerd[1572]: 2025-09-10 00:49:52.964 [INFO][5909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:49:52.972746 containerd[1572]: 2025-09-10 00:49:52.968 [INFO][5900] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Sep 10 00:49:52.972746 containerd[1572]: time="2025-09-10T00:49:52.971471606Z" level=info msg="TearDown network for sandbox \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\" successfully" Sep 10 00:49:52.972746 containerd[1572]: time="2025-09-10T00:49:52.971497444Z" level=info msg="StopPodSandbox for \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\" returns successfully" Sep 10 00:49:52.972746 containerd[1572]: time="2025-09-10T00:49:52.972123298Z" level=info msg="RemovePodSandbox for \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\"" Sep 10 00:49:52.972746 containerd[1572]: time="2025-09-10T00:49:52.972173610Z" level=info msg="Forcibly stopping sandbox \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\"" Sep 10 00:49:53.068273 containerd[1572]: 2025-09-10 00:49:53.015 [WARNING][5926] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a6d2bfa2-402e-4e85-8163-cc4a99a0ed75", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f88a325b5b895b19bfb7aad5a08d7bf552165d49f958748208e6fafcdeaa0727", Pod:"coredns-7c65d6cfc9-mmgft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b2184f2fba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:49:53.068273 containerd[1572]: 2025-09-10 00:49:53.015 [INFO][5926] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Sep 10 00:49:53.068273 containerd[1572]: 2025-09-10 00:49:53.015 [INFO][5926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" iface="eth0" netns="" Sep 10 00:49:53.068273 containerd[1572]: 2025-09-10 00:49:53.015 [INFO][5926] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Sep 10 00:49:53.068273 containerd[1572]: 2025-09-10 00:49:53.015 [INFO][5926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Sep 10 00:49:53.068273 containerd[1572]: 2025-09-10 00:49:53.041 [INFO][5935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" HandleID="k8s-pod-network.572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Workload="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0" Sep 10 00:49:53.068273 containerd[1572]: 2025-09-10 00:49:53.041 [INFO][5935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:49:53.068273 containerd[1572]: 2025-09-10 00:49:53.041 [INFO][5935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:49:53.068273 containerd[1572]: 2025-09-10 00:49:53.059 [WARNING][5935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" HandleID="k8s-pod-network.572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Workload="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0"
Sep 10 00:49:53.068273 containerd[1572]: 2025-09-10 00:49:53.059 [INFO][5935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" HandleID="k8s-pod-network.572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67" Workload="localhost-k8s-coredns--7c65d6cfc9--mmgft-eth0"
Sep 10 00:49:53.068273 containerd[1572]: 2025-09-10 00:49:53.061 [INFO][5935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:49:53.068273 containerd[1572]: 2025-09-10 00:49:53.064 [INFO][5926] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67"
Sep 10 00:49:53.068767 containerd[1572]: time="2025-09-10T00:49:53.068308156Z" level=info msg="TearDown network for sandbox \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\" successfully"
Sep 10 00:49:53.148714 containerd[1572]: time="2025-09-10T00:49:53.148527451Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 10 00:49:53.149189 containerd[1572]: time="2025-09-10T00:49:53.149162213Z" level=info msg="RemovePodSandbox \"572fd1f2381bb3fdef56b7725bb71753bb34df1feb9fcd6e79e7f09f98423b67\" returns successfully"
Sep 10 00:49:53.150864 containerd[1572]: time="2025-09-10T00:49:53.149860502Z" level=info msg="StopPodSandbox for \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\""
Sep 10 00:49:53.242988 containerd[1572]: 2025-09-10 00:49:53.197 [WARNING][5956] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0", GenerateName:"calico-apiserver-6d4bb6c97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d173b8ad-d1e2-4ab1-adf4-074e01bc5a59", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4bb6c97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77", Pod:"calico-apiserver-6d4bb6c97d-crcn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia53281ce7ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:49:53.242988 containerd[1572]: 2025-09-10 00:49:53.198 [INFO][5956] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49"
Sep 10 00:49:53.242988 containerd[1572]: 2025-09-10 00:49:53.198 [INFO][5956] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" iface="eth0" netns=""
Sep 10 00:49:53.242988 containerd[1572]: 2025-09-10 00:49:53.198 [INFO][5956] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49"
Sep 10 00:49:53.242988 containerd[1572]: 2025-09-10 00:49:53.198 [INFO][5956] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49"
Sep 10 00:49:53.242988 containerd[1572]: 2025-09-10 00:49:53.227 [INFO][5964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" HandleID="k8s-pod-network.cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0"
Sep 10 00:49:53.242988 containerd[1572]: 2025-09-10 00:49:53.227 [INFO][5964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:49:53.242988 containerd[1572]: 2025-09-10 00:49:53.227 [INFO][5964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:49:53.242988 containerd[1572]: 2025-09-10 00:49:53.233 [WARNING][5964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" HandleID="k8s-pod-network.cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0"
Sep 10 00:49:53.242988 containerd[1572]: 2025-09-10 00:49:53.233 [INFO][5964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" HandleID="k8s-pod-network.cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0"
Sep 10 00:49:53.242988 containerd[1572]: 2025-09-10 00:49:53.235 [INFO][5964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:49:53.242988 containerd[1572]: 2025-09-10 00:49:53.239 [INFO][5956] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49"
Sep 10 00:49:53.243602 containerd[1572]: time="2025-09-10T00:49:53.243047578Z" level=info msg="TearDown network for sandbox \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\" successfully"
Sep 10 00:49:53.243602 containerd[1572]: time="2025-09-10T00:49:53.243085518Z" level=info msg="StopPodSandbox for \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\" returns successfully"
Sep 10 00:49:53.251983 containerd[1572]: time="2025-09-10T00:49:53.251889011Z" level=info msg="RemovePodSandbox for \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\""
Sep 10 00:49:53.251983 containerd[1572]: time="2025-09-10T00:49:53.251956946Z" level=info msg="Forcibly stopping sandbox \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\""
Sep 10 00:49:53.351461 containerd[1572]: 2025-09-10 00:49:53.294 [WARNING][5982] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0", GenerateName:"calico-apiserver-6d4bb6c97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d173b8ad-d1e2-4ab1-adf4-074e01bc5a59", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4bb6c97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5cced0f8411e527ecd1623e079cd30cf515106a5d1540fae31af9f44b9569c77", Pod:"calico-apiserver-6d4bb6c97d-crcn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia53281ce7ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:49:53.351461 containerd[1572]: 2025-09-10 00:49:53.294 [INFO][5982] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49"
Sep 10 00:49:53.351461 containerd[1572]: 2025-09-10 00:49:53.294 [INFO][5982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" iface="eth0" netns=""
Sep 10 00:49:53.351461 containerd[1572]: 2025-09-10 00:49:53.294 [INFO][5982] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49"
Sep 10 00:49:53.351461 containerd[1572]: 2025-09-10 00:49:53.294 [INFO][5982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49"
Sep 10 00:49:53.351461 containerd[1572]: 2025-09-10 00:49:53.335 [INFO][5990] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" HandleID="k8s-pod-network.cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0"
Sep 10 00:49:53.351461 containerd[1572]: 2025-09-10 00:49:53.335 [INFO][5990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:49:53.351461 containerd[1572]: 2025-09-10 00:49:53.335 [INFO][5990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:49:53.351461 containerd[1572]: 2025-09-10 00:49:53.343 [WARNING][5990] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" HandleID="k8s-pod-network.cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0"
Sep 10 00:49:53.351461 containerd[1572]: 2025-09-10 00:49:53.343 [INFO][5990] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" HandleID="k8s-pod-network.cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49" Workload="localhost-k8s-calico--apiserver--6d4bb6c97d--crcn2-eth0"
Sep 10 00:49:53.351461 containerd[1572]: 2025-09-10 00:49:53.345 [INFO][5990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:49:53.351461 containerd[1572]: 2025-09-10 00:49:53.348 [INFO][5982] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49"
Sep 10 00:49:53.353335 containerd[1572]: time="2025-09-10T00:49:53.351989579Z" level=info msg="TearDown network for sandbox \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\" successfully"
Sep 10 00:49:53.356471 containerd[1572]: time="2025-09-10T00:49:53.356415119Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 10 00:49:53.356540 containerd[1572]: time="2025-09-10T00:49:53.356512399Z" level=info msg="RemovePodSandbox \"cae0382fc00c8ff7d1f874b088ddfae70a14bbffbde4501fd612388e7f575b49\" returns successfully"
Sep 10 00:49:53.357161 containerd[1572]: time="2025-09-10T00:49:53.357139697Z" level=info msg="StopPodSandbox for \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\""
Sep 10 00:49:53.455039 containerd[1572]: 2025-09-10 00:49:53.399 [WARNING][6008] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wdgls-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0dcf04d3-5b02-40dc-800c-0985ae063919", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781", Pod:"csi-node-driver-wdgls", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia587709d6e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:49:53.455039 containerd[1572]: 2025-09-10 00:49:53.400 [INFO][6008] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2"
Sep 10 00:49:53.455039 containerd[1572]: 2025-09-10 00:49:53.400 [INFO][6008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" iface="eth0" netns=""
Sep 10 00:49:53.455039 containerd[1572]: 2025-09-10 00:49:53.400 [INFO][6008] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2"
Sep 10 00:49:53.455039 containerd[1572]: 2025-09-10 00:49:53.400 [INFO][6008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2"
Sep 10 00:49:53.455039 containerd[1572]: 2025-09-10 00:49:53.425 [INFO][6017] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" HandleID="k8s-pod-network.0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" Workload="localhost-k8s-csi--node--driver--wdgls-eth0"
Sep 10 00:49:53.455039 containerd[1572]: 2025-09-10 00:49:53.426 [INFO][6017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:49:53.455039 containerd[1572]: 2025-09-10 00:49:53.426 [INFO][6017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:49:53.455039 containerd[1572]: 2025-09-10 00:49:53.434 [WARNING][6017] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" HandleID="k8s-pod-network.0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" Workload="localhost-k8s-csi--node--driver--wdgls-eth0"
Sep 10 00:49:53.455039 containerd[1572]: 2025-09-10 00:49:53.436 [INFO][6017] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" HandleID="k8s-pod-network.0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" Workload="localhost-k8s-csi--node--driver--wdgls-eth0"
Sep 10 00:49:53.455039 containerd[1572]: 2025-09-10 00:49:53.441 [INFO][6017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:49:53.455039 containerd[1572]: 2025-09-10 00:49:53.451 [INFO][6008] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2"
Sep 10 00:49:53.455484 containerd[1572]: time="2025-09-10T00:49:53.455053679Z" level=info msg="TearDown network for sandbox \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\" successfully"
Sep 10 00:49:53.455484 containerd[1572]: time="2025-09-10T00:49:53.455082804Z" level=info msg="StopPodSandbox for \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\" returns successfully"
Sep 10 00:49:53.455565 containerd[1572]: time="2025-09-10T00:49:53.455530680Z" level=info msg="RemovePodSandbox for \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\""
Sep 10 00:49:53.455565 containerd[1572]: time="2025-09-10T00:49:53.455562508Z" level=info msg="Forcibly stopping sandbox \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\""
Sep 10 00:49:53.559074 containerd[1572]: 2025-09-10 00:49:53.504 [WARNING][6034] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wdgls-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0dcf04d3-5b02-40dc-800c-0985ae063919", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 49, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781", Pod:"csi-node-driver-wdgls", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia587709d6e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:49:53.559074 containerd[1572]: 2025-09-10 00:49:53.504 [INFO][6034] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2"
Sep 10 00:49:53.559074 containerd[1572]: 2025-09-10 00:49:53.504 [INFO][6034] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" iface="eth0" netns=""
Sep 10 00:49:53.559074 containerd[1572]: 2025-09-10 00:49:53.504 [INFO][6034] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2"
Sep 10 00:49:53.559074 containerd[1572]: 2025-09-10 00:49:53.504 [INFO][6034] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2"
Sep 10 00:49:53.559074 containerd[1572]: 2025-09-10 00:49:53.539 [INFO][6043] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" HandleID="k8s-pod-network.0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" Workload="localhost-k8s-csi--node--driver--wdgls-eth0"
Sep 10 00:49:53.559074 containerd[1572]: 2025-09-10 00:49:53.540 [INFO][6043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:49:53.559074 containerd[1572]: 2025-09-10 00:49:53.540 [INFO][6043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:49:53.559074 containerd[1572]: 2025-09-10 00:49:53.548 [WARNING][6043] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" HandleID="k8s-pod-network.0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" Workload="localhost-k8s-csi--node--driver--wdgls-eth0"
Sep 10 00:49:53.559074 containerd[1572]: 2025-09-10 00:49:53.548 [INFO][6043] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" HandleID="k8s-pod-network.0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2" Workload="localhost-k8s-csi--node--driver--wdgls-eth0"
Sep 10 00:49:53.559074 containerd[1572]: 2025-09-10 00:49:53.549 [INFO][6043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:49:53.559074 containerd[1572]: 2025-09-10 00:49:53.553 [INFO][6034] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2"
Sep 10 00:49:53.559074 containerd[1572]: time="2025-09-10T00:49:53.556922824Z" level=info msg="TearDown network for sandbox \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\" successfully"
Sep 10 00:49:53.573210 containerd[1572]: time="2025-09-10T00:49:53.573154710Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 10 00:49:53.573361 containerd[1572]: time="2025-09-10T00:49:53.573257549Z" level=info msg="RemovePodSandbox \"0045f28aeb96d587a7f0b786b56ef08d6b431c350acc68e28727e0a702b6f3c2\" returns successfully"
Sep 10 00:49:53.573810 containerd[1572]: time="2025-09-10T00:49:53.573776607Z" level=info msg="StopPodSandbox for \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\""
Sep 10 00:49:53.652362 containerd[1572]: 2025-09-10 00:49:53.610 [WARNING][6062] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" WorkloadEndpoint="localhost-k8s-whisker--57d545fcc--n7lfj-eth0"
Sep 10 00:49:53.652362 containerd[1572]: 2025-09-10 00:49:53.610 [INFO][6062] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee"
Sep 10 00:49:53.652362 containerd[1572]: 2025-09-10 00:49:53.610 [INFO][6062] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" iface="eth0" netns=""
Sep 10 00:49:53.652362 containerd[1572]: 2025-09-10 00:49:53.610 [INFO][6062] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee"
Sep 10 00:49:53.652362 containerd[1572]: 2025-09-10 00:49:53.610 [INFO][6062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee"
Sep 10 00:49:53.652362 containerd[1572]: 2025-09-10 00:49:53.637 [INFO][6071] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" HandleID="k8s-pod-network.4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" Workload="localhost-k8s-whisker--57d545fcc--n7lfj-eth0"
Sep 10 00:49:53.652362 containerd[1572]: 2025-09-10 00:49:53.637 [INFO][6071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:49:53.652362 containerd[1572]: 2025-09-10 00:49:53.637 [INFO][6071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:49:53.652362 containerd[1572]: 2025-09-10 00:49:53.644 [WARNING][6071] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" HandleID="k8s-pod-network.4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" Workload="localhost-k8s-whisker--57d545fcc--n7lfj-eth0"
Sep 10 00:49:53.652362 containerd[1572]: 2025-09-10 00:49:53.644 [INFO][6071] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" HandleID="k8s-pod-network.4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" Workload="localhost-k8s-whisker--57d545fcc--n7lfj-eth0"
Sep 10 00:49:53.652362 containerd[1572]: 2025-09-10 00:49:53.645 [INFO][6071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:49:53.652362 containerd[1572]: 2025-09-10 00:49:53.648 [INFO][6062] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee"
Sep 10 00:49:53.652720 containerd[1572]: time="2025-09-10T00:49:53.652405417Z" level=info msg="TearDown network for sandbox \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\" successfully"
Sep 10 00:49:53.652720 containerd[1572]: time="2025-09-10T00:49:53.652433699Z" level=info msg="StopPodSandbox for \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\" returns successfully"
Sep 10 00:49:53.653042 containerd[1572]: time="2025-09-10T00:49:53.652994555Z" level=info msg="RemovePodSandbox for \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\""
Sep 10 00:49:53.653042 containerd[1572]: time="2025-09-10T00:49:53.653038667Z" level=info msg="Forcibly stopping sandbox \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\""
Sep 10 00:49:53.826317 containerd[1572]: 2025-09-10 00:49:53.691 [WARNING][6087] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" WorkloadEndpoint="localhost-k8s-whisker--57d545fcc--n7lfj-eth0"
Sep 10 00:49:53.826317 containerd[1572]: 2025-09-10 00:49:53.691 [INFO][6087] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee"
Sep 10 00:49:53.826317 containerd[1572]: 2025-09-10 00:49:53.691 [INFO][6087] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" iface="eth0" netns=""
Sep 10 00:49:53.826317 containerd[1572]: 2025-09-10 00:49:53.691 [INFO][6087] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee"
Sep 10 00:49:53.826317 containerd[1572]: 2025-09-10 00:49:53.691 [INFO][6087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee"
Sep 10 00:49:53.826317 containerd[1572]: 2025-09-10 00:49:53.720 [INFO][6095] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" HandleID="k8s-pod-network.4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" Workload="localhost-k8s-whisker--57d545fcc--n7lfj-eth0"
Sep 10 00:49:53.826317 containerd[1572]: 2025-09-10 00:49:53.720 [INFO][6095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:49:53.826317 containerd[1572]: 2025-09-10 00:49:53.720 [INFO][6095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:49:53.826317 containerd[1572]: 2025-09-10 00:49:53.809 [WARNING][6095] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" HandleID="k8s-pod-network.4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" Workload="localhost-k8s-whisker--57d545fcc--n7lfj-eth0"
Sep 10 00:49:53.826317 containerd[1572]: 2025-09-10 00:49:53.809 [INFO][6095] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" HandleID="k8s-pod-network.4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee" Workload="localhost-k8s-whisker--57d545fcc--n7lfj-eth0"
Sep 10 00:49:53.826317 containerd[1572]: 2025-09-10 00:49:53.812 [INFO][6095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:49:53.826317 containerd[1572]: 2025-09-10 00:49:53.822 [INFO][6087] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee"
Sep 10 00:49:53.827162 containerd[1572]: time="2025-09-10T00:49:53.826372093Z" level=info msg="TearDown network for sandbox \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\" successfully"
Sep 10 00:49:53.988008 containerd[1572]: time="2025-09-10T00:49:53.987913059Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 10 00:49:53.988553 containerd[1572]: time="2025-09-10T00:49:53.988024475Z" level=info msg="RemovePodSandbox \"4c61993b025578b580905b65ea33242b39009ab9ba5e3f256627f767bc0e3dee\" returns successfully"
Sep 10 00:49:53.992659 containerd[1572]: time="2025-09-10T00:49:53.992607796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:49:53.993450 containerd[1572]: time="2025-09-10T00:49:53.993383809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 10 00:49:53.994867 containerd[1572]: time="2025-09-10T00:49:53.994814580Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:49:53.996907 containerd[1572]: time="2025-09-10T00:49:53.996862962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:49:53.997648 containerd[1572]: time="2025-09-10T00:49:53.997598821Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.305004015s"
Sep 10 00:49:53.997648 containerd[1572]: time="2025-09-10T00:49:53.997635268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 10 00:49:53.999146 containerd[1572]: time="2025-09-10T00:49:53.998891938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 10 00:49:54.008754 containerd[1572]: time="2025-09-10T00:49:54.008658271Z" level=info msg="CreateContainer within sandbox \"480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 10 00:49:54.045790 containerd[1572]: time="2025-09-10T00:49:54.045716718Z" level=info msg="CreateContainer within sandbox \"480c8e6e0469f6a4b73caa94768d791b5edad087b796f13db62c9ac7eab8f7ac\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f830f9a3b218c1305d4c1fb1ad66e283ae6cac8a6ac83143445e4056c3fcd679\""
Sep 10 00:49:54.047354 containerd[1572]: time="2025-09-10T00:49:54.046475642Z" level=info msg="StartContainer for \"f830f9a3b218c1305d4c1fb1ad66e283ae6cac8a6ac83143445e4056c3fcd679\""
Sep 10 00:49:54.178189 containerd[1572]: time="2025-09-10T00:49:54.177417608Z" level=info msg="StartContainer for \"f830f9a3b218c1305d4c1fb1ad66e283ae6cac8a6ac83143445e4056c3fcd679\" returns successfully"
Sep 10 00:49:54.785941 kubelet[2656]: I0910 00:49:54.785813 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b4cc87785-mvnvv" podStartSLOduration=30.447663535 podStartE2EDuration="45.785783833s" podCreationTimestamp="2025-09-10 00:49:09 +0000 UTC" firstStartedPulling="2025-09-10 00:49:38.660379646 +0000 UTC m=+49.633269867" lastFinishedPulling="2025-09-10 00:49:53.998499944 +0000 UTC m=+64.971390165" observedRunningTime="2025-09-10 00:49:54.772786147 +0000 UTC m=+65.745676368" watchObservedRunningTime="2025-09-10 00:49:54.785783833 +0000 UTC m=+65.758674054"
Sep 10 00:49:56.696978 containerd[1572]: time="2025-09-10T00:49:56.696910322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:49:56.719780 containerd[1572]: time="2025-09-10T00:49:56.719689647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 10 00:49:56.738564 containerd[1572]: time="2025-09-10T00:49:56.738494140Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:49:56.770992 containerd[1572]: time="2025-09-10T00:49:56.770922695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:49:56.771719 containerd[1572]: time="2025-09-10T00:49:56.771678087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.772734533s"
Sep 10 00:49:56.771781 containerd[1572]: time="2025-09-10T00:49:56.771723882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 10 00:49:56.775626 containerd[1572]: time="2025-09-10T00:49:56.775569035Z" level=info msg="CreateContainer within sandbox \"886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 10 00:49:56.895531 containerd[1572]: time="2025-09-10T00:49:56.895464402Z" level=info msg="CreateContainer within sandbox \"886981a714c6db14e9ba59413f219d588f09956c3a6bb5c377b1c058223ca781\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"02cbbd49c2db51dbfdb98297b92a16d180625f628d672bdf68e037b372747262\""
Sep 10 00:49:56.896551 containerd[1572]: time="2025-09-10T00:49:56.896496437Z" level=info msg="StartContainer for \"02cbbd49c2db51dbfdb98297b92a16d180625f628d672bdf68e037b372747262\""
Sep 10 00:49:56.993580 containerd[1572]: time="2025-09-10T00:49:56.993421955Z" level=info msg="StartContainer for \"02cbbd49c2db51dbfdb98297b92a16d180625f628d672bdf68e037b372747262\" returns successfully"
Sep 10 00:49:57.688491 systemd[1]: Started sshd@14-10.0.0.156:22-10.0.0.1:49076.service - OpenSSH per-connection server daemon (10.0.0.1:49076).
Sep 10 00:49:57.730952 sshd[6217]: Accepted publickey for core from 10.0.0.1 port 49076 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:49:57.733178 sshd[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:49:57.738425 systemd-logind[1548]: New session 15 of user core.
Sep 10 00:49:57.744650 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 10 00:49:57.981690 sshd[6217]: pam_unix(sshd:session): session closed for user core
Sep 10 00:49:57.985445 systemd-logind[1548]: Session 15 logged out. Waiting for processes to exit.
Sep 10 00:49:57.985736 systemd[1]: sshd@14-10.0.0.156:22-10.0.0.1:49076.service: Deactivated successfully.
Sep 10 00:49:57.987980 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 00:49:57.988695 systemd-logind[1548]: Removed session 15.
Sep 10 00:49:58.115309 kubelet[2656]: I0910 00:49:58.115224 2656 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 10 00:49:58.121133 kubelet[2656]: I0910 00:49:58.121070 2656 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 10 00:50:01.117783 kubelet[2656]: E0910 00:50:01.117702 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:02.117400 kubelet[2656]: E0910 00:50:02.117349 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:03.000531 systemd[1]: Started sshd@15-10.0.0.156:22-10.0.0.1:55320.service - OpenSSH per-connection server daemon (10.0.0.1:55320).
Sep 10 00:50:03.033775 sshd[6236]: Accepted publickey for core from 10.0.0.1 port 55320 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:50:03.035595 sshd[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:50:03.039572 systemd-logind[1548]: New session 16 of user core.
Sep 10 00:50:03.049497 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 10 00:50:03.187318 sshd[6236]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:03.192962 systemd[1]: sshd@15-10.0.0.156:22-10.0.0.1:55320.service: Deactivated successfully.
Sep 10 00:50:03.195592 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 00:50:03.196103 systemd-logind[1548]: Session 16 logged out. Waiting for processes to exit.
Sep 10 00:50:03.197538 systemd-logind[1548]: Removed session 16.
Sep 10 00:50:08.206672 systemd[1]: Started sshd@16-10.0.0.156:22-10.0.0.1:55322.service - OpenSSH per-connection server daemon (10.0.0.1:55322).
Sep 10 00:50:08.235615 sshd[6251]: Accepted publickey for core from 10.0.0.1 port 55322 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:50:08.237484 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:50:08.242039 systemd-logind[1548]: New session 17 of user core.
Sep 10 00:50:08.247571 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 10 00:50:08.395844 sshd[6251]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:08.401820 systemd[1]: sshd@16-10.0.0.156:22-10.0.0.1:55322.service: Deactivated successfully.
Sep 10 00:50:08.404830 systemd-logind[1548]: Session 17 logged out. Waiting for processes to exit.
Sep 10 00:50:08.405130 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 00:50:08.406280 systemd-logind[1548]: Removed session 17.
Sep 10 00:50:13.412561 systemd[1]: Started sshd@17-10.0.0.156:22-10.0.0.1:52136.service - OpenSSH per-connection server daemon (10.0.0.1:52136).
Sep 10 00:50:13.495037 sshd[6273]: Accepted publickey for core from 10.0.0.1 port 52136 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:50:13.497426 sshd[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:50:13.506226 systemd-logind[1548]: New session 18 of user core.
Sep 10 00:50:13.509531 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 10 00:50:13.655770 sshd[6273]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:13.659734 systemd[1]: sshd@17-10.0.0.156:22-10.0.0.1:52136.service: Deactivated successfully.
Sep 10 00:50:13.662042 systemd-logind[1548]: Session 18 logged out. Waiting for processes to exit.
Sep 10 00:50:13.662087 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 00:50:13.663547 systemd-logind[1548]: Removed session 18.
Sep 10 00:50:15.193622 kubelet[2656]: I0910 00:50:15.193106 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 00:50:15.320832 kubelet[2656]: I0910 00:50:15.320763 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wdgls" podStartSLOduration=42.979877747 podStartE2EDuration="1m6.320745307s" podCreationTimestamp="2025-09-10 00:49:09 +0000 UTC" firstStartedPulling="2025-09-10 00:49:33.431946374 +0000 UTC m=+44.404836595" lastFinishedPulling="2025-09-10 00:49:56.772813934 +0000 UTC m=+67.745704155" observedRunningTime="2025-09-10 00:49:57.755552887 +0000 UTC m=+68.728443128" watchObservedRunningTime="2025-09-10 00:50:15.320745307 +0000 UTC m=+86.293635528"
Sep 10 00:50:18.667596 systemd[1]: Started sshd@18-10.0.0.156:22-10.0.0.1:52152.service - OpenSSH per-connection server daemon (10.0.0.1:52152).
Sep 10 00:50:18.698886 sshd[6312]: Accepted publickey for core from 10.0.0.1 port 52152 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:50:18.700536 sshd[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:50:18.705153 systemd-logind[1548]: New session 19 of user core.
Sep 10 00:50:18.710627 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 10 00:50:18.929313 sshd[6312]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:18.941852 systemd[1]: Started sshd@19-10.0.0.156:22-10.0.0.1:52164.service - OpenSSH per-connection server daemon (10.0.0.1:52164).
Sep 10 00:50:18.943012 systemd[1]: sshd@18-10.0.0.156:22-10.0.0.1:52152.service: Deactivated successfully.
Sep 10 00:50:18.946966 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 00:50:18.949533 systemd-logind[1548]: Session 19 logged out. Waiting for processes to exit.
Sep 10 00:50:18.950924 systemd-logind[1548]: Removed session 19.
Sep 10 00:50:18.970338 sshd[6326]: Accepted publickey for core from 10.0.0.1 port 52164 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:50:18.972343 sshd[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:50:18.979296 systemd-logind[1548]: New session 20 of user core.
Sep 10 00:50:18.985658 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 10 00:50:19.397591 sshd[6326]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:19.406475 systemd[1]: Started sshd@20-10.0.0.156:22-10.0.0.1:52176.service - OpenSSH per-connection server daemon (10.0.0.1:52176).
Sep 10 00:50:19.411094 systemd[1]: sshd@19-10.0.0.156:22-10.0.0.1:52164.service: Deactivated successfully.
Sep 10 00:50:19.416358 systemd[1]: session-20.scope: Deactivated successfully.
Sep 10 00:50:19.417112 systemd-logind[1548]: Session 20 logged out. Waiting for processes to exit.
Sep 10 00:50:19.419712 systemd-logind[1548]: Removed session 20.
Sep 10 00:50:19.464189 sshd[6339]: Accepted publickey for core from 10.0.0.1 port 52176 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:50:19.466727 sshd[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:50:19.479035 systemd-logind[1548]: New session 21 of user core.
Sep 10 00:50:19.484533 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 10 00:50:21.355399 sshd[6339]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:21.370412 systemd[1]: Started sshd@21-10.0.0.156:22-10.0.0.1:54926.service - OpenSSH per-connection server daemon (10.0.0.1:54926).
Sep 10 00:50:21.372819 systemd[1]: sshd@20-10.0.0.156:22-10.0.0.1:52176.service: Deactivated successfully.
Sep 10 00:50:21.378944 systemd[1]: session-21.scope: Deactivated successfully.
Sep 10 00:50:21.391906 systemd-logind[1548]: Session 21 logged out. Waiting for processes to exit.
Sep 10 00:50:21.393679 systemd-logind[1548]: Removed session 21.
Sep 10 00:50:21.429265 sshd[6382]: Accepted publickey for core from 10.0.0.1 port 54926 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:50:21.431098 sshd[6382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:50:21.436401 systemd-logind[1548]: New session 22 of user core.
Sep 10 00:50:21.445605 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 10 00:50:21.832438 sshd[6382]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:21.840282 kubelet[2656]: I0910 00:50:21.837862 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 00:50:21.842869 systemd[1]: Started sshd@22-10.0.0.156:22-10.0.0.1:54930.service - OpenSSH per-connection server daemon (10.0.0.1:54930).
Sep 10 00:50:21.843805 systemd[1]: sshd@21-10.0.0.156:22-10.0.0.1:54926.service: Deactivated successfully.
Sep 10 00:50:21.849207 systemd-logind[1548]: Session 22 logged out. Waiting for processes to exit.
Sep 10 00:50:21.851338 systemd[1]: session-22.scope: Deactivated successfully.
Sep 10 00:50:21.853433 systemd-logind[1548]: Removed session 22.
Sep 10 00:50:21.888113 sshd[6396]: Accepted publickey for core from 10.0.0.1 port 54930 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:50:21.890595 sshd[6396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:50:21.896257 systemd-logind[1548]: New session 23 of user core.
Sep 10 00:50:21.902913 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 10 00:50:22.052536 sshd[6396]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:22.060851 systemd-logind[1548]: Session 23 logged out. Waiting for processes to exit.
Sep 10 00:50:22.063130 systemd[1]: sshd@22-10.0.0.156:22-10.0.0.1:54930.service: Deactivated successfully.
Sep 10 00:50:22.070466 systemd[1]: session-23.scope: Deactivated successfully.
Sep 10 00:50:22.077483 systemd-logind[1548]: Removed session 23.
Sep 10 00:50:22.280952 systemd[1]: run-containerd-runc-k8s.io-ccdd9c5e9875cd3e77e7c5bfe507398222d9ecd32c6318de8bf9a10aea127821-runc.j9ETY6.mount: Deactivated successfully.
Sep 10 00:50:24.117880 kubelet[2656]: E0910 00:50:24.117842 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:27.060559 systemd[1]: Started sshd@23-10.0.0.156:22-10.0.0.1:54942.service - OpenSSH per-connection server daemon (10.0.0.1:54942).
Sep 10 00:50:27.097749 sshd[6461]: Accepted publickey for core from 10.0.0.1 port 54942 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:50:27.099492 sshd[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:50:27.103642 systemd-logind[1548]: New session 24 of user core.
Sep 10 00:50:27.112530 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 10 00:50:27.245069 sshd[6461]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:27.250461 systemd[1]: sshd@23-10.0.0.156:22-10.0.0.1:54942.service: Deactivated successfully.
Sep 10 00:50:27.253343 systemd-logind[1548]: Session 24 logged out. Waiting for processes to exit.
Sep 10 00:50:27.253684 systemd[1]: session-24.scope: Deactivated successfully.
Sep 10 00:50:27.255434 systemd-logind[1548]: Removed session 24.
Sep 10 00:50:29.169292 kubelet[2656]: E0910 00:50:29.169203 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:30.117443 kubelet[2656]: E0910 00:50:30.117355 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:32.257533 systemd[1]: Started sshd@24-10.0.0.156:22-10.0.0.1:53166.service - OpenSSH per-connection server daemon (10.0.0.1:53166).
Sep 10 00:50:32.292597 sshd[6499]: Accepted publickey for core from 10.0.0.1 port 53166 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:50:32.294481 sshd[6499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:50:32.301854 systemd-logind[1548]: New session 25 of user core.
Sep 10 00:50:32.307520 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 10 00:50:32.546023 sshd[6499]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:32.550050 systemd[1]: sshd@24-10.0.0.156:22-10.0.0.1:53166.service: Deactivated successfully.
Sep 10 00:50:32.553285 systemd-logind[1548]: Session 25 logged out. Waiting for processes to exit.
Sep 10 00:50:32.553385 systemd[1]: session-25.scope: Deactivated successfully.
Sep 10 00:50:32.554651 systemd-logind[1548]: Removed session 25.
Sep 10 00:50:37.563691 systemd[1]: Started sshd@25-10.0.0.156:22-10.0.0.1:53168.service - OpenSSH per-connection server daemon (10.0.0.1:53168).
Sep 10 00:50:37.598887 sshd[6515]: Accepted publickey for core from 10.0.0.1 port 53168 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:50:37.600835 sshd[6515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:50:37.605472 systemd-logind[1548]: New session 26 of user core.
Sep 10 00:50:37.615531 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 10 00:50:37.823570 sshd[6515]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:37.827390 systemd[1]: sshd@25-10.0.0.156:22-10.0.0.1:53168.service: Deactivated successfully.
Sep 10 00:50:37.832557 systemd[1]: session-26.scope: Deactivated successfully.
Sep 10 00:50:37.833615 systemd-logind[1548]: Session 26 logged out. Waiting for processes to exit.
Sep 10 00:50:37.835190 systemd-logind[1548]: Removed session 26.
Sep 10 00:50:42.118165 kubelet[2656]: E0910 00:50:42.118114 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:50:42.835612 systemd[1]: Started sshd@26-10.0.0.156:22-10.0.0.1:50776.service - OpenSSH per-connection server daemon (10.0.0.1:50776).
Sep 10 00:50:42.868161 sshd[6531]: Accepted publickey for core from 10.0.0.1 port 50776 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:50:42.869969 sshd[6531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:50:42.874952 systemd-logind[1548]: New session 27 of user core.
Sep 10 00:50:42.885850 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 10 00:50:43.021280 sshd[6531]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:43.027003 systemd[1]: sshd@26-10.0.0.156:22-10.0.0.1:50776.service: Deactivated successfully.
Sep 10 00:50:43.029899 systemd[1]: session-27.scope: Deactivated successfully.
Sep 10 00:50:43.030628 systemd-logind[1548]: Session 27 logged out. Waiting for processes to exit.
Sep 10 00:50:43.031857 systemd-logind[1548]: Removed session 27.