Sep 12 23:59:09.006322 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 12 23:59:09.006347 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 12 23:59:09.006359 kernel: BIOS-provided physical RAM map:
Sep 12 23:59:09.006366 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 23:59:09.006374 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 23:59:09.006380 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 23:59:09.006390 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 12 23:59:09.006396 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 12 23:59:09.006402 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 23:59:09.006411 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 12 23:59:09.006418 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 23:59:09.006424 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 23:59:09.006433 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 23:59:09.006440 kernel: NX (Execute Disable) protection: active
Sep 12 23:59:09.006447 kernel: APIC: Static calls initialized
Sep 12 23:59:09.006459 kernel: SMBIOS 2.8 present.
Sep 12 23:59:09.006466 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 12 23:59:09.006473 kernel: Hypervisor detected: KVM
Sep 12 23:59:09.006480 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 23:59:09.006487 kernel: kvm-clock: using sched offset of 3339152784 cycles
Sep 12 23:59:09.006494 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 23:59:09.006502 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 23:59:09.006509 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 23:59:09.006516 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 23:59:09.006526 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 12 23:59:09.006533 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 23:59:09.006540 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 23:59:09.006547 kernel: Using GB pages for direct mapping
Sep 12 23:59:09.006554 kernel: ACPI: Early table checksum verification disabled
Sep 12 23:59:09.006561 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 12 23:59:09.006568 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:59:09.006575 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:59:09.006582 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:59:09.006592 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 12 23:59:09.006599 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:59:09.006606 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:59:09.006613 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:59:09.006620 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:59:09.006627 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 12 23:59:09.006634 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 12 23:59:09.006645 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 12 23:59:09.006654 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 12 23:59:09.006661 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 12 23:59:09.006669 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 12 23:59:09.006676 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 12 23:59:09.006685 kernel: No NUMA configuration found
Sep 12 23:59:09.006692 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 12 23:59:09.006702 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 12 23:59:09.006709 kernel: Zone ranges:
Sep 12 23:59:09.006717 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 23:59:09.006724 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 12 23:59:09.006731 kernel: Normal empty
Sep 12 23:59:09.006738 kernel: Movable zone start for each node
Sep 12 23:59:09.006745 kernel: Early memory node ranges
Sep 12 23:59:09.006753 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 23:59:09.006760 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 12 23:59:09.006767 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 12 23:59:09.006777 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 23:59:09.006787 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 23:59:09.006794 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 12 23:59:09.006801 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 23:59:09.006809 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 23:59:09.006816 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 23:59:09.006823 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 23:59:09.006831 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 23:59:09.006838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 23:59:09.006848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 23:59:09.006856 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 23:59:09.006863 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 23:59:09.006870 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 23:59:09.006877 kernel: TSC deadline timer available
Sep 12 23:59:09.006885 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 12 23:59:09.006892 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 23:59:09.006899 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 23:59:09.006909 kernel: kvm-guest: setup PV sched yield
Sep 12 23:59:09.006933 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 12 23:59:09.006952 kernel: Booting paravirtualized kernel on KVM
Sep 12 23:59:09.006972 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 23:59:09.006983 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 23:59:09.006994 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 12 23:59:09.007004 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 12 23:59:09.007013 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 23:59:09.007023 kernel: kvm-guest: PV spinlocks enabled
Sep 12 23:59:09.007032 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 23:59:09.007045 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 12 23:59:09.007053 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 23:59:09.007061 kernel: random: crng init done
Sep 12 23:59:09.007068 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 23:59:09.007075 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 23:59:09.007082 kernel: Fallback order for Node 0: 0
Sep 12 23:59:09.007090 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 12 23:59:09.007097 kernel: Policy zone: DMA32
Sep 12 23:59:09.007107 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 23:59:09.007114 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 136900K reserved, 0K cma-reserved)
Sep 12 23:59:09.007122 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 23:59:09.007129 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 23:59:09.007136 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 23:59:09.007144 kernel: Dynamic Preempt: voluntary
Sep 12 23:59:09.007151 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 23:59:09.007159 kernel: rcu: RCU event tracing is enabled.
Sep 12 23:59:09.007166 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 23:59:09.007176 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 23:59:09.007184 kernel: Rude variant of Tasks RCU enabled.
Sep 12 23:59:09.007191 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 23:59:09.007198 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 23:59:09.007224 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 23:59:09.007235 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 23:59:09.007244 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 23:59:09.007254 kernel: Console: colour VGA+ 80x25
Sep 12 23:59:09.007263 kernel: printk: console [ttyS0] enabled
Sep 12 23:59:09.007283 kernel: ACPI: Core revision 20230628
Sep 12 23:59:09.007290 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 23:59:09.007298 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 23:59:09.007305 kernel: x2apic enabled
Sep 12 23:59:09.007313 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 23:59:09.007320 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 23:59:09.007328 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 23:59:09.007335 kernel: kvm-guest: setup PV IPIs
Sep 12 23:59:09.007353 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 23:59:09.007370 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 12 23:59:09.007391 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 23:59:09.007411 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 23:59:09.007427 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 23:59:09.007455 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 23:59:09.007467 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 23:59:09.007477 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 23:59:09.007488 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 23:59:09.007503 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 23:59:09.007513 kernel: active return thunk: retbleed_return_thunk
Sep 12 23:59:09.007523 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 23:59:09.007531 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 23:59:09.007539 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 23:59:09.007547 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 23:59:09.007555 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 23:59:09.007563 kernel: active return thunk: srso_return_thunk
Sep 12 23:59:09.007573 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 23:59:09.007581 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 23:59:09.007589 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 23:59:09.007596 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 23:59:09.007604 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 23:59:09.007612 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 23:59:09.007619 kernel: Freeing SMP alternatives memory: 32K
Sep 12 23:59:09.007627 kernel: pid_max: default: 32768 minimum: 301
Sep 12 23:59:09.007635 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 23:59:09.007645 kernel: landlock: Up and running.
Sep 12 23:59:09.007652 kernel: SELinux: Initializing.
Sep 12 23:59:09.007660 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:59:09.007668 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:59:09.007676 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 23:59:09.007683 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:59:09.007691 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:59:09.007699 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:59:09.007709 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 23:59:09.007719 kernel: ... version: 0
Sep 12 23:59:09.007727 kernel: ... bit width: 48
Sep 12 23:59:09.007734 kernel: ... generic registers: 6
Sep 12 23:59:09.007742 kernel: ... value mask: 0000ffffffffffff
Sep 12 23:59:09.007749 kernel: ... max period: 00007fffffffffff
Sep 12 23:59:09.007757 kernel: ... fixed-purpose events: 0
Sep 12 23:59:09.007764 kernel: ... event mask: 000000000000003f
Sep 12 23:59:09.007772 kernel: signal: max sigframe size: 1776
Sep 12 23:59:09.007779 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 23:59:09.007790 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 23:59:09.007797 kernel: smp: Bringing up secondary CPUs ...
Sep 12 23:59:09.007805 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 23:59:09.007812 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 23:59:09.007820 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 23:59:09.007827 kernel: smpboot: Max logical packages: 1
Sep 12 23:59:09.007835 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 23:59:09.007843 kernel: devtmpfs: initialized
Sep 12 23:59:09.007850 kernel: x86/mm: Memory block size: 128MB
Sep 12 23:59:09.007860 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 23:59:09.007868 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 23:59:09.007876 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 23:59:09.007883 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 23:59:09.007891 kernel: audit: initializing netlink subsys (disabled)
Sep 12 23:59:09.007899 kernel: audit: type=2000 audit(1757721548.411:1): state=initialized audit_enabled=0 res=1
Sep 12 23:59:09.007906 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 23:59:09.007914 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 23:59:09.007921 kernel: cpuidle: using governor menu
Sep 12 23:59:09.007932 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 23:59:09.007939 kernel: dca service started, version 1.12.1
Sep 12 23:59:09.007947 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 12 23:59:09.007955 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 12 23:59:09.007962 kernel: PCI: Using configuration type 1 for base access
Sep 12 23:59:09.007970 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 23:59:09.007978 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 23:59:09.007985 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 23:59:09.007993 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 23:59:09.008003 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 23:59:09.008011 kernel: ACPI: Added _OSI(Module Device)
Sep 12 23:59:09.008018 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 23:59:09.008026 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 23:59:09.008033 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 23:59:09.008041 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 23:59:09.008048 kernel: ACPI: Interpreter enabled
Sep 12 23:59:09.008056 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 23:59:09.008063 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 23:59:09.008076 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 23:59:09.008083 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 23:59:09.008091 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 23:59:09.008099 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 23:59:09.008415 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 23:59:09.008562 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 23:59:09.008689 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 23:59:09.008704 kernel: PCI host bridge to bus 0000:00
Sep 12 23:59:09.008844 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 23:59:09.008961 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 23:59:09.009075 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 23:59:09.009191 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 12 23:59:09.009348 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 23:59:09.009472 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 12 23:59:09.009594 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 23:59:09.009753 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 12 23:59:09.009899 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 12 23:59:09.010028 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 12 23:59:09.010154 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 12 23:59:09.010317 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 12 23:59:09.010447 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 23:59:09.010615 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 12 23:59:09.010744 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 12 23:59:09.010871 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 12 23:59:09.010998 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 12 23:59:09.011144 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 12 23:59:09.011310 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 12 23:59:09.011442 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 12 23:59:09.011575 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 12 23:59:09.011720 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 23:59:09.011848 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 12 23:59:09.012106 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 12 23:59:09.012524 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 12 23:59:09.012752 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 12 23:59:09.012909 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 12 23:59:09.013051 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 23:59:09.013261 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 12 23:59:09.013410 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 12 23:59:09.013537 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 12 23:59:09.013676 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 12 23:59:09.013803 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 12 23:59:09.013819 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 23:59:09.013827 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 23:59:09.013835 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 23:59:09.013842 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 23:59:09.013850 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 23:59:09.013858 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 23:59:09.013869 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 23:59:09.013877 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 23:59:09.013884 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 23:59:09.013895 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 23:59:09.013902 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 23:59:09.013910 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 23:59:09.013918 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 23:59:09.013926 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 23:59:09.013933 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 23:59:09.013941 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 23:59:09.013948 kernel: iommu: Default domain type: Translated
Sep 12 23:59:09.013956 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 23:59:09.013966 kernel: PCI: Using ACPI for IRQ routing
Sep 12 23:59:09.013974 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 23:59:09.013981 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 23:59:09.013989 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 12 23:59:09.014117 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 23:59:09.014290 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 23:59:09.014434 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 23:59:09.014446 kernel: vgaarb: loaded
Sep 12 23:59:09.014461 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 23:59:09.014468 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 23:59:09.014476 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 23:59:09.014484 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 23:59:09.014492 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 23:59:09.014500 kernel: pnp: PnP ACPI init
Sep 12 23:59:09.014651 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 23:59:09.014663 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 23:59:09.014674 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 23:59:09.014682 kernel: NET: Registered PF_INET protocol family
Sep 12 23:59:09.014690 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 23:59:09.014698 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 23:59:09.014706 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 23:59:09.014713 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 23:59:09.014721 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 23:59:09.014729 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 23:59:09.014737 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:59:09.014747 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:59:09.014755 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 23:59:09.014762 kernel: NET: Registered PF_XDP protocol family
Sep 12 23:59:09.014879 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 23:59:09.015070 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 23:59:09.015259 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 23:59:09.015397 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 12 23:59:09.015514 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 23:59:09.015638 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 12 23:59:09.015648 kernel: PCI: CLS 0 bytes, default 64
Sep 12 23:59:09.015656 kernel: Initialise system trusted keyrings
Sep 12 23:59:09.015664 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 23:59:09.015672 kernel: Key type asymmetric registered
Sep 12 23:59:09.015680 kernel: Asymmetric key parser 'x509' registered
Sep 12 23:59:09.015687 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 23:59:09.015695 kernel: io scheduler mq-deadline registered
Sep 12 23:59:09.015703 kernel: io scheduler kyber registered
Sep 12 23:59:09.015714 kernel: io scheduler bfq registered
Sep 12 23:59:09.015722 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 23:59:09.015730 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 23:59:09.015738 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 23:59:09.015746 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 23:59:09.015753 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 23:59:09.015761 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 23:59:09.015769 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 23:59:09.015777 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 23:59:09.015785 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 23:59:09.015942 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 23:59:09.015953 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 23:59:09.016072 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 23:59:09.016194 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T23:59:08 UTC (1757721548)
Sep 12 23:59:09.016390 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 12 23:59:09.016404 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 23:59:09.016412 kernel: NET: Registered PF_INET6 protocol family
Sep 12 23:59:09.016425 kernel: Segment Routing with IPv6
Sep 12 23:59:09.016432 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 23:59:09.016440 kernel: NET: Registered PF_PACKET protocol family
Sep 12 23:59:09.016448 kernel: Key type dns_resolver registered
Sep 12 23:59:09.016455 kernel: IPI shorthand broadcast: enabled
Sep 12 23:59:09.016463 kernel: sched_clock: Marking stable (1120004482, 115521316)->(1280774930, -45249132)
Sep 12 23:59:09.016471 kernel: registered taskstats version 1
Sep 12 23:59:09.016479 kernel: Loading compiled-in X.509 certificates
Sep 12 23:59:09.016486 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 12 23:59:09.016497 kernel: Key type .fscrypt registered
Sep 12 23:59:09.016504 kernel: Key type fscrypt-provisioning registered
Sep 12 23:59:09.016512 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 23:59:09.016520 kernel: ima: Allocated hash algorithm: sha1
Sep 12 23:59:09.016527 kernel: ima: No architecture policies found
Sep 12 23:59:09.016535 kernel: clk: Disabling unused clocks
Sep 12 23:59:09.016543 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 23:59:09.016550 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 23:59:09.016558 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 23:59:09.016568 kernel: Run /init as init process
Sep 12 23:59:09.016576 kernel: with arguments:
Sep 12 23:59:09.016583 kernel: /init
Sep 12 23:59:09.016591 kernel: with environment:
Sep 12 23:59:09.016598 kernel: HOME=/
Sep 12 23:59:09.016606 kernel: TERM=linux
Sep 12 23:59:09.016613 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 23:59:09.016623 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 23:59:09.016635 systemd[1]: Detected virtualization kvm.
Sep 12 23:59:09.016644 systemd[1]: Detected architecture x86-64.
Sep 12 23:59:09.016652 systemd[1]: Running in initrd.
Sep 12 23:59:09.016660 systemd[1]: No hostname configured, using default hostname.
Sep 12 23:59:09.016667 systemd[1]: Hostname set to .
Sep 12 23:59:09.016676 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:59:09.016684 systemd[1]: Queued start job for default target initrd.target.
Sep 12 23:59:09.016692 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:59:09.016703 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:59:09.016712 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 23:59:09.016733 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:59:09.016744 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 23:59:09.016753 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 23:59:09.016766 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 23:59:09.016774 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 23:59:09.016783 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:59:09.016791 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:59:09.016799 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:59:09.016808 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:59:09.016816 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:59:09.016825 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:59:09.016836 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:59:09.016844 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:59:09.016852 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 23:59:09.016861 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 23:59:09.016869 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:59:09.016878 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:59:09.016886 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:59:09.016895 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 23:59:09.016903 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 23:59:09.016914 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:59:09.016923 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 23:59:09.016931 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 23:59:09.016939 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:59:09.016948 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:59:09.016956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:59:09.016964 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 23:59:09.016973 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:59:09.016984 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 23:59:09.016993 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:59:09.017020 systemd-journald[192]: Collecting audit messages is disabled.
Sep 12 23:59:09.017041 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:59:09.017050 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 23:59:09.017061 kernel: Bridge firewalling registered
Sep 12 23:59:09.017069 systemd-journald[192]: Journal started
Sep 12 23:59:09.017087 systemd-journald[192]: Runtime Journal (/run/log/journal/47b80d10b2f041a1a1fa8483b7ad7153) is 6.0M, max 48.4M, 42.3M free.
Sep 12 23:59:08.987693 systemd-modules-load[194]: Inserted module 'overlay'
Sep 12 23:59:09.045594 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:59:09.015219 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 12 23:59:09.045885 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:59:09.052080 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:59:09.068417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:59:09.070887 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:59:09.072842 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:59:09.076972 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:59:09.089078 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:59:09.091828 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:59:09.094764 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:59:09.097104 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:59:09.111451 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 23:59:09.115161 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:59:09.128468 dracut-cmdline[228]: dracut-dracut-053
Sep 12 23:59:09.132810 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 12 23:59:09.164026 systemd-resolved[232]: Positive Trust Anchors:
Sep 12 23:59:09.164046 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 23:59:09.164086 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 23:59:09.167767 systemd-resolved[232]: Defaulting to hostname 'linux'.
Sep 12 23:59:09.169324 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 23:59:09.180706 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:59:09.303318 kernel: SCSI subsystem initialized
Sep 12 23:59:09.314285 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 23:59:09.360243 kernel: iscsi: registered transport (tcp)
Sep 12 23:59:09.383252 kernel: iscsi: registered transport (qla4xxx)
Sep 12 23:59:09.383343 kernel: QLogic iSCSI HBA Driver
Sep 12 23:59:09.455185 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:59:09.465458 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 23:59:09.503110 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 23:59:09.503277 kernel: device-mapper: uevent: version 1.0.3
Sep 12 23:59:09.503293 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 23:59:09.555346 kernel: raid6: avx2x4 gen() 27848 MB/s
Sep 12 23:59:09.572263 kernel: raid6: avx2x2 gen() 28253 MB/s
Sep 12 23:59:09.589500 kernel: raid6: avx2x1 gen() 14216 MB/s
Sep 12 23:59:09.589660 kernel: raid6: using algorithm avx2x2 gen() 28253 MB/s
Sep 12 23:59:09.608444 kernel: raid6: .... xor() 13752 MB/s, rmw enabled
Sep 12 23:59:09.608553 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 23:59:09.639286 kernel: xor: automatically using best checksumming function avx
Sep 12 23:59:09.820247 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 23:59:09.835649 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:59:09.850425 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:59:09.866236 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Sep 12 23:59:09.871756 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:59:09.880538 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 23:59:09.900503 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Sep 12 23:59:09.951322 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:59:09.957603 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:59:10.037021 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:59:10.049435 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 23:59:10.064096 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:59:10.067580 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:59:10.068932 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:59:10.069277 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:59:10.082364 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 23:59:10.084278 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 12 23:59:10.089471 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 23:59:10.090428 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:59:10.096566 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 23:59:10.096597 kernel: GPT:9289727 != 19775487
Sep 12 23:59:10.096612 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 23:59:10.097598 kernel: GPT:9289727 != 19775487
Sep 12 23:59:10.097620 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 23:59:10.099284 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:59:10.101230 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 23:59:10.122766 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 23:59:10.122903 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:59:10.127710 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:59:10.131528 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 23:59:10.132743 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:59:10.138658 kernel: AES CTR mode by8 optimization enabled
Sep 12 23:59:10.133175 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:59:10.173889 kernel: libata version 3.00 loaded.
Sep 12 23:59:10.137310 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:59:10.178224 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (461)
Sep 12 23:59:10.178275 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466)
Sep 12 23:59:10.184313 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:59:10.188761 kernel: ahci 0000:00:1f.2: version 3.0
Sep 12 23:59:10.189049 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 12 23:59:10.190760 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 12 23:59:10.190940 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 12 23:59:10.197268 kernel: scsi host0: ahci
Sep 12 23:59:10.198293 kernel: scsi host1: ahci
Sep 12 23:59:10.199227 kernel: scsi host2: ahci
Sep 12 23:59:10.201325 kernel: scsi host3: ahci
Sep 12 23:59:10.201525 kernel: scsi host4: ahci
Sep 12 23:59:10.204045 kernel: scsi host5: ahci
Sep 12 23:59:10.204269 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Sep 12 23:59:10.204282 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Sep 12 23:59:10.203960 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 23:59:10.242026 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Sep 12 23:59:10.242058 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Sep 12 23:59:10.242069 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Sep 12 23:59:10.242079 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Sep 12 23:59:10.217751 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 23:59:10.247292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:59:10.256510 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 23:59:10.256837 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 23:59:10.264901 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 23:59:10.280535 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 23:59:10.283846 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:59:10.324162 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:59:10.372721 disk-uuid[561]: Primary Header is updated.
Sep 12 23:59:10.372721 disk-uuid[561]: Secondary Entries is updated.
Sep 12 23:59:10.372721 disk-uuid[561]: Secondary Header is updated.
Sep 12 23:59:10.377228 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:59:10.382260 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:59:10.517268 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 12 23:59:10.517352 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 12 23:59:10.518280 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 12 23:59:10.519318 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 12 23:59:10.520275 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 12 23:59:10.521605 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 12 23:59:10.521628 kernel: ata3.00: applying bridge limits
Sep 12 23:59:10.522248 kernel: ata3.00: configured for UDMA/100
Sep 12 23:59:10.524247 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 12 23:59:10.527238 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 12 23:59:10.578504 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 12 23:59:10.579014 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 23:59:10.597318 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 12 23:59:11.451144 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:59:11.451692 disk-uuid[571]: The operation has completed successfully.
Sep 12 23:59:11.491051 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 23:59:11.491279 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 23:59:11.529479 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 23:59:11.534246 sh[594]: Success
Sep 12 23:59:11.551065 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 12 23:59:11.589360 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 23:59:11.603255 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 23:59:11.609546 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 23:59:11.694682 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 12 23:59:11.694734 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 23:59:11.694746 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 23:59:11.695679 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 23:59:11.696394 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 23:59:11.702112 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 23:59:11.703280 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 23:59:11.714411 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 23:59:11.717891 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 23:59:11.727699 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 12 23:59:11.727746 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 23:59:11.727757 kernel: BTRFS info (device vda6): using free space tree
Sep 12 23:59:11.732247 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 23:59:11.742297 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 23:59:11.743893 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 12 23:59:11.767914 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 23:59:11.775407 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 23:59:11.873254 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:59:12.015983 ignition[696]: Ignition 2.19.0
Sep 12 23:59:12.015997 ignition[696]: Stage: fetch-offline
Sep 12 23:59:12.018174 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:59:12.016096 ignition[696]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:59:12.016112 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:59:12.016357 ignition[696]: parsed url from cmdline: ""
Sep 12 23:59:12.016363 ignition[696]: no config URL provided
Sep 12 23:59:12.016371 ignition[696]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 23:59:12.016389 ignition[696]: no config at "/usr/lib/ignition/user.ign"
Sep 12 23:59:12.016449 ignition[696]: op(1): [started] loading QEMU firmware config module
Sep 12 23:59:12.016457 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 23:59:12.031461 ignition[696]: op(1): [finished] loading QEMU firmware config module
Sep 12 23:59:12.047142 systemd-networkd[781]: lo: Link UP
Sep 12 23:59:12.047153 systemd-networkd[781]: lo: Gained carrier
Sep 12 23:59:12.049861 systemd-networkd[781]: Enumeration completed
Sep 12 23:59:12.050121 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:59:12.050611 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:59:12.050616 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:59:12.050662 systemd[1]: Reached target network.target - Network.
Sep 12 23:59:12.052965 systemd-networkd[781]: eth0: Link UP
Sep 12 23:59:12.052973 systemd-networkd[781]: eth0: Gained carrier
Sep 12 23:59:12.052982 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:59:12.079268 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 23:59:12.087479 ignition[696]: parsing config with SHA512: 949a9a25e47a3404d2509fbd6063e94986ba3a39f2ffecda677a1e24c581d5556eb8ae5415cdeaa826e6e94d85a4f36bd534adf6675e771037a3be043fa20db4
Sep 12 23:59:12.094159 unknown[696]: fetched base config from "system"
Sep 12 23:59:12.094616 ignition[696]: fetch-offline: fetch-offline passed
Sep 12 23:59:12.094173 unknown[696]: fetched user config from "qemu"
Sep 12 23:59:12.094682 ignition[696]: Ignition finished successfully
Sep 12 23:59:12.097474 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:59:12.099379 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 23:59:12.108424 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 23:59:12.122865 ignition[786]: Ignition 2.19.0
Sep 12 23:59:12.122877 ignition[786]: Stage: kargs
Sep 12 23:59:12.123061 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:59:12.123073 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:59:12.123924 ignition[786]: kargs: kargs passed
Sep 12 23:59:12.123972 ignition[786]: Ignition finished successfully
Sep 12 23:59:12.131136 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 23:59:12.143471 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 23:59:12.168489 ignition[793]: Ignition 2.19.0
Sep 12 23:59:12.168503 ignition[793]: Stage: disks
Sep 12 23:59:12.168728 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:59:12.168745 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:59:12.170051 ignition[793]: disks: disks passed
Sep 12 23:59:12.170112 ignition[793]: Ignition finished successfully
Sep 12 23:59:12.175897 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 23:59:12.176802 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 23:59:12.178603 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 23:59:12.180510 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:59:12.183134 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:59:12.183524 systemd[1]: Reached target basic.target - Basic System.
Sep 12 23:59:12.193579 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 23:59:12.210445 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 23:59:12.217396 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 23:59:12.223304 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 23:59:12.324239 kernel: EXT4-fs (vda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 12 23:59:12.324808 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 23:59:12.326312 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:59:12.334316 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:59:12.336249 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 23:59:12.337341 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 23:59:12.337385 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 23:59:12.345348 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Sep 12 23:59:12.337409 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:59:12.350684 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 12 23:59:12.350708 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 23:59:12.350722 kernel: BTRFS info (device vda6): using free space tree
Sep 12 23:59:12.350736 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 23:59:12.345422 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 23:59:12.358367 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 23:59:12.360661 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:59:12.396071 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 23:59:12.400826 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Sep 12 23:59:12.405152 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 23:59:12.409554 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 23:59:12.558427 systemd-resolved[232]: Detected conflict on linux IN A 10.0.0.22
Sep 12 23:59:12.558454 systemd-resolved[232]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Sep 12 23:59:12.573487 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 23:59:12.584349 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 23:59:12.603676 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 23:59:12.617286 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 12 23:59:12.633113 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 23:59:12.651569 ignition[925]: INFO : Ignition 2.19.0
Sep 12 23:59:12.651569 ignition[925]: INFO : Stage: mount
Sep 12 23:59:12.653568 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:59:12.653568 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:59:12.653568 ignition[925]: INFO : mount: mount passed
Sep 12 23:59:12.653568 ignition[925]: INFO : Ignition finished successfully
Sep 12 23:59:12.658189 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 23:59:12.671443 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 23:59:12.693497 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 23:59:12.706605 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:59:12.714901 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Sep 12 23:59:12.714937 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 12 23:59:12.714949 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 23:59:12.715789 kernel: BTRFS info (device vda6): using free space tree
Sep 12 23:59:12.719231 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 23:59:12.721510 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:59:12.753411 ignition[954]: INFO : Ignition 2.19.0 Sep 12 23:59:12.753411 ignition[954]: INFO : Stage: files Sep 12 23:59:12.755267 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:59:12.755267 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 23:59:12.755267 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Sep 12 23:59:12.759148 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 23:59:12.759148 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 23:59:12.763300 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 23:59:12.765096 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 23:59:12.766837 unknown[954]: wrote ssh authorized keys file for user: core Sep 12 23:59:12.768025 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 23:59:12.770373 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 12 23:59:12.772214 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 12 23:59:12.773867 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 23:59:12.775694 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 12 23:59:13.030130 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 23:59:13.308836 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 23:59:13.308836 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 23:59:13.313323 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 23:59:13.313323 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 23:59:13.313323 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 23:59:13.313323 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 23:59:13.313323 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 23:59:13.313323 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 23:59:13.313323 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 23:59:13.313323 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 23:59:13.313323 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:59:13.313323 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 23:59:13.313323 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 23:59:13.313323 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 23:59:13.313323 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 12 23:59:13.541474 systemd-networkd[781]: eth0: Gained IPv6LL Sep 12 23:59:13.735482 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 23:59:14.720838 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 23:59:14.720838 ignition[954]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 12 23:59:14.724772 ignition[954]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 12 23:59:14.727468 ignition[954]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 12 23:59:14.727468 ignition[954]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 12 23:59:14.727468 ignition[954]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 12 23:59:14.732191 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 23:59:14.732191 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 23:59:14.732191 ignition[954]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 12 23:59:14.732191 ignition[954]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Sep 12 23:59:14.738369 ignition[954]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 23:59:14.738369 ignition[954]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 23:59:14.738369 ignition[954]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Sep 12 23:59:14.738369 ignition[954]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 23:59:14.790391 ignition[954]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 23:59:14.798450 ignition[954]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 23:59:14.800267 ignition[954]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 23:59:14.801729 ignition[954]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 23:59:14.803159 ignition[954]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 23:59:14.804925 ignition[954]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:59:14.807113 ignition[954]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:59:14.809056 ignition[954]: INFO : files: files passed Sep 12 23:59:14.809876 ignition[954]: INFO : Ignition finished successfully Sep 12 23:59:14.813660 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 23:59:14.830408 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 23:59:14.832833 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 23:59:14.835568 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 23:59:14.835713 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 23:59:14.849779 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 23:59:14.854317 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:59:14.854317 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:59:14.857648 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:59:14.861472 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:59:14.864695 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 23:59:14.884443 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 23:59:14.918959 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 23:59:14.919150 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 23:59:14.921992 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 23:59:14.923530 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 23:59:14.925735 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 23:59:14.927242 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 23:59:14.959336 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:59:14.976551 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 23:59:14.986241 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:59:14.988877 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:59:14.991230 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 23:59:14.993008 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 23:59:14.994055 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:59:14.996651 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 23:59:14.998689 systemd[1]: Stopped target basic.target - Basic System. Sep 12 23:59:15.000489 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
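Editor's note: the files stage above is driven entirely by the machine's Ignition config. A hypothetical reconstruction of roughly the config shape that would produce these operations; field names follow the Ignition v3 spec, but the actual config, its spec version, and the unit contents are not shown in the log, so the values below are placeholders:

    import json

    config = {
        "ignition": {"version": "3.4.0"},  # assumed; Ignition 2.19.0 accepts v3 configs
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "containerd.service",
                 "dropins": [{"name": "10-use-cgroupfs.conf", "contents": "..."}]},
                {"name": "prepare-helm.service", "enabled": True, "contents": "..."},
                {"name": "coreos-metadata.service", "enabled": False, "contents": "..."},
            ],
        },
    }
    print(json.dumps(config, indent=2))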
Sep 12 23:59:15.002678 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:59:15.004950 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 23:59:15.007123 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 23:59:15.009302 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 23:59:15.011764 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 23:59:15.014045 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 23:59:15.016225 systemd[1]: Stopped target swap.target - Swaps. Sep 12 23:59:15.017968 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 23:59:15.019088 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 23:59:15.021607 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:59:15.024236 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:59:15.026595 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 23:59:15.027728 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:59:15.030741 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 23:59:15.032005 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 23:59:15.034606 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 23:59:15.035805 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:59:15.038361 systemd[1]: Stopped target paths.target - Path Units. Sep 12 23:59:15.040170 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 23:59:15.041411 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:59:15.044544 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 23:59:15.046514 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 23:59:15.048412 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 23:59:15.049307 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 23:59:15.051278 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 23:59:15.052186 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 23:59:15.054391 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 23:59:15.055609 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:59:15.058303 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 23:59:15.059363 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 23:59:15.080580 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 23:59:15.082886 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 23:59:15.084239 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:59:15.088372 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 23:59:15.090158 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 23:59:15.090312 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:59:15.092907 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 23:59:15.094300 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 23:59:15.100279 ignition[1009]: INFO : Ignition 2.19.0 Sep 12 23:59:15.100279 ignition[1009]: INFO : Stage: umount Sep 12 23:59:15.103026 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:59:15.103026 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 23:59:15.103026 ignition[1009]: INFO : umount: umount passed Sep 12 23:59:15.103026 ignition[1009]: INFO : Ignition finished successfully Sep 12 23:59:15.103954 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 23:59:15.104084 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 23:59:15.106001 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 23:59:15.106147 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 23:59:15.109966 systemd[1]: Stopped target network.target - Network. Sep 12 23:59:15.111671 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 23:59:15.111734 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 23:59:15.113998 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 23:59:15.114054 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 23:59:15.116023 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 23:59:15.116074 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 23:59:15.117949 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 23:59:15.118004 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 23:59:15.120066 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 23:59:15.122085 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 23:59:15.123250 systemd-networkd[781]: eth0: DHCPv6 lease lost Sep 12 23:59:15.125992 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 23:59:15.127447 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 23:59:15.127599 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 23:59:15.130359 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 23:59:15.130415 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:59:15.140397 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 23:59:15.141402 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 23:59:15.141493 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:59:15.144025 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:59:15.146643 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 23:59:15.146795 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 23:59:15.160714 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 23:59:15.161491 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:59:15.162255 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 23:59:15.162310 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 23:59:15.164726 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 23:59:15.164777 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 12 23:59:15.170507 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 23:59:15.170637 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 23:59:15.174237 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 23:59:15.174441 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:59:15.175045 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 23:59:15.175110 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 23:59:15.177668 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 23:59:15.177711 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:59:15.179549 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 23:59:15.179604 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:59:15.181970 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 23:59:15.182024 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 23:59:15.182765 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 23:59:15.182813 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:59:15.197342 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 23:59:15.197800 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 23:59:15.197859 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:59:15.200163 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:59:15.200230 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:59:15.222189 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 23:59:15.222330 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 23:59:15.299525 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 23:59:15.299691 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 23:59:15.302407 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 23:59:15.303582 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 23:59:15.303650 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 23:59:15.318427 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 23:59:15.327113 systemd[1]: Switching root. Sep 12 23:59:15.354084 systemd-journald[192]: Journal stopped Sep 12 23:59:17.119674 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
Sep 12 23:59:17.119767 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 23:59:17.119795 kernel: SELinux: policy capability open_perms=1 Sep 12 23:59:17.119806 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 23:59:17.119821 kernel: SELinux: policy capability always_check_network=0 Sep 12 23:59:17.119841 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 23:59:17.119858 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 23:59:17.119870 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 23:59:17.119882 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 23:59:17.119894 kernel: audit: type=1403 audit(1757721556.262:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 23:59:17.119913 systemd[1]: Successfully loaded SELinux policy in 61.148ms. Sep 12 23:59:17.119939 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.472ms. Sep 12 23:59:17.119959 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 23:59:17.119971 systemd[1]: Detected virtualization kvm. Sep 12 23:59:17.119989 systemd[1]: Detected architecture x86-64. Sep 12 23:59:17.120001 systemd[1]: Detected first boot. Sep 12 23:59:17.120013 systemd[1]: Initializing machine ID from VM UUID. Sep 12 23:59:17.120028 zram_generator::config[1075]: No configuration found. Sep 12 23:59:17.120055 systemd[1]: Populated /etc with preset unit settings. Sep 12 23:59:17.120071 systemd[1]: Queued start job for default target multi-user.target. Sep 12 23:59:17.120084 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 23:59:17.120098 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 23:59:17.120117 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 23:59:17.120132 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 23:59:17.120144 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 23:59:17.120157 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 23:59:17.120169 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 23:59:17.120181 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 23:59:17.120194 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 23:59:17.120252 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:59:17.120271 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:59:17.120284 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 23:59:17.120296 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 23:59:17.120309 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 23:59:17.120321 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 23:59:17.120334 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
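Editor's note: the long +/- string in the systemd version banner above encodes compile-time features, '+' for built in and '-' for built without. Parsing it directly (the string is copied from the banner; note -BPF_FRAMEWORK, which is why journald warns just below that BPF/cgroup firewalling is unsupported):

    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
              "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
              "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
              "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT "
              "default-hierarchy=unified")

    enabled = {tok[1:] for tok in banner.split() if tok[0] == "+"}
    disabled = {tok[1:] for tok in banner.split() if tok[0] == "-"}
    print("SELINUX" in enabled, "BPF_FRAMEWORK" in disabled)  # True True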
Sep 12 23:59:17.120347 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:59:17.120372 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 23:59:17.120386 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:59:17.120404 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 23:59:17.120416 systemd[1]: Reached target slices.target - Slice Units. Sep 12 23:59:17.120429 systemd[1]: Reached target swap.target - Swaps. Sep 12 23:59:17.120441 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 23:59:17.120464 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 23:59:17.120487 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 23:59:17.120503 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 23:59:17.120514 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:59:17.120526 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 23:59:17.120543 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:59:17.120557 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 23:59:17.120568 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 23:59:17.120581 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 23:59:17.120593 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 23:59:17.120605 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:59:17.120618 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 23:59:17.120630 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 23:59:17.120647 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 23:59:17.120660 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 23:59:17.120675 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:59:17.120688 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 23:59:17.120701 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 23:59:17.120713 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:59:17.120725 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:59:17.120737 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:59:17.120750 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 23:59:17.120768 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:59:17.120781 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 23:59:17.120794 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 12 23:59:17.120807 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
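Editor's note: several units above are skipped on condition checks rather than failing; a '!' prefix negates the test, as in ConditionPathExists=!/etc/nsswitch.conf. A toy evaluator for just that one condition type (systemd implements many more):

    import os

    def condition_path_exists(expr):
        """Toy ConditionPathExists= check with systemd's '!' negation."""
        negated = expr.startswith("!")
        exists = os.path.exists(expr.lstrip("!"))
        return not exists if negated else exists

    # setup-nsswitch.service was skipped above because /etc/nsswitch.conf
    # already existed, so the negated condition evaluated to False there.
    print(condition_path_exists("!/etc/nsswitch.conf"))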
Sep 12 23:59:17.120829 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 23:59:17.120844 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 23:59:17.120856 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 23:59:17.120868 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 23:59:17.120886 kernel: loop: module loaded Sep 12 23:59:17.120898 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:59:17.120911 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:59:17.120925 kernel: ACPI: bus type drm_connector registered Sep 12 23:59:17.120942 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 23:59:17.120959 kernel: fuse: init (API version 7.39) Sep 12 23:59:17.120971 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 23:59:17.120982 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 23:59:17.121029 systemd-journald[1149]: Collecting audit messages is disabled. Sep 12 23:59:17.121074 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 23:59:17.121092 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 23:59:17.121106 systemd-journald[1149]: Journal started Sep 12 23:59:17.121134 systemd-journald[1149]: Runtime Journal (/run/log/journal/47b80d10b2f041a1a1fa8483b7ad7153) is 6.0M, max 48.4M, 42.3M free. Sep 12 23:59:17.122594 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 23:59:17.125016 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 23:59:17.126826 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:59:17.129695 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 23:59:17.129955 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 23:59:17.132120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:59:17.132449 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:59:17.134302 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:59:17.134597 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:59:17.136228 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 23:59:17.138055 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:59:17.138356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:59:17.140439 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 23:59:17.140716 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 23:59:17.142481 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:59:17.142781 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:59:17.144885 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 23:59:17.146808 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 23:59:17.149089 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Sep 12 23:59:17.165293 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 23:59:17.178366 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 23:59:17.181148 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 23:59:17.182892 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 23:59:17.188429 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 23:59:17.192517 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 23:59:17.197017 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:59:17.199052 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 23:59:17.379154 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:59:17.387472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:59:17.393397 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 23:59:17.397165 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 23:59:17.397478 systemd-journald[1149]: Time spent on flushing to /var/log/journal/47b80d10b2f041a1a1fa8483b7ad7153 is 45.685ms for 942 entries. Sep 12 23:59:17.397478 systemd-journald[1149]: System Journal (/var/log/journal/47b80d10b2f041a1a1fa8483b7ad7153) is 8.0M, max 195.6M, 187.6M free. Sep 12 23:59:17.745690 systemd-journald[1149]: Received client request to flush runtime journal. Sep 12 23:59:17.401069 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:59:17.402831 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 23:59:17.410490 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 23:59:17.429428 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 23:59:17.472635 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 23:59:17.489244 udevadm[1215]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 23:59:17.679455 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Sep 12 23:59:17.679470 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Sep 12 23:59:17.682358 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:59:17.685537 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:59:17.693468 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 23:59:17.720719 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 23:59:17.731996 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:59:17.748069 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 23:59:17.754086 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Sep 12 23:59:17.754102 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. 
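Editor's note: the journal flush accounting above works out to roughly 48 microseconds per entry:

    flush_ms, entries = 45.685, 942          # figures from the journald line above
    print(f"{flush_ms / entries * 1000:.1f} us/entry")  # ~48.5 us/entry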
Sep 12 23:59:17.768892 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:59:18.998059 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 23:59:19.009521 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:59:19.042768 systemd-udevd[1237]: Using default interface naming scheme 'v255'. Sep 12 23:59:19.068752 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:59:19.080340 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:59:19.094443 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 23:59:19.117314 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1250) Sep 12 23:59:19.236093 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 12 23:59:19.253061 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 23:59:19.285640 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 23:59:19.305233 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 12 23:59:19.317883 kernel: ACPI: button: Power Button [PWRF] Sep 12 23:59:19.346434 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 12 23:59:19.346788 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 12 23:59:19.347065 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 23:59:19.371226 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 12 23:59:19.398563 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:59:19.422132 systemd-networkd[1247]: lo: Link UP Sep 12 23:59:19.422147 systemd-networkd[1247]: lo: Gained carrier Sep 12 23:59:19.425057 systemd-networkd[1247]: Enumeration completed Sep 12 23:59:19.425369 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 23:59:19.425738 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:59:19.425744 systemd-networkd[1247]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:59:19.429817 systemd-networkd[1247]: eth0: Link UP Sep 12 23:59:19.429833 systemd-networkd[1247]: eth0: Gained carrier Sep 12 23:59:19.429859 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:59:19.464243 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 23:59:19.483667 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 23:59:19.490368 systemd-networkd[1247]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 23:59:19.501262 kernel: kvm_amd: TSC scaling supported Sep 12 23:59:19.501446 kernel: kvm_amd: Nested Virtualization enabled Sep 12 23:59:19.501480 kernel: kvm_amd: Nested Paging enabled Sep 12 23:59:19.501513 kernel: kvm_amd: LBR virtualization supported Sep 12 23:59:19.501542 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 12 23:59:19.501585 kernel: kvm_amd: Virtual GIF supported Sep 12 23:59:19.527244 kernel: EDAC MC: Ver: 3.0.0 Sep 12 23:59:19.540937 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
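Editor's note: the DHCPv4 lease line above carries the address, prefix length, and gateway in one message; Python's ipaddress module splits it cleanly (the regex is shaped to match the systemd-networkd wording above):

    import ipaddress
    import re

    line = ("systemd-networkd[1247]: eth0: DHCPv4 address 10.0.0.22/16, "
            "gateway 10.0.0.1 acquired from 10.0.0.1")
    m = re.search(r"DHCPv4 address (\S+), gateway (\S+) acquired from (\S+)", line)
    iface = ipaddress.ip_interface(m.group(1))
    print(iface.ip, iface.network, m.group(2))  # 10.0.0.22 10.0.0.0/16 10.0.0.1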
Sep 12 23:59:19.565997 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 23:59:19.577354 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 23:59:19.589185 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 23:59:19.624027 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 23:59:19.625908 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:59:19.638446 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 23:59:19.646343 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 23:59:19.686349 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 23:59:19.688428 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 23:59:19.689955 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 23:59:19.690013 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:59:19.691122 systemd[1]: Reached target machines.target - Containers. Sep 12 23:59:19.693602 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 23:59:19.709544 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 23:59:19.713060 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 23:59:19.714545 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:59:19.715729 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 23:59:19.718597 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 23:59:19.721751 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 23:59:19.725128 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 23:59:19.735633 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 23:59:19.738757 kernel: loop0: detected capacity change from 0 to 142488 Sep 12 23:59:19.752444 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 23:59:19.753369 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 23:59:19.768315 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 23:59:19.795244 kernel: loop1: detected capacity change from 0 to 140768 Sep 12 23:59:19.862233 kernel: loop2: detected capacity change from 0 to 221472 Sep 12 23:59:19.913244 kernel: loop3: detected capacity change from 0 to 142488 Sep 12 23:59:19.927251 kernel: loop4: detected capacity change from 0 to 140768 Sep 12 23:59:19.938322 kernel: loop5: detected capacity change from 0 to 221472 Sep 12 23:59:19.944188 (sd-merge)[1307]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 23:59:19.944985 (sd-merge)[1307]: Merged extensions into '/usr'. Sep 12 23:59:19.952817 systemd[1]: Reloading requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)... 
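Editor's note: the (sd-merge) lines above are systemd-sysext at work: the loop devices scanned just before are extension images, each carrying a read-only /usr tree, and the merge stacks them over the base /usr. A conceptual sketch of the resulting overlay ordering, where the first lowerdir listed is the topmost layer (the /run/extensions paths are illustrative, not the exact mount points systemd-sysext uses):

    extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes"]
    layers = [f"/run/extensions/{name}/usr" for name in reversed(extensions)] + ["/usr"]
    print("lowerdir=" + ":".join(layers))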
Sep 12 23:59:19.952832 systemd[1]: Reloading... Sep 12 23:59:20.032229 zram_generator::config[1335]: No configuration found. Sep 12 23:59:20.123671 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 23:59:20.205646 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:59:20.281868 systemd[1]: Reloading finished in 328 ms. Sep 12 23:59:20.302755 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 23:59:20.304476 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 23:59:20.330477 systemd[1]: Starting ensure-sysext.service... Sep 12 23:59:20.333143 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:59:20.339671 systemd[1]: Reloading requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)... Sep 12 23:59:20.339691 systemd[1]: Reloading... Sep 12 23:59:20.364267 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 23:59:20.364826 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 23:59:20.366268 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 23:59:20.366723 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Sep 12 23:59:20.366833 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Sep 12 23:59:20.376359 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:59:20.376376 systemd-tmpfiles[1380]: Skipping /boot Sep 12 23:59:20.396910 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:59:20.397112 systemd-tmpfiles[1380]: Skipping /boot Sep 12 23:59:20.405269 zram_generator::config[1410]: No configuration found. Sep 12 23:59:20.552329 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:59:20.631335 systemd[1]: Reloading finished in 291 ms. Sep 12 23:59:20.654506 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:59:20.681950 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:59:20.685328 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 23:59:20.688695 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 23:59:20.693924 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 23:59:20.699633 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 23:59:20.706327 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:59:20.706909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:59:20.708601 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 12 23:59:20.712125 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:59:20.718532 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:59:20.721437 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:59:20.721552 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:59:20.722864 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:59:20.723117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:59:20.728246 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:59:20.728497 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:59:20.740968 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 23:59:20.743802 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:59:20.744135 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:59:20.748743 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 23:59:20.751862 augenrules[1483]: No rules Sep 12 23:59:20.754054 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:59:20.761048 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:59:20.761430 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:59:20.766519 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:59:20.772543 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:59:20.777431 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:59:20.781161 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:59:20.782566 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:59:20.795672 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 23:59:20.797046 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:59:20.799127 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 23:59:20.801905 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:59:20.802250 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:59:20.804424 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:59:20.804704 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:59:20.806759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:59:20.807037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:59:20.807246 systemd-resolved[1459]: Positive Trust Anchors:
Sep 12 23:59:20.807265 systemd-resolved[1459]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:59:20.807295 systemd-resolved[1459]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:59:20.809193 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:59:20.809512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:59:20.811738 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 23:59:20.812432 systemd-resolved[1459]: Defaulting to hostname 'linux'. Sep 12 23:59:20.815846 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:59:20.817975 systemd[1]: Finished ensure-sysext.service. Sep 12 23:59:20.826445 systemd[1]: Reached target network.target - Network. Sep 12 23:59:20.827438 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:59:20.828814 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:59:20.828901 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:59:20.842381 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 23:59:20.843618 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 23:59:20.901467 systemd-networkd[1247]: eth0: Gained IPv6LL Sep 12 23:59:20.905557 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 23:59:20.907402 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 23:59:20.917687 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 23:59:21.919517 systemd-resolved[1459]: Clock change detected. Flushing caches. Sep 12 23:59:21.919568 systemd-timesyncd[1516]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 23:59:21.919620 systemd-timesyncd[1516]: Initial clock synchronization to Fri 2025-09-12 23:59:21.919445 UTC. Sep 12 23:59:21.920504 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:59:21.921741 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 23:59:21.923125 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 23:59:21.924434 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 23:59:21.925726 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 23:59:21.925784 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:59:21.926747 systemd[1]: Reached target time-set.target - System Time Set.
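Editor's note: the positive trust anchor above is the DNS root zone's DS record (RFC 4034): owner '.', key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256). Splitting it into its fields:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, klass, rrtype, key_tag, alg, digest_type, digest = ds.split()
    print(key_tag, alg, digest_type, len(digest) // 2)  # 20326 8 2 32 (32-byte digest)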
Sep 12 23:59:21.928113 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 23:59:21.929369 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 23:59:21.930684 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:59:21.932662 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 23:59:21.935902 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 23:59:21.938619 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 23:59:21.945566 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 23:59:21.946940 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:59:21.948134 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:59:21.949533 systemd[1]: System is tainted: cgroupsv1 Sep 12 23:59:21.949598 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:59:21.949634 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:59:21.951539 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 23:59:21.954163 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 23:59:21.958250 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 23:59:21.961783 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 23:59:21.965264 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 23:59:21.967377 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 23:59:21.970420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:21.974531 jq[1526]: false Sep 12 23:59:21.975500 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 23:59:21.981215 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 23:59:21.984620 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 23:59:21.990607 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 23:59:21.995628 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Sep 12 23:59:22.000588 extend-filesystems[1528]: Found loop3 Sep 12 23:59:22.000588 extend-filesystems[1528]: Found loop4 Sep 12 23:59:22.000588 extend-filesystems[1528]: Found loop5 Sep 12 23:59:22.000588 extend-filesystems[1528]: Found sr0 Sep 12 23:59:22.000588 extend-filesystems[1528]: Found vda Sep 12 23:59:22.000588 extend-filesystems[1528]: Found vda1 Sep 12 23:59:22.000588 extend-filesystems[1528]: Found vda2 Sep 12 23:59:22.000588 extend-filesystems[1528]: Found vda3 Sep 12 23:59:22.000588 extend-filesystems[1528]: Found usr Sep 12 23:59:22.000588 extend-filesystems[1528]: Found vda4 Sep 12 23:59:22.000588 extend-filesystems[1528]: Found vda6 Sep 12 23:59:22.000588 extend-filesystems[1528]: Found vda7 Sep 12 23:59:22.000588 extend-filesystems[1528]: Found vda9 Sep 12 23:59:22.000588 extend-filesystems[1528]: Checking size of /dev/vda9 Sep 12 23:59:22.034626 extend-filesystems[1528]: Resized partition /dev/vda9 Sep 12 23:59:22.001720 dbus-daemon[1524]: [system] SELinux support is enabled Sep 12 23:59:22.047356 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 23:59:22.011654 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 23:59:22.047493 extend-filesystems[1558]: resize2fs 1.47.1 (20-May-2024) Sep 12 23:59:22.276683 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1256) Sep 12 23:59:22.014199 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 23:59:22.018395 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 23:59:22.279844 update_engine[1550]: I20250912 23:59:22.071419 1550 main.cc:92] Flatcar Update Engine starting Sep 12 23:59:22.279844 update_engine[1550]: I20250912 23:59:22.073022 1550 update_check_scheduler.cc:74] Next update check in 2m39s Sep 12 23:59:22.026842 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 23:59:22.036923 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 23:59:22.291486 jq[1554]: true Sep 12 23:59:22.053801 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 23:59:22.055910 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 23:59:22.273764 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 23:59:22.274239 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 23:59:22.276101 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 23:59:22.278427 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 23:59:22.278739 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 23:59:22.299174 jq[1570]: true Sep 12 23:59:22.310643 (ntainerd)[1571]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 23:59:22.313910 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 23:59:22.314289 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 23:59:22.322251 systemd-logind[1545]: Watching system buttons on /dev/input/event1 (Power Button) Sep 12 23:59:22.322284 systemd-logind[1545]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 23:59:22.323707 systemd-logind[1545]: New seat seat0. 
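Editor's note: the extend-filesystems run above grows the root filesystem online via resize2fs; with the 4 KiB block size the kernel reports, the resize works out as follows:

    block = 4096                      # "(4k) blocks" per the EXT4-fs line above
    old_blocks, new_blocks = 553472, 1864699
    gib = 2**30
    print(f"{old_blocks * block / gib:.2f} GiB -> {new_blocks * block / gib:.2f} GiB")
    # 2.11 GiB -> 7.11 GiB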
Sep 12 23:59:22.326327 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 23:59:22.335106 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 23:59:22.335609 dbus-daemon[1524]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 23:59:22.336656 tar[1569]: linux-amd64/helm Sep 12 23:59:22.347801 systemd[1]: Started update-engine.service - Update Engine. Sep 12 23:59:22.416188 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 23:59:22.423398 extend-filesystems[1558]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 23:59:22.423398 extend-filesystems[1558]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 23:59:22.423398 extend-filesystems[1558]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 23:59:22.428542 sshd_keygen[1560]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 23:59:22.416456 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 23:59:22.434718 extend-filesystems[1528]: Resized filesystem in /dev/vda9 Sep 12 23:59:22.416634 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 23:59:22.418249 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 23:59:22.419200 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 23:59:22.422642 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 23:59:22.433967 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 23:59:22.443292 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 23:59:22.443693 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 23:59:22.549431 bash[1604]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:59:22.551767 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 23:59:22.555854 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 23:59:22.560491 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 23:59:22.572359 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 23:59:22.575800 locksmithd[1606]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 23:59:22.594867 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 23:59:22.595602 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 23:59:22.604239 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 23:59:22.637161 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 23:59:22.656753 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 23:59:22.663068 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 23:59:22.667442 systemd[1]: Reached target getty.target - Login Prompts. 
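The extend-filesystems entries above record an online ext4 resize of /dev/vda9 from 553472 to 1864699 4-KiB blocks, i.e. roughly 2.1 GiB grown to about 7.1 GiB while mounted at /. A small sketch (assuming a Linux target) that reads the live figure back with syscall.Statfs, the same statistic resize2fs just changed:

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/", &st); err != nil {
		panic(err)
	}
	// After the resize the log expects 1864699 blocks of 4096 bytes,
	// about 7.1 GiB for the root filesystem.
	total := float64(st.Blocks) * float64(st.Bsize)
	fmt.Printf("/: %d blocks x %d bytes = %.2f GiB\n",
		st.Blocks, st.Bsize, total/(1<<30))
}
```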
Sep 12 23:59:23.041822 containerd[1571]: time="2025-09-12T23:59:23.041510897Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 23:59:23.086616 containerd[1571]: time="2025-09-12T23:59:23.086229831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:59:23.088801 containerd[1571]: time="2025-09-12T23:59:23.088763814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:59:23.088883 containerd[1571]: time="2025-09-12T23:59:23.088868309Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 23:59:23.088977 containerd[1571]: time="2025-09-12T23:59:23.088959541Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 23:59:23.089294 containerd[1571]: time="2025-09-12T23:59:23.089272508Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 23:59:23.089364 containerd[1571]: time="2025-09-12T23:59:23.089350674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 23:59:23.089518 containerd[1571]: time="2025-09-12T23:59:23.089494965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:59:23.089578 containerd[1571]: time="2025-09-12T23:59:23.089565267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:59:23.090008 containerd[1571]: time="2025-09-12T23:59:23.089972590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:59:23.090579 containerd[1571]: time="2025-09-12T23:59:23.090079301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 23:59:23.090579 containerd[1571]: time="2025-09-12T23:59:23.090100280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:59:23.090579 containerd[1571]: time="2025-09-12T23:59:23.090111651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 23:59:23.090579 containerd[1571]: time="2025-09-12T23:59:23.090241966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:59:23.090579 containerd[1571]: time="2025-09-12T23:59:23.090542970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:59:23.090983 containerd[1571]: time="2025-09-12T23:59:23.090958209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:59:23.091075 containerd[1571]: time="2025-09-12T23:59:23.091034432Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 23:59:23.091676 containerd[1571]: time="2025-09-12T23:59:23.091654785Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 23:59:23.091865 containerd[1571]: time="2025-09-12T23:59:23.091845904Z" level=info msg="metadata content store policy set" policy=shared Sep 12 23:59:23.104212 containerd[1571]: time="2025-09-12T23:59:23.104069317Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 23:59:23.104674 containerd[1571]: time="2025-09-12T23:59:23.104619238Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 23:59:23.104847 containerd[1571]: time="2025-09-12T23:59:23.104811018Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 23:59:23.107071 containerd[1571]: time="2025-09-12T23:59:23.104944408Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 23:59:23.107071 containerd[1571]: time="2025-09-12T23:59:23.104987228Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 23:59:23.107071 containerd[1571]: time="2025-09-12T23:59:23.105291018Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 23:59:23.107071 containerd[1571]: time="2025-09-12T23:59:23.105830870Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 23:59:23.107071 containerd[1571]: time="2025-09-12T23:59:23.106025505Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 23:59:23.107071 containerd[1571]: time="2025-09-12T23:59:23.106201756Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 23:59:23.107071 containerd[1571]: time="2025-09-12T23:59:23.106226102Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 23:59:23.107071 containerd[1571]: time="2025-09-12T23:59:23.106244787Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 23:59:23.107071 containerd[1571]: time="2025-09-12T23:59:23.106292837Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 23:59:23.107071 containerd[1571]: time="2025-09-12T23:59:23.106345726Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 23:59:23.107071 containerd[1571]: time="2025-09-12T23:59:23.106392173Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 23:59:23.107071 containerd[1571]: time="2025-09-12T23:59:23.106427489Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 12 23:59:23.107071 containerd[1571]: time="2025-09-12T23:59:23.106449120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 23:59:23.107071 containerd[1571]: time="2025-09-12T23:59:23.106474588Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 23:59:23.107802 containerd[1571]: time="2025-09-12T23:59:23.106497601Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 23:59:23.107802 containerd[1571]: time="2025-09-12T23:59:23.106549448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.107802 containerd[1571]: time="2025-09-12T23:59:23.106574475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.107802 containerd[1571]: time="2025-09-12T23:59:23.106614670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.107802 containerd[1571]: time="2025-09-12T23:59:23.106645007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.107802 containerd[1571]: time="2025-09-12T23:59:23.106691845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.107802 containerd[1571]: time="2025-09-12T23:59:23.106752559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.107802 containerd[1571]: time="2025-09-12T23:59:23.106804637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.107802 containerd[1571]: time="2025-09-12T23:59:23.106872494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.107802 containerd[1571]: time="2025-09-12T23:59:23.106908391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.107802 containerd[1571]: time="2025-09-12T23:59:23.106941473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.107802 containerd[1571]: time="2025-09-12T23:59:23.106958044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.107802 containerd[1571]: time="2025-09-12T23:59:23.106986668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.107802 containerd[1571]: time="2025-09-12T23:59:23.107002958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.107802 containerd[1571]: time="2025-09-12T23:59:23.107019720Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 23:59:23.108490 containerd[1571]: time="2025-09-12T23:59:23.108452677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.108784 containerd[1571]: time="2025-09-12T23:59:23.108580817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 12 23:59:23.108972 containerd[1571]: time="2025-09-12T23:59:23.108926516Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 23:59:23.109442 containerd[1571]: time="2025-09-12T23:59:23.109402539Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 23:59:23.109766 containerd[1571]: time="2025-09-12T23:59:23.109684848Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 23:59:23.109923 containerd[1571]: time="2025-09-12T23:59:23.109891125Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 23:59:23.110168 containerd[1571]: time="2025-09-12T23:59:23.110114053Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 23:59:23.110375 containerd[1571]: time="2025-09-12T23:59:23.110312715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.110623 containerd[1571]: time="2025-09-12T23:59:23.110589184Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 23:59:23.110807 containerd[1571]: time="2025-09-12T23:59:23.110780663Z" level=info msg="NRI interface is disabled by configuration." Sep 12 23:59:23.111054 containerd[1571]: time="2025-09-12T23:59:23.110965099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 23:59:23.112993 containerd[1571]: time="2025-09-12T23:59:23.112747211Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 23:59:23.113837 containerd[1571]: time="2025-09-12T23:59:23.113480636Z" level=info msg="Connect containerd service" Sep 12 23:59:23.113837 containerd[1571]: time="2025-09-12T23:59:23.113772514Z" level=info msg="using legacy CRI server" Sep 12 23:59:23.114248 containerd[1571]: time="2025-09-12T23:59:23.114084178Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 23:59:23.115163 containerd[1571]: time="2025-09-12T23:59:23.115024271Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 23:59:23.118127 containerd[1571]: time="2025-09-12T23:59:23.117559856Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:59:23.194855 containerd[1571]: time="2025-09-12T23:59:23.194780793Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 23:59:23.195999 containerd[1571]: time="2025-09-12T23:59:23.195227721Z" level=info msg="Start subscribing containerd event" Sep 12 23:59:23.196106 containerd[1571]: time="2025-09-12T23:59:23.196080250Z" level=info msg="Start recovering state" Sep 12 23:59:23.196240 containerd[1571]: time="2025-09-12T23:59:23.195973039Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 23:59:23.196419 containerd[1571]: time="2025-09-12T23:59:23.196227446Z" level=info msg="Start event monitor" Sep 12 23:59:23.196498 containerd[1571]: time="2025-09-12T23:59:23.196485660Z" level=info msg="Start snapshots syncer" Sep 12 23:59:23.196750 containerd[1571]: time="2025-09-12T23:59:23.196733044Z" level=info msg="Start cni network conf syncer for default" Sep 12 23:59:23.196826 containerd[1571]: time="2025-09-12T23:59:23.196814437Z" level=info msg="Start streaming server" Sep 12 23:59:23.197189 containerd[1571]: time="2025-09-12T23:59:23.197154505Z" level=info msg="containerd successfully booted in 0.161040s" Sep 12 23:59:23.197604 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 23:59:23.382757 tar[1569]: linux-amd64/LICENSE Sep 12 23:59:23.383270 tar[1569]: linux-amd64/README.md Sep 12 23:59:23.401602 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 23:59:24.107072 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 23:59:24.142541 systemd[1]: Started sshd@0-10.0.0.22:22-10.0.0.1:46712.service - OpenSSH per-connection server daemon (10.0.0.1:46712). 
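At this point containerd reports booting in 0.161040s and serving on /run/containerd/containerd.sock. A minimal client sketch against that socket using the official github.com/containerd/containerd Go package; the "k8s.io" namespace is an assumption based on the CRI plugin loaded above, not something the log states.

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Socket path taken from the "serving..." entries in the log.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Assumed namespace: the CRI plugin keeps Kubernetes resources
	// under "k8s.io".
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	v, err := client.Version(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println("containerd version:", v.Version) // the log shows v1.7.21
}
```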
Sep 12 23:59:24.195350 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 46712 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 12 23:59:24.197844 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:24.216848 systemd-logind[1545]: New session 1 of user core. Sep 12 23:59:24.222732 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 23:59:24.224980 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 23:59:24.227085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:24.229650 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 23:59:24.234565 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:59:24.251686 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 23:59:24.265384 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 23:59:24.269687 (systemd)[1665]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 23:59:24.437090 systemd[1665]: Queued start job for default target default.target. Sep 12 23:59:24.437565 systemd[1665]: Created slice app.slice - User Application Slice. Sep 12 23:59:24.437583 systemd[1665]: Reached target paths.target - Paths. Sep 12 23:59:24.437596 systemd[1665]: Reached target timers.target - Timers. Sep 12 23:59:24.458263 systemd[1665]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 23:59:24.467895 systemd[1665]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 23:59:24.468002 systemd[1665]: Reached target sockets.target - Sockets. Sep 12 23:59:24.468023 systemd[1665]: Reached target basic.target - Basic System. Sep 12 23:59:24.468115 systemd[1665]: Reached target default.target - Main User Target. Sep 12 23:59:24.468163 systemd[1665]: Startup finished in 189ms. Sep 12 23:59:24.468842 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 23:59:24.472304 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 23:59:24.479560 systemd[1]: Startup finished in 8.779s (kernel) + 7.274s (userspace) = 16.054s. Sep 12 23:59:24.538530 systemd[1]: Started sshd@1-10.0.0.22:22-10.0.0.1:46724.service - OpenSSH per-connection server daemon (10.0.0.1:46724). Sep 12 23:59:24.590749 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 46724 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 12 23:59:24.591877 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:24.599497 systemd-logind[1545]: New session 2 of user core. Sep 12 23:59:24.647626 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 23:59:24.712803 sshd[1683]: pam_unix(sshd:session): session closed for user core Sep 12 23:59:24.720577 systemd[1]: Started sshd@2-10.0.0.22:22-10.0.0.1:46726.service - OpenSSH per-connection server daemon (10.0.0.1:46726). Sep 12 23:59:24.721379 systemd[1]: sshd@1-10.0.0.22:22-10.0.0.1:46724.service: Deactivated successfully. Sep 12 23:59:24.725442 systemd-logind[1545]: Session 2 logged out. Waiting for processes to exit. Sep 12 23:59:24.726303 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 23:59:24.731853 systemd-logind[1545]: Removed session 2. 
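The "Startup finished" line above reports 8.779s (kernel) + 7.274s (userspace) = 16.054s, while the components sum to 16.053s; systemd rounds each figure independently from microsecond-precision counters, so a one-millisecond mismatch is normal. A throwaway sketch that parses the line as logged and shows the discrepancy:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

func main() {
	// Sample copied verbatim from the boot log above.
	line := "Startup finished in 8.779s (kernel) + 7.274s (userspace) = 16.054s."
	re := regexp.MustCompile(`([0-9.]+)s`)
	matches := re.FindAllStringSubmatch(line, -1) // components, then the total
	var sum float64
	for _, m := range matches[:len(matches)-1] {
		v, _ := strconv.ParseFloat(m[1], 64)
		sum += v
	}
	fmt.Printf("sum of components: %.3fs, logged total: %s\n",
		sum, matches[len(matches)-1][0])
	// Prints 16.053s vs 16.054s: each figure was rounded independently.
}
```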
Sep 12 23:59:24.766365 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 46726 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 12 23:59:24.768406 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:24.773426 systemd-logind[1545]: New session 3 of user core. Sep 12 23:59:24.783329 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 23:59:24.838907 sshd[1692]: pam_unix(sshd:session): session closed for user core Sep 12 23:59:24.879545 systemd[1]: Started sshd@3-10.0.0.22:22-10.0.0.1:46728.service - OpenSSH per-connection server daemon (10.0.0.1:46728). Sep 12 23:59:24.880283 systemd[1]: sshd@2-10.0.0.22:22-10.0.0.1:46726.service: Deactivated successfully. Sep 12 23:59:24.883372 systemd-logind[1545]: Session 3 logged out. Waiting for processes to exit. Sep 12 23:59:24.887578 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 23:59:24.888996 systemd-logind[1545]: Removed session 3. Sep 12 23:59:24.919105 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 46728 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 12 23:59:24.919735 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:24.925140 systemd-logind[1545]: New session 4 of user core. Sep 12 23:59:24.940598 kubelet[1660]: E0912 23:59:24.940534 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:59:24.941500 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 23:59:24.944078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:59:24.944365 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:59:25.004577 sshd[1700]: pam_unix(sshd:session): session closed for user core Sep 12 23:59:25.019754 systemd[1]: Started sshd@4-10.0.0.22:22-10.0.0.1:46736.service - OpenSSH per-connection server daemon (10.0.0.1:46736). Sep 12 23:59:25.021066 systemd[1]: sshd@3-10.0.0.22:22-10.0.0.1:46728.service: Deactivated successfully. Sep 12 23:59:25.024425 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 23:59:25.025272 systemd-logind[1545]: Session 4 logged out. Waiting for processes to exit. Sep 12 23:59:25.027177 systemd-logind[1545]: Removed session 4. Sep 12 23:59:25.059827 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 46736 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 12 23:59:25.062554 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:25.068423 systemd-logind[1545]: New session 5 of user core. Sep 12 23:59:25.078386 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 23:59:25.144688 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 23:59:25.145201 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:59:25.176241 sudo[1718]: pam_unix(sudo:session): session closed for user root Sep 12 23:59:25.178662 sshd[1711]: pam_unix(sshd:session): session closed for user core Sep 12 23:59:25.188300 systemd[1]: Started sshd@5-10.0.0.22:22-10.0.0.1:46744.service - OpenSSH per-connection server daemon (10.0.0.1:46744). 
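The kubelet failure above, and each retry that follows, traces to a single missing file: /var/lib/kubelet/config.yaml, which is normally written by kubeadm during init/join and does not exist yet on this fresh node. A stripped-down sketch, not kubelet source, of the load-and-exit step that systemd records as status=1/FAILURE:

```go
package main

import (
	"fmt"
	"os"
)

// loadKubeletConfig mimics only the failing step seen in the log: read the
// config file or fail. A real kubelet would unmarshal the YAML into a
// KubeletConfiguration object afterwards.
func loadKubeletConfig(path string) error {
	if _, err := os.ReadFile(path); err != nil {
		return fmt.Errorf("failed to load Kubelet config file %q: %w", path, err)
	}
	return nil
}

func main() {
	if err := loadKubeletConfig("/var/lib/kubelet/config.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // what systemd logs as status=1/FAILURE
	}
}
```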
Sep 12 23:59:25.188797 systemd[1]: sshd@4-10.0.0.22:22-10.0.0.1:46736.service: Deactivated successfully. Sep 12 23:59:25.191508 systemd-logind[1545]: Session 5 logged out. Waiting for processes to exit. Sep 12 23:59:25.193239 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 23:59:25.194179 systemd-logind[1545]: Removed session 5. Sep 12 23:59:25.229712 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 46744 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 12 23:59:25.231724 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:25.236230 systemd-logind[1545]: New session 6 of user core. Sep 12 23:59:25.246317 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 23:59:25.305062 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 23:59:25.305561 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:59:25.310912 sudo[1728]: pam_unix(sudo:session): session closed for user root Sep 12 23:59:25.319125 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 23:59:25.319521 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:59:25.340447 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 23:59:25.342556 auditctl[1731]: No rules Sep 12 23:59:25.344577 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:59:25.345131 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 23:59:25.347568 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:59:25.383527 augenrules[1750]: No rules Sep 12 23:59:25.385907 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:59:25.387486 sudo[1727]: pam_unix(sudo:session): session closed for user root Sep 12 23:59:25.389644 sshd[1720]: pam_unix(sshd:session): session closed for user core Sep 12 23:59:25.402395 systemd[1]: Started sshd@6-10.0.0.22:22-10.0.0.1:46750.service - OpenSSH per-connection server daemon (10.0.0.1:46750). Sep 12 23:59:25.403081 systemd[1]: sshd@5-10.0.0.22:22-10.0.0.1:46744.service: Deactivated successfully. Sep 12 23:59:25.407120 systemd-logind[1545]: Session 6 logged out. Waiting for processes to exit. Sep 12 23:59:25.407907 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 23:59:25.409330 systemd-logind[1545]: Removed session 6. Sep 12 23:59:25.438828 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 46750 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 12 23:59:25.440546 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:25.445668 systemd-logind[1545]: New session 7 of user core. Sep 12 23:59:25.459431 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 23:59:25.515002 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 23:59:25.515459 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:59:26.250287 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 12 23:59:26.250589 (dockerd)[1781]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 23:59:26.743902 dockerd[1781]: time="2025-09-12T23:59:26.743717657Z" level=info msg="Starting up" Sep 12 23:59:27.584063 dockerd[1781]: time="2025-09-12T23:59:27.583987713Z" level=info msg="Loading containers: start." Sep 12 23:59:27.801083 kernel: Initializing XFRM netlink socket Sep 12 23:59:27.892314 systemd-networkd[1247]: docker0: Link UP Sep 12 23:59:27.920164 dockerd[1781]: time="2025-09-12T23:59:27.920116895Z" level=info msg="Loading containers: done." Sep 12 23:59:27.944068 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2703626333-merged.mount: Deactivated successfully. Sep 12 23:59:27.946478 dockerd[1781]: time="2025-09-12T23:59:27.946430152Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 23:59:27.946581 dockerd[1781]: time="2025-09-12T23:59:27.946554806Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 23:59:27.946734 dockerd[1781]: time="2025-09-12T23:59:27.946711730Z" level=info msg="Daemon has completed initialization" Sep 12 23:59:27.986298 dockerd[1781]: time="2025-09-12T23:59:27.986210434Z" level=info msg="API listen on /run/docker.sock" Sep 12 23:59:27.986487 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 23:59:28.993160 containerd[1571]: time="2025-09-12T23:59:28.993069705Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 23:59:29.670022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2629445241.mount: Deactivated successfully. 
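dockerd above ends its startup with "API listen on /run/docker.sock". A standard-library-only sketch that health-checks the daemon over that Unix socket via the Engine API's /_ping endpoint; the "http://docker" host is a dummy, since the dialer ignores it when connecting to a Unix socket.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Route every request to the daemon's socket from the log.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}
	resp, err := client.Get("http://docker/_ping") // host part is ignored
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s %s\n", resp.Status, body) // a healthy daemon answers "OK"
}
```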
Sep 12 23:59:31.218653 containerd[1571]: time="2025-09-12T23:59:31.218577656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:31.219305 containerd[1571]: time="2025-09-12T23:59:31.219242533Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 23:59:31.220756 containerd[1571]: time="2025-09-12T23:59:31.220704916Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:31.224466 containerd[1571]: time="2025-09-12T23:59:31.224405746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:31.225891 containerd[1571]: time="2025-09-12T23:59:31.225852559Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.232714457s" Sep 12 23:59:31.225947 containerd[1571]: time="2025-09-12T23:59:31.225905068Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 23:59:31.226978 containerd[1571]: time="2025-09-12T23:59:31.226948785Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 23:59:32.751406 containerd[1571]: time="2025-09-12T23:59:32.751303234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:32.752292 containerd[1571]: time="2025-09-12T23:59:32.752222388Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 23:59:32.753654 containerd[1571]: time="2025-09-12T23:59:32.753616572Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:32.758800 containerd[1571]: time="2025-09-12T23:59:32.758733158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:32.760713 containerd[1571]: time="2025-09-12T23:59:32.760658949Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.533670761s" Sep 12 23:59:32.760713 containerd[1571]: time="2025-09-12T23:59:32.760713792Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 
23:59:32.761326 containerd[1571]: time="2025-09-12T23:59:32.761291445Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 23:59:34.362295 containerd[1571]: time="2025-09-12T23:59:34.362201623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:34.363016 containerd[1571]: time="2025-09-12T23:59:34.362916654Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 23:59:34.364610 containerd[1571]: time="2025-09-12T23:59:34.364561128Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:34.367770 containerd[1571]: time="2025-09-12T23:59:34.367727607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:34.369280 containerd[1571]: time="2025-09-12T23:59:34.369252476Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.607907441s" Sep 12 23:59:34.369341 containerd[1571]: time="2025-09-12T23:59:34.369285659Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 23:59:34.369942 containerd[1571]: time="2025-09-12T23:59:34.369912424Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 23:59:35.195149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 23:59:35.264924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:35.690179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:35.696151 (kubelet)[2005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:59:35.917541 kubelet[2005]: E0912 23:59:35.917423 2005 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:59:35.924522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:59:35.924902 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:59:36.314918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount598324467.mount: Deactivated successfully. 
Sep 12 23:59:37.034218 containerd[1571]: time="2025-09-12T23:59:37.034084237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:37.035037 containerd[1571]: time="2025-09-12T23:59:37.034830937Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 23:59:37.036605 containerd[1571]: time="2025-09-12T23:59:37.036510036Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:37.039183 containerd[1571]: time="2025-09-12T23:59:37.039128327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:37.041263 containerd[1571]: time="2025-09-12T23:59:37.041217194Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.671250208s" Sep 12 23:59:37.041432 containerd[1571]: time="2025-09-12T23:59:37.041280913Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 23:59:37.043439 containerd[1571]: time="2025-09-12T23:59:37.043375632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 23:59:37.805625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1685374045.mount: Deactivated successfully. 
Sep 12 23:59:39.530814 containerd[1571]: time="2025-09-12T23:59:39.530716113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:39.531770 containerd[1571]: time="2025-09-12T23:59:39.531618385Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 23:59:39.533554 containerd[1571]: time="2025-09-12T23:59:39.533516966Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:39.538035 containerd[1571]: time="2025-09-12T23:59:39.537976309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:39.539557 containerd[1571]: time="2025-09-12T23:59:39.539522769Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.496076264s" Sep 12 23:59:39.539611 containerd[1571]: time="2025-09-12T23:59:39.539559307Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 23:59:39.540228 containerd[1571]: time="2025-09-12T23:59:39.540198646Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 23:59:43.171069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3633439488.mount: Deactivated successfully. 
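Each pull record above logs a byte count and a wall-clock duration, enough for a rough throughput estimate. A sketch with the five figures copied from this boot; note the byte counts are the "bytes read" values (compressed transfer), not unpacked image sizes.

```go
package main

import "fmt"

func main() {
	// Byte counts and pull durations copied verbatim from the log above.
	pulls := []struct {
		image string
		bytes float64
		secs  float64
	}{
		{"kube-apiserver:v1.31.13", 28117124, 2.232714457},
		{"kube-controller-manager:v1.31.13", 24716632, 1.533670761},
		{"kube-scheduler:v1.31.13", 18787698, 1.607907441},
		{"kube-proxy:v1.31.13", 30410252, 2.671250208},
		{"coredns:v1.11.3", 18565241, 2.496076264},
	}
	for _, p := range pulls {
		fmt.Printf("%-34s %6.2f MiB/s\n", p.image, p.bytes/p.secs/(1<<20))
	}
}
```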
Sep 12 23:59:43.258027 containerd[1571]: time="2025-09-12T23:59:43.257950448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:43.345102 containerd[1571]: time="2025-09-12T23:59:43.344977495Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 23:59:43.388991 containerd[1571]: time="2025-09-12T23:59:43.388879416Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:43.512983 containerd[1571]: time="2025-09-12T23:59:43.512900991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:43.514017 containerd[1571]: time="2025-09-12T23:59:43.513976608Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 3.973740181s" Sep 12 23:59:43.514110 containerd[1571]: time="2025-09-12T23:59:43.514017014Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 23:59:43.514615 containerd[1571]: time="2025-09-12T23:59:43.514574189Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 23:59:45.479118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2539503102.mount: Deactivated successfully. Sep 12 23:59:45.954027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 23:59:45.982710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:46.214453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:46.222144 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:59:46.302746 kubelet[2115]: E0912 23:59:46.302653 2115 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:59:46.308144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:59:46.308531 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 23:59:49.116292 containerd[1571]: time="2025-09-12T23:59:49.116198224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:49.120367 containerd[1571]: time="2025-09-12T23:59:49.120174611Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 23:59:49.123664 containerd[1571]: time="2025-09-12T23:59:49.123597941Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:49.147675 containerd[1571]: time="2025-09-12T23:59:49.147561120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:49.149106 containerd[1571]: time="2025-09-12T23:59:49.148955034Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.634334508s" Sep 12 23:59:49.149106 containerd[1571]: time="2025-09-12T23:59:49.149014005Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 23:59:51.949800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:51.960260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:51.987630 systemd[1]: Reloading requested from client PID 2186 ('systemctl') (unit session-7.scope)... Sep 12 23:59:51.987646 systemd[1]: Reloading... Sep 12 23:59:52.106098 zram_generator::config[2226]: No configuration found. Sep 12 23:59:52.307392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:59:52.387814 systemd[1]: Reloading finished in 399 ms. Sep 12 23:59:52.440524 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 23:59:52.440672 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 23:59:52.441234 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:52.456388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:52.639470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:52.646208 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:59:52.708615 kubelet[2285]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:59:52.708615 kubelet[2285]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 12 23:59:52.708615 kubelet[2285]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:59:52.709257 kubelet[2285]: I0912 23:59:52.708649 2285 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:59:52.871834 kubelet[2285]: I0912 23:59:52.871769 2285 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 23:59:52.871834 kubelet[2285]: I0912 23:59:52.871812 2285 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:59:52.872151 kubelet[2285]: I0912 23:59:52.872120 2285 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 23:59:52.922314 kubelet[2285]: E0912 23:59:52.922148 2285 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:52.923286 kubelet[2285]: I0912 23:59:52.923254 2285 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:59:52.935333 kubelet[2285]: E0912 23:59:52.935278 2285 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 23:59:52.935333 kubelet[2285]: I0912 23:59:52.935315 2285 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 23:59:52.942476 kubelet[2285]: I0912 23:59:52.942427 2285 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:59:52.943621 kubelet[2285]: I0912 23:59:52.943584 2285 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 23:59:52.943825 kubelet[2285]: I0912 23:59:52.943769 2285 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:59:52.944061 kubelet[2285]: I0912 23:59:52.943810 2285 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 23:59:52.944209 kubelet[2285]: I0912 23:59:52.944079 2285 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:59:52.944209 kubelet[2285]: I0912 23:59:52.944095 2285 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 23:59:52.944305 kubelet[2285]: I0912 23:59:52.944283 2285 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:59:52.947689 kubelet[2285]: I0912 23:59:52.947660 2285 kubelet.go:408] "Attempting to sync node with API server" Sep 12 23:59:52.947689 kubelet[2285]: I0912 23:59:52.947689 2285 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:59:52.947772 kubelet[2285]: I0912 23:59:52.947742 2285 kubelet.go:314] "Adding apiserver pod source" Sep 12 23:59:52.947793 kubelet[2285]: I0912 23:59:52.947778 2285 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:59:52.951558 kubelet[2285]: I0912 23:59:52.951539 2285 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 23:59:52.953775 kubelet[2285]: I0912 23:59:52.951916 2285 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:59:52.953775 kubelet[2285]: W0912 23:59:52.953381 2285 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
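The repeated "dial tcp 10.0.0.22:6443: connect: connection refused" errors above and below are expected at this stage: the kubelet comes up before anything listens on the API server port, which is the normal chicken-and-egg on a node that will run the control plane as static pods. A trivial probe sketch that reproduces the same dial:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint copied from the log; timeout is an arbitrary choice.
	conn, err := net.DialTimeout("tcp", "10.0.0.22:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // matches the refusals logged
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
```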
Sep 12 23:59:52.955079 kubelet[2285]: W0912 23:59:52.954539 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Sep 12 23:59:52.955079 kubelet[2285]: E0912 23:59:52.954616 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:52.955079 kubelet[2285]: W0912 23:59:52.954626 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Sep 12 23:59:52.955079 kubelet[2285]: E0912 23:59:52.954694 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:52.956489 kubelet[2285]: I0912 23:59:52.955599 2285 server.go:1274] "Started kubelet" Sep 12 23:59:52.956489 kubelet[2285]: I0912 23:59:52.955929 2285 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:59:52.956489 kubelet[2285]: I0912 23:59:52.956205 2285 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:59:52.956489 kubelet[2285]: I0912 23:59:52.956422 2285 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:59:52.957258 kubelet[2285]: I0912 23:59:52.957227 2285 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:59:52.957427 kubelet[2285]: I0912 23:59:52.957401 2285 server.go:449] "Adding debug handlers to kubelet server" Sep 12 23:59:52.959398 kubelet[2285]: I0912 23:59:52.959375 2285 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:59:52.961831 kubelet[2285]: E0912 23:59:52.961797 2285 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:59:52.963463 kubelet[2285]: I0912 23:59:52.962191 2285 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 23:59:52.963463 kubelet[2285]: I0912 23:59:52.962313 2285 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 23:59:52.963463 kubelet[2285]: I0912 23:59:52.962367 2285 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:59:52.963463 kubelet[2285]: W0912 23:59:52.962657 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Sep 12 23:59:52.963463 kubelet[2285]: E0912 23:59:52.962703 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:52.963463 kubelet[2285]: E0912 23:59:52.962916 2285 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:59:52.963463 kubelet[2285]: E0912 23:59:52.962986 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="200ms" Sep 12 23:59:52.963854 kubelet[2285]: I0912 23:59:52.963833 2285 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:59:52.963854 kubelet[2285]: I0912 23:59:52.963849 2285 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:59:52.963950 kubelet[2285]: I0912 23:59:52.963929 2285 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:59:52.963982 kubelet[2285]: E0912 23:59:52.962547 2285 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.22:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864ae79da268145 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 23:59:52.955568453 +0000 UTC m=+0.297021422,LastTimestamp:2025-09-12 23:59:52.955568453 +0000 UTC m=+0.297021422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 23:59:52.992973 kubelet[2285]: I0912 23:59:52.992911 2285 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:59:52.995214 kubelet[2285]: I0912 23:59:52.994620 2285 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 23:59:52.995214 kubelet[2285]: I0912 23:59:52.994657 2285 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 23:59:52.995214 kubelet[2285]: I0912 23:59:52.994687 2285 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 23:59:52.995214 kubelet[2285]: E0912 23:59:52.994730 2285 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:59:52.995658 kubelet[2285]: W0912 23:59:52.995630 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Sep 12 23:59:52.995708 kubelet[2285]: E0912 23:59:52.995694 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:53.000106 kubelet[2285]: I0912 23:59:53.000078 2285 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 23:59:53.000106 kubelet[2285]: I0912 23:59:53.000093 2285 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 23:59:53.000106 kubelet[2285]: I0912 23:59:53.000110 2285 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:59:53.063161 kubelet[2285]: E0912 23:59:53.063083 2285 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:59:53.095424 kubelet[2285]: E0912 23:59:53.095322 2285 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 23:59:53.163892 kubelet[2285]: E0912 23:59:53.163823 2285 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:59:53.164338 kubelet[2285]: E0912 23:59:53.164311 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="400ms" Sep 12 23:59:53.264467 kubelet[2285]: E0912 23:59:53.264394 2285 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:59:53.295576 kubelet[2285]: E0912 23:59:53.295509 2285 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 23:59:53.365026 kubelet[2285]: E0912 23:59:53.364966 2285 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:59:53.465300 kubelet[2285]: E0912 23:59:53.465232 2285 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:59:53.493831 kubelet[2285]: I0912 23:59:53.493750 2285 policy_none.go:49] "None policy: Start" Sep 12 23:59:53.494895 kubelet[2285]: I0912 23:59:53.494854 2285 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 23:59:53.494945 kubelet[2285]: I0912 23:59:53.494911 2285 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:59:53.503662 kubelet[2285]: I0912 23:59:53.503629 2285 manager.go:513] "Failed to read data 
from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:59:53.503952 kubelet[2285]: I0912 23:59:53.503929 2285 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:59:53.503990 kubelet[2285]: I0912 23:59:53.503955 2285 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:59:53.504983 kubelet[2285]: I0912 23:59:53.504961 2285 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:59:53.505922 kubelet[2285]: E0912 23:59:53.505890 2285 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 23:59:53.565274 kubelet[2285]: E0912 23:59:53.565092 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="800ms" Sep 12 23:59:53.605906 kubelet[2285]: I0912 23:59:53.605860 2285 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 23:59:53.606308 kubelet[2285]: E0912 23:59:53.606266 2285 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Sep 12 23:59:53.807851 kubelet[2285]: I0912 23:59:53.807785 2285 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 23:59:53.808417 kubelet[2285]: E0912 23:59:53.808205 2285 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Sep 12 23:59:53.867915 kubelet[2285]: I0912 23:59:53.867713 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 23:59:53.867915 kubelet[2285]: I0912 23:59:53.867769 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3dbc552506d757b7a16d8ab3d415538b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3dbc552506d757b7a16d8ab3d415538b\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:59:53.867915 kubelet[2285]: I0912 23:59:53.867790 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3dbc552506d757b7a16d8ab3d415538b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3dbc552506d757b7a16d8ab3d415538b\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:59:53.867915 kubelet[2285]: I0912 23:59:53.867806 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3dbc552506d757b7a16d8ab3d415538b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3dbc552506d757b7a16d8ab3d415538b\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:59:53.867915 kubelet[2285]: I0912 23:59:53.867844 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:59:53.868200 kubelet[2285]: I0912 23:59:53.867859 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:59:53.868200 kubelet[2285]: I0912 23:59:53.867874 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:59:53.868200 kubelet[2285]: I0912 23:59:53.867888 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:59:53.868200 kubelet[2285]: I0912 23:59:53.867907 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:59:53.889323 kubelet[2285]: W0912 23:59:53.889233 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Sep 12 23:59:53.889450 kubelet[2285]: E0912 23:59:53.889331 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:54.001806 kubelet[2285]: E0912 23:59:54.001758 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:59:54.002575 containerd[1571]: time="2025-09-12T23:59:54.002523867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3dbc552506d757b7a16d8ab3d415538b,Namespace:kube-system,Attempt:0,}" Sep 12 23:59:54.003610 kubelet[2285]: E0912 23:59:54.003587 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:59:54.003953 containerd[1571]: time="2025-09-12T23:59:54.003922340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 12 23:59:54.004025 kubelet[2285]: E0912 
23:59:54.003990 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:59:54.004303 containerd[1571]: time="2025-09-12T23:59:54.004278845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 12 23:59:54.173583 kubelet[2285]: W0912 23:59:54.173342 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Sep 12 23:59:54.173583 kubelet[2285]: E0912 23:59:54.173479 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:54.210406 kubelet[2285]: I0912 23:59:54.210345 2285 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 23:59:54.210734 kubelet[2285]: E0912 23:59:54.210693 2285 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Sep 12 23:59:54.238687 kubelet[2285]: W0912 23:59:54.238597 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Sep 12 23:59:54.238687 kubelet[2285]: E0912 23:59:54.238676 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:54.298463 kubelet[2285]: W0912 23:59:54.298329 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Sep 12 23:59:54.298463 kubelet[2285]: E0912 23:59:54.298448 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:54.366882 kubelet[2285]: E0912 23:59:54.366791 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="1.6s" Sep 12 23:59:55.012717 kubelet[2285]: I0912 23:59:55.012665 2285 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 23:59:55.016073 kubelet[2285]: E0912 23:59:55.015997 2285 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Sep 12 23:59:55.116310 kubelet[2285]: E0912 23:59:55.116230 2285 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:55.619850 kubelet[2285]: W0912 23:59:55.619774 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Sep 12 23:59:55.619850 kubelet[2285]: E0912 23:59:55.619826 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:55.967984 kubelet[2285]: E0912 23:59:55.967905 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="3.2s" Sep 12 23:59:56.192297 kubelet[2285]: W0912 23:59:56.192243 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Sep 12 23:59:56.192807 kubelet[2285]: E0912 23:59:56.192303 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:56.618354 kubelet[2285]: I0912 23:59:56.618309 2285 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 23:59:56.618882 kubelet[2285]: E0912 23:59:56.618815 2285 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Sep 12 23:59:56.746077 kubelet[2285]: W0912 23:59:56.745969 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Sep 12 23:59:56.746077 kubelet[2285]: E0912 23:59:56.746033 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:56.847499 kubelet[2285]: W0912 23:59:56.847411 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Sep 12 23:59:56.847499 kubelet[2285]: E0912 23:59:56.847483 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:57.046453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4130500343.mount: Deactivated successfully. Sep 12 23:59:57.054751 containerd[1571]: time="2025-09-12T23:59:57.054677462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:59:57.056794 containerd[1571]: time="2025-09-12T23:59:57.056729584Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:59:57.058326 containerd[1571]: time="2025-09-12T23:59:57.058273933Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:59:57.059523 containerd[1571]: time="2025-09-12T23:59:57.059492127Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:59:57.060697 containerd[1571]: time="2025-09-12T23:59:57.060637733Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:59:57.061525 containerd[1571]: time="2025-09-12T23:59:57.061473444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:59:57.062462 containerd[1571]: time="2025-09-12T23:59:57.062402214Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 23:59:57.064360 containerd[1571]: time="2025-09-12T23:59:57.064311392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:59:57.066417 containerd[1571]: time="2025-09-12T23:59:57.066373594Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.062051034s" Sep 12 23:59:57.067201 containerd[1571]: time="2025-09-12T23:59:57.067153417Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.063166354s" Sep 12 23:59:57.069684 containerd[1571]: 
time="2025-09-12T23:59:57.069653158Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.067028476s" Sep 12 23:59:57.334322 containerd[1571]: time="2025-09-12T23:59:57.334117328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:59:57.334322 containerd[1571]: time="2025-09-12T23:59:57.334174929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:59:57.334322 containerd[1571]: time="2025-09-12T23:59:57.334188765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:57.335535 containerd[1571]: time="2025-09-12T23:59:57.335228487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:59:57.335535 containerd[1571]: time="2025-09-12T23:59:57.335470241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:57.339072 containerd[1571]: time="2025-09-12T23:59:57.336953993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:59:57.339072 containerd[1571]: time="2025-09-12T23:59:57.337634056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:57.339072 containerd[1571]: time="2025-09-12T23:59:57.337955382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:57.348644 containerd[1571]: time="2025-09-12T23:59:57.347624320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:59:57.348644 containerd[1571]: time="2025-09-12T23:59:57.347689846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:59:57.348644 containerd[1571]: time="2025-09-12T23:59:57.347704293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:57.348644 containerd[1571]: time="2025-09-12T23:59:57.347840815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:57.437208 containerd[1571]: time="2025-09-12T23:59:57.437156228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8ec5381b5a39de26304c721335c1499e178a3dfcb4c02cf2e2ba39f587cfa9c\"" Sep 12 23:59:57.437607 containerd[1571]: time="2025-09-12T23:59:57.437578138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6839123a451769aa75eae77e8635627d7a4fe549a80ccc973f9c058b47abb4b0\"" Sep 12 23:59:57.439762 kubelet[2285]: E0912 23:59:57.439499 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:59:57.439762 kubelet[2285]: E0912 23:59:57.439563 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:59:57.443108 containerd[1571]: time="2025-09-12T23:59:57.443013332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3dbc552506d757b7a16d8ab3d415538b,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf872ee7159c9ac83e44d6a200adfc9abeac34ae7f302fd77e25e724c84a7825\"" Sep 12 23:59:57.444845 kubelet[2285]: E0912 23:59:57.444815 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:59:57.446399 containerd[1571]: time="2025-09-12T23:59:57.446373911Z" level=info msg="CreateContainer within sandbox \"d8ec5381b5a39de26304c721335c1499e178a3dfcb4c02cf2e2ba39f587cfa9c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 23:59:57.446461 containerd[1571]: time="2025-09-12T23:59:57.446438606Z" level=info msg="CreateContainer within sandbox \"cf872ee7159c9ac83e44d6a200adfc9abeac34ae7f302fd77e25e724c84a7825\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 23:59:57.446519 containerd[1571]: time="2025-09-12T23:59:57.446491817Z" level=info msg="CreateContainer within sandbox \"6839123a451769aa75eae77e8635627d7a4fe549a80ccc973f9c058b47abb4b0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 23:59:57.484178 containerd[1571]: time="2025-09-12T23:59:57.484115301Z" level=info msg="CreateContainer within sandbox \"cf872ee7159c9ac83e44d6a200adfc9abeac34ae7f302fd77e25e724c84a7825\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3fa56ed6783032b945c3dc92817b67a809d30359af008370202a9493af79a727\"" Sep 12 23:59:57.485201 containerd[1571]: time="2025-09-12T23:59:57.485100969Z" level=info msg="StartContainer for \"3fa56ed6783032b945c3dc92817b67a809d30359af008370202a9493af79a727\"" Sep 12 23:59:57.491278 containerd[1571]: time="2025-09-12T23:59:57.491229563Z" level=info msg="CreateContainer within sandbox \"6839123a451769aa75eae77e8635627d7a4fe549a80ccc973f9c058b47abb4b0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5b4ee38e3de015084b59b5af4427bca811b7f027755b39bbf9b00e469faa9835\"" Sep 12 23:59:57.491860 containerd[1571]: time="2025-09-12T23:59:57.491823992Z" level=info msg="StartContainer for 
\"5b4ee38e3de015084b59b5af4427bca811b7f027755b39bbf9b00e469faa9835\"" Sep 12 23:59:57.521517 containerd[1571]: time="2025-09-12T23:59:57.521351362Z" level=info msg="CreateContainer within sandbox \"d8ec5381b5a39de26304c721335c1499e178a3dfcb4c02cf2e2ba39f587cfa9c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4a764bd82aa200dfef0cb031d12e698912c055ca2cc70e0071bb3ee181625510\"" Sep 12 23:59:57.522403 containerd[1571]: time="2025-09-12T23:59:57.522349976Z" level=info msg="StartContainer for \"4a764bd82aa200dfef0cb031d12e698912c055ca2cc70e0071bb3ee181625510\"" Sep 12 23:59:57.603576 containerd[1571]: time="2025-09-12T23:59:57.602236883Z" level=info msg="StartContainer for \"5b4ee38e3de015084b59b5af4427bca811b7f027755b39bbf9b00e469faa9835\" returns successfully" Sep 12 23:59:57.605023 containerd[1571]: time="2025-09-12T23:59:57.604973908Z" level=info msg="StartContainer for \"3fa56ed6783032b945c3dc92817b67a809d30359af008370202a9493af79a727\" returns successfully" Sep 12 23:59:57.620389 containerd[1571]: time="2025-09-12T23:59:57.620335232Z" level=info msg="StartContainer for \"4a764bd82aa200dfef0cb031d12e698912c055ca2cc70e0071bb3ee181625510\" returns successfully" Sep 12 23:59:58.017520 kubelet[2285]: E0912 23:59:58.017412 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:59:58.019944 kubelet[2285]: E0912 23:59:58.019789 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:59:58.024158 kubelet[2285]: E0912 23:59:58.023905 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:59:59.025246 kubelet[2285]: E0912 23:59:59.025187 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:59:59.025746 kubelet[2285]: E0912 23:59:59.025386 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:59:59.477868 kubelet[2285]: E0912 23:59:59.477802 2285 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 23:59:59.821392 kubelet[2285]: I0912 23:59:59.821011 2285 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 23:59:59.861418 kubelet[2285]: I0912 23:59:59.860869 2285 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 23:59:59.861418 kubelet[2285]: E0912 23:59:59.860937 2285 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 23:59:59.878715 kubelet[2285]: E0912 23:59:59.878666 2285 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:59:59.979876 kubelet[2285]: E0912 23:59:59.979771 2285 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:00:00.080958 kubelet[2285]: E0913 00:00:00.080790 2285 kubelet_node_status.go:453] "Error getting the current 
node from lister" err="node \"localhost\" not found" Sep 13 00:00:00.221546 kubelet[2285]: E0913 00:00:00.220011 2285 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:00:00.955584 kubelet[2285]: I0913 00:00:00.955223 2285 apiserver.go:52] "Watching apiserver" Sep 13 00:00:00.968358 kubelet[2285]: I0913 00:00:00.968291 2285 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:00:01.558898 kubelet[2285]: E0913 00:00:01.558848 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:01.602580 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Sep 13 00:00:01.616078 systemd[1]: logrotate.service: Deactivated successfully. Sep 13 00:00:01.619382 systemd[1]: Reloading requested from client PID 2566 ('systemctl') (unit session-7.scope)... Sep 13 00:00:01.619401 systemd[1]: Reloading... Sep 13 00:00:01.711088 zram_generator::config[2610]: No configuration found. Sep 13 00:00:01.847384 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:00:01.938005 systemd[1]: Reloading finished in 318 ms. Sep 13 00:00:01.978613 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:00:02.003757 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:00:02.004332 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:00:02.010503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:00:02.194755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:00:02.200980 (kubelet)[2662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:00:02.265439 kubelet[2662]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:00:02.265439 kubelet[2662]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:00:02.265439 kubelet[2662]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:00:02.265975 kubelet[2662]: I0913 00:00:02.265518 2662 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:00:02.271884 kubelet[2662]: I0913 00:00:02.271834 2662 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:00:02.271884 kubelet[2662]: I0913 00:00:02.271873 2662 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:00:02.272237 kubelet[2662]: I0913 00:00:02.272213 2662 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:00:02.273607 kubelet[2662]: I0913 00:00:02.273582 2662 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:00:02.275594 kubelet[2662]: I0913 00:00:02.275564 2662 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:00:02.279011 kubelet[2662]: E0913 00:00:02.278949 2662 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:00:02.279011 kubelet[2662]: I0913 00:00:02.279002 2662 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:00:02.284261 kubelet[2662]: I0913 00:00:02.284215 2662 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:00:02.285068 kubelet[2662]: I0913 00:00:02.284692 2662 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:00:02.285068 kubelet[2662]: I0913 00:00:02.284836 2662 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:00:02.285169 kubelet[2662]: I0913 00:00:02.284890 2662 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:00:02.285252 kubelet[2662]: I0913 00:00:02.285181 2662 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:00:02.285252 kubelet[2662]: I0913 00:00:02.285193 2662 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:00:02.285252 kubelet[2662]: I0913 00:00:02.285222 2662 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:00:02.285405 kubelet[2662]: I0913 00:00:02.285358 2662 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:00:02.285485 kubelet[2662]: I0913 00:00:02.285437 2662 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:00:02.285540 kubelet[2662]: I0913 00:00:02.285526 2662 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:00:02.285566 kubelet[2662]: I0913 00:00:02.285547 2662 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:00:02.287115 kubelet[2662]: I0913 00:00:02.287084 2662 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:00:02.287525 kubelet[2662]: I0913 00:00:02.287492 2662 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:00:02.288057 kubelet[2662]: I0913 00:00:02.287938 2662 server.go:1274] "Started kubelet" Sep 13 00:00:02.288681 kubelet[2662]: I0913 00:00:02.288641 2662 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:00:02.288979 kubelet[2662]: I0913 00:00:02.288936 2662 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:00:02.290069 kubelet[2662]: I0913 00:00:02.288953 2662 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:00:02.290904 kubelet[2662]: I0913 00:00:02.290247 2662 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:00:02.296659 kubelet[2662]: I0913 00:00:02.292887 2662 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:00:02.296659 kubelet[2662]: I0913 00:00:02.294569 2662 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:00:02.296875 kubelet[2662]: E0913 00:00:02.296821 2662 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:00:02.298318 kubelet[2662]: I0913 00:00:02.298300 2662 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:00:02.298468 kubelet[2662]: I0913 00:00:02.298456 2662 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:00:02.298669 kubelet[2662]: I0913 00:00:02.298655 2662 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:00:02.299273 kubelet[2662]: I0913 00:00:02.299257 2662 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:00:02.299427 kubelet[2662]: I0913 00:00:02.299410 2662 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:00:02.300786 kubelet[2662]: I0913 00:00:02.300770 2662 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:00:02.309880 kubelet[2662]: I0913 00:00:02.309810 2662 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:00:02.311675 kubelet[2662]: I0913 00:00:02.311641 2662 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:00:02.311721 kubelet[2662]: I0913 00:00:02.311686 2662 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:00:02.311766 kubelet[2662]: I0913 00:00:02.311721 2662 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:00:02.311825 kubelet[2662]: E0913 00:00:02.311790 2662 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:00:02.353436 kubelet[2662]: I0913 00:00:02.353399 2662 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:00:02.353436 kubelet[2662]: I0913 00:00:02.353424 2662 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:00:02.353581 kubelet[2662]: I0913 00:00:02.353463 2662 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:00:02.353671 kubelet[2662]: I0913 00:00:02.353654 2662 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:00:02.353695 kubelet[2662]: I0913 00:00:02.353669 2662 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:00:02.353695 kubelet[2662]: I0913 00:00:02.353690 2662 policy_none.go:49] "None policy: Start" Sep 13 00:00:02.354265 kubelet[2662]: I0913 00:00:02.354248 2662 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:00:02.354265 kubelet[2662]: I0913 00:00:02.354268 2662 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:00:02.354414 kubelet[2662]: I0913 00:00:02.354397 2662 state_mem.go:75] "Updated machine memory state" Sep 13 00:00:02.356099 kubelet[2662]: I0913 00:00:02.356080 2662 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:00:02.356300 kubelet[2662]: I0913 00:00:02.356277 2662 eviction_manager.go:189] "Eviction manager: 
starting control loop" Sep 13 00:00:02.356325 kubelet[2662]: I0913 00:00:02.356294 2662 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:00:02.356880 kubelet[2662]: I0913 00:00:02.356860 2662 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:00:02.419878 kubelet[2662]: E0913 00:00:02.419816 2662 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:00:02.463960 kubelet[2662]: I0913 00:00:02.463298 2662 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:00:02.470774 kubelet[2662]: I0913 00:00:02.470728 2662 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 13 00:00:02.470868 kubelet[2662]: I0913 00:00:02.470849 2662 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:00:02.500344 kubelet[2662]: I0913 00:00:02.500260 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3dbc552506d757b7a16d8ab3d415538b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3dbc552506d757b7a16d8ab3d415538b\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:00:02.500344 kubelet[2662]: I0913 00:00:02.500325 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3dbc552506d757b7a16d8ab3d415538b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3dbc552506d757b7a16d8ab3d415538b\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:00:02.500344 kubelet[2662]: I0913 00:00:02.500357 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:00:02.500613 kubelet[2662]: I0913 00:00:02.500381 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:00:02.500613 kubelet[2662]: I0913 00:00:02.500401 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3dbc552506d757b7a16d8ab3d415538b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3dbc552506d757b7a16d8ab3d415538b\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:00:02.500613 kubelet[2662]: I0913 00:00:02.500417 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:00:02.500613 kubelet[2662]: I0913 00:00:02.500433 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:00:02.500613 kubelet[2662]: I0913 00:00:02.500474 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:00:02.500785 kubelet[2662]: I0913 00:00:02.500493 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:00:02.718090 kubelet[2662]: E0913 00:00:02.717876 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:02.720707 kubelet[2662]: E0913 00:00:02.720578 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:02.720707 kubelet[2662]: E0913 00:00:02.720615 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:03.287728 kubelet[2662]: I0913 00:00:03.287640 2662 apiserver.go:52] "Watching apiserver" Sep 13 00:00:03.299066 kubelet[2662]: I0913 00:00:03.299010 2662 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:00:03.326781 kubelet[2662]: E0913 00:00:03.326705 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:03.327164 kubelet[2662]: E0913 00:00:03.327095 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:03.333796 kubelet[2662]: E0913 00:00:03.333736 2662 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:00:03.333796 kubelet[2662]: E0913 00:00:03.334011 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:03.348143 kubelet[2662]: I0913 00:00:03.348018 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.347989913 podStartE2EDuration="1.347989913s" podCreationTimestamp="2025-09-13 00:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:00:03.347638253 +0000 UTC m=+1.141486699" watchObservedRunningTime="2025-09-13 00:00:03.347989913 +0000 UTC m=+1.141838359" Sep 13 00:00:03.722244 kubelet[2662]: I0913 00:00:03.721627 2662 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.721607975 podStartE2EDuration="2.721607975s" podCreationTimestamp="2025-09-13 00:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:00:03.721414908 +0000 UTC m=+1.515263354" watchObservedRunningTime="2025-09-13 00:00:03.721607975 +0000 UTC m=+1.515456421" Sep 13 00:00:03.722244 kubelet[2662]: I0913 00:00:03.721875 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7218697330000001 podStartE2EDuration="1.721869733s" podCreationTimestamp="2025-09-13 00:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:00:03.524822839 +0000 UTC m=+1.318671285" watchObservedRunningTime="2025-09-13 00:00:03.721869733 +0000 UTC m=+1.515718179" Sep 13 00:00:04.327248 kubelet[2662]: E0913 00:00:04.327191 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:05.329288 kubelet[2662]: E0913 00:00:05.329226 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:07.217006 kubelet[2662]: I0913 00:00:07.216945 2662 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:00:07.217685 containerd[1571]: time="2025-09-13T00:00:07.217612912Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:00:07.218007 kubelet[2662]: I0913 00:00:07.217981 2662 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:00:07.454835 update_engine[1550]: I20250913 00:00:07.454689 1550 update_attempter.cc:509] Updating boot flags... 
Sep 13 00:00:07.496570 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2718) Sep 13 00:00:07.534203 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2721) Sep 13 00:00:07.577102 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2721) Sep 13 00:00:07.904851 kubelet[2662]: E0913 00:00:07.904645 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:08.137771 kubelet[2662]: I0913 00:00:08.137695 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42145e33-50cc-4079-998e-e9393d873805-lib-modules\") pod \"kube-proxy-bxwnl\" (UID: \"42145e33-50cc-4079-998e-e9393d873805\") " pod="kube-system/kube-proxy-bxwnl" Sep 13 00:00:08.137771 kubelet[2662]: I0913 00:00:08.137752 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7dds\" (UniqueName: \"kubernetes.io/projected/42145e33-50cc-4079-998e-e9393d873805-kube-api-access-p7dds\") pod \"kube-proxy-bxwnl\" (UID: \"42145e33-50cc-4079-998e-e9393d873805\") " pod="kube-system/kube-proxy-bxwnl" Sep 13 00:00:08.137771 kubelet[2662]: I0913 00:00:08.137778 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42145e33-50cc-4079-998e-e9393d873805-kube-proxy\") pod \"kube-proxy-bxwnl\" (UID: \"42145e33-50cc-4079-998e-e9393d873805\") " pod="kube-system/kube-proxy-bxwnl" Sep 13 00:00:08.138028 kubelet[2662]: I0913 00:00:08.137798 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42145e33-50cc-4079-998e-e9393d873805-xtables-lock\") pod \"kube-proxy-bxwnl\" (UID: \"42145e33-50cc-4079-998e-e9393d873805\") " pod="kube-system/kube-proxy-bxwnl" Sep 13 00:00:08.338064 kubelet[2662]: E0913 00:00:08.337989 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:08.340647 kubelet[2662]: I0913 00:00:08.339565 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a817601-dfc9-43ba-a7f4-4a0cdebddd5f-var-lib-calico\") pod \"tigera-operator-58fc44c59b-7qtlb\" (UID: \"0a817601-dfc9-43ba-a7f4-4a0cdebddd5f\") " pod="tigera-operator/tigera-operator-58fc44c59b-7qtlb" Sep 13 00:00:08.340647 kubelet[2662]: I0913 00:00:08.339614 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qlzc\" (UniqueName: \"kubernetes.io/projected/0a817601-dfc9-43ba-a7f4-4a0cdebddd5f-kube-api-access-5qlzc\") pod \"tigera-operator-58fc44c59b-7qtlb\" (UID: \"0a817601-dfc9-43ba-a7f4-4a0cdebddd5f\") " pod="tigera-operator/tigera-operator-58fc44c59b-7qtlb" Sep 13 00:00:08.376095 kubelet[2662]: E0913 00:00:08.375856 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:08.376904 containerd[1571]: time="2025-09-13T00:00:08.376852395Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bxwnl,Uid:42145e33-50cc-4079-998e-e9393d873805,Namespace:kube-system,Attempt:0,}" Sep 13 00:00:08.414739 containerd[1571]: time="2025-09-13T00:00:08.414560658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:00:08.414739 containerd[1571]: time="2025-09-13T00:00:08.414675446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:00:08.414739 containerd[1571]: time="2025-09-13T00:00:08.414694212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:08.415191 containerd[1571]: time="2025-09-13T00:00:08.414858022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:08.468656 containerd[1571]: time="2025-09-13T00:00:08.468604044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bxwnl,Uid:42145e33-50cc-4079-998e-e9393d873805,Namespace:kube-system,Attempt:0,} returns sandbox id \"9045341d622e5a7c0dfae7914440e92f8d13b187d9f38d0bdfddbcf2cc89c2c9\"" Sep 13 00:00:08.469445 kubelet[2662]: E0913 00:00:08.469406 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:08.477243 containerd[1571]: time="2025-09-13T00:00:08.476408126Z" level=info msg="CreateContainer within sandbox \"9045341d622e5a7c0dfae7914440e92f8d13b187d9f38d0bdfddbcf2cc89c2c9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:00:08.595652 containerd[1571]: time="2025-09-13T00:00:08.595511157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-7qtlb,Uid:0a817601-dfc9-43ba-a7f4-4a0cdebddd5f,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:00:08.668813 containerd[1571]: time="2025-09-13T00:00:08.668753703Z" level=info msg="CreateContainer within sandbox \"9045341d622e5a7c0dfae7914440e92f8d13b187d9f38d0bdfddbcf2cc89c2c9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"916c6d32e3527b09ce4202a3e41cb10eb69e7f8eae939dc1efb4a3b541bb25d7\"" Sep 13 00:00:08.669546 containerd[1571]: time="2025-09-13T00:00:08.669497744Z" level=info msg="StartContainer for \"916c6d32e3527b09ce4202a3e41cb10eb69e7f8eae939dc1efb4a3b541bb25d7\"" Sep 13 00:00:08.693820 containerd[1571]: time="2025-09-13T00:00:08.693683395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:00:08.693820 containerd[1571]: time="2025-09-13T00:00:08.693773406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:00:08.693820 containerd[1571]: time="2025-09-13T00:00:08.693794336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:08.694103 containerd[1571]: time="2025-09-13T00:00:08.693942436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:08.749417 containerd[1571]: time="2025-09-13T00:00:08.749356440Z" level=info msg="StartContainer for \"916c6d32e3527b09ce4202a3e41cb10eb69e7f8eae939dc1efb4a3b541bb25d7\" returns successfully" Sep 13 00:00:08.775900 containerd[1571]: time="2025-09-13T00:00:08.775831942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-7qtlb,Uid:0a817601-dfc9-43ba-a7f4-4a0cdebddd5f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"54e9210969b454268f955ba0432353e00368e46c4016e7817deef103d59168d5\"" Sep 13 00:00:08.779926 containerd[1571]: time="2025-09-13T00:00:08.779294666Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:00:09.340862 kubelet[2662]: E0913 00:00:09.340824 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:10.653615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1346304821.mount: Deactivated successfully. Sep 13 00:00:10.924820 kubelet[2662]: E0913 00:00:10.924572 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:10.938345 kubelet[2662]: I0913 00:00:10.938027 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bxwnl" podStartSLOduration=2.937995105 podStartE2EDuration="2.937995105s" podCreationTimestamp="2025-09-13 00:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:00:09.350529431 +0000 UTC m=+7.144377877" watchObservedRunningTime="2025-09-13 00:00:10.937995105 +0000 UTC m=+8.731843551" Sep 13 00:00:11.220156 containerd[1571]: time="2025-09-13T00:00:11.220031328Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:11.221166 containerd[1571]: time="2025-09-13T00:00:11.221113635Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:00:11.222370 containerd[1571]: time="2025-09-13T00:00:11.222308917Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:11.225137 containerd[1571]: time="2025-09-13T00:00:11.225094517Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:11.226011 containerd[1571]: time="2025-09-13T00:00:11.225970864Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.446643467s" Sep 13 00:00:11.226011 containerd[1571]: time="2025-09-13T00:00:11.226000090Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:00:11.228681 containerd[1571]: 
time="2025-09-13T00:00:11.228645235Z" level=info msg="CreateContainer within sandbox \"54e9210969b454268f955ba0432353e00368e46c4016e7817deef103d59168d5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:00:11.244282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1141637526.mount: Deactivated successfully. Sep 13 00:00:11.246257 containerd[1571]: time="2025-09-13T00:00:11.246177624Z" level=info msg="CreateContainer within sandbox \"54e9210969b454268f955ba0432353e00368e46c4016e7817deef103d59168d5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ab430704b24683058cc06b47ac7b87c434775e25f5cce27fcaad19212f83ec43\"" Sep 13 00:00:11.246918 containerd[1571]: time="2025-09-13T00:00:11.246884271Z" level=info msg="StartContainer for \"ab430704b24683058cc06b47ac7b87c434775e25f5cce27fcaad19212f83ec43\"" Sep 13 00:00:11.640297 containerd[1571]: time="2025-09-13T00:00:11.640097940Z" level=info msg="StartContainer for \"ab430704b24683058cc06b47ac7b87c434775e25f5cce27fcaad19212f83ec43\" returns successfully" Sep 13 00:00:11.644093 kubelet[2662]: E0913 00:00:11.643647 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:13.058223 kubelet[2662]: E0913 00:00:13.057102 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:13.087751 kubelet[2662]: I0913 00:00:13.087671 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-7qtlb" podStartSLOduration=2.638386789 podStartE2EDuration="5.087648014s" podCreationTimestamp="2025-09-13 00:00:08 +0000 UTC" firstStartedPulling="2025-09-13 00:00:08.777722836 +0000 UTC m=+6.571571282" lastFinishedPulling="2025-09-13 00:00:11.226984061 +0000 UTC m=+9.020832507" observedRunningTime="2025-09-13 00:00:12.928470677 +0000 UTC m=+10.722319123" watchObservedRunningTime="2025-09-13 00:00:13.087648014 +0000 UTC m=+10.881496460" Sep 13 00:00:13.652214 kubelet[2662]: E0913 00:00:13.652167 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:14.070865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab430704b24683058cc06b47ac7b87c434775e25f5cce27fcaad19212f83ec43-rootfs.mount: Deactivated successfully. 
Sep 13 00:00:14.079467 containerd[1571]: time="2025-09-13T00:00:14.079058912Z" level=info msg="shim disconnected" id=ab430704b24683058cc06b47ac7b87c434775e25f5cce27fcaad19212f83ec43 namespace=k8s.io Sep 13 00:00:14.079467 containerd[1571]: time="2025-09-13T00:00:14.079144023Z" level=warning msg="cleaning up after shim disconnected" id=ab430704b24683058cc06b47ac7b87c434775e25f5cce27fcaad19212f83ec43 namespace=k8s.io Sep 13 00:00:14.079467 containerd[1571]: time="2025-09-13T00:00:14.079161436Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:00:14.655506 kubelet[2662]: I0913 00:00:14.655455 2662 scope.go:117] "RemoveContainer" containerID="ab430704b24683058cc06b47ac7b87c434775e25f5cce27fcaad19212f83ec43" Sep 13 00:00:14.657741 containerd[1571]: time="2025-09-13T00:00:14.657500970Z" level=info msg="CreateContainer within sandbox \"54e9210969b454268f955ba0432353e00368e46c4016e7817deef103d59168d5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 13 00:00:14.674695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2299437270.mount: Deactivated successfully. Sep 13 00:00:14.843200 containerd[1571]: time="2025-09-13T00:00:14.843132374Z" level=info msg="CreateContainer within sandbox \"54e9210969b454268f955ba0432353e00368e46c4016e7817deef103d59168d5\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2ba3ccf7f3a16c452f7cdb73bf2d4aa9ab12573d4236aa3a2fd19c6ba3464363\"" Sep 13 00:00:14.844186 containerd[1571]: time="2025-09-13T00:00:14.844125120Z" level=info msg="StartContainer for \"2ba3ccf7f3a16c452f7cdb73bf2d4aa9ab12573d4236aa3a2fd19c6ba3464363\"" Sep 13 00:00:14.909212 containerd[1571]: time="2025-09-13T00:00:14.908961114Z" level=info msg="StartContainer for \"2ba3ccf7f3a16c452f7cdb73bf2d4aa9ab12573d4236aa3a2fd19c6ba3464363\" returns successfully" Sep 13 00:00:17.207426 sudo[1763]: pam_unix(sudo:session): session closed for user root Sep 13 00:00:17.211636 sshd[1756]: pam_unix(sshd:session): session closed for user core Sep 13 00:00:17.219651 systemd[1]: sshd@6-10.0.0.22:22-10.0.0.1:46750.service: Deactivated successfully. Sep 13 00:00:17.224865 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:00:17.226275 systemd-logind[1545]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:00:17.228061 systemd-logind[1545]: Removed session 7. 
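The shim-disconnected / "cleaning up dead shim" sequence above means the tigera-operator container exited; kubelet's scope.go removes the dead container and recreates it with Attempt:1. This first restart lands promptly because no backoff has accumulated yet; repeated exits would fall onto the familiar doubling CrashLoopBackOff schedule, sketched here (simplified, not kubelet's code, which keys the backoff per pod and container):

```go
package main

import (
	"fmt"
	"time"
)

// restartDelay sketches kubelet's CrashLoopBackOff schedule: no delay for
// the first restart, then 10s doubling per subsequent failure, capped at
// five minutes.
func restartDelay(failures int) time.Duration {
	if failures <= 1 {
		return 0 // first restart is prompt, as with Attempt:1 above
	}
	d := 10 * time.Second
	for i := 2; i < failures; i++ {
		d *= 2
	}
	if d > 5*time.Minute {
		d = 5 * time.Minute
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("failure #%d -> wait %s before StartContainer\n", n, restartDelay(n))
	}
}
```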
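Interleaved with the Calico volume attachments that follow is a long run of paired FlexVolume errors. Calico's manifest points kubelet at a nodeagent~uds FlexVolume driver directory, but the uds executable has not been installed yet, so every plugin probe both fails to execute the driver and then fails to unmarshal its (empty) stdout as JSON. A sketch of the two-step driver call that yields exactly this pair of messages; the driverStatus shape is trimmed down from the full FlexVolume DriverStatus for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a trimmed-down version of the JSON a FlexVolume driver
// must print on stdout, e.g. {"status":"Success"}.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// callDriver mirrors the two failures logged above: run the driver, try to
// unmarshal stdout regardless, then report the exec error if there was one.
func callDriver(path string, args ...string) (*driverStatus, error) {
	out, execErr := exec.Command(path, args...).Output()
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Empty stdout produces "unexpected end of JSON input",
		// the driver-call.go:262 message in the log.
		fmt.Printf("failed to unmarshal output %q: %v\n", out, err)
	}
	if execErr != nil {
		// The driver-call.go:149 message: the executable is missing.
		return nil, fmt.Errorf("driver call failed: executable: %s, args: %v, error: %w, output: %q",
			path, args, execErr, out)
	}
	return &st, nil
}

func main() {
	_, err := callDriver(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init")
	if err != nil {
		fmt.Println(err)
	}
}
```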
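Also ahead is a "network is not ready … cni plugin not initialized" error for csi-node-driver, which ties back to containerd's earlier "wait for other system components to drop the config": until calico-node writes a CNI config file, the runtime reports NetworkReady=false and pods that need pod networking cannot sync. A trivial readiness probe over the conventional config directory (path assumed from stock containerd/Calico defaults, not stated in this log):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfDir is containerd's default CNI config directory; calico-node's
// install step eventually writes a .conflist here (path assumed).
const cniConfDir = "/etc/cni/net.d"

func main() {
	confs, _ := filepath.Glob(filepath.Join(cniConfDir, "*.conflist"))
	more, _ := filepath.Glob(filepath.Join(cniConfDir, "*.conf"))
	confs = append(confs, more...)
	if len(confs) == 0 {
		// While this holds, pods needing pod networking (csi-node-driver
		// below) stay pending with "cni plugin not initialized".
		fmt.Fprintln(os.Stderr, "no CNI config yet: NetworkReady=false")
		os.Exit(1)
	}
	fmt.Println("CNI config present:", confs)
}
```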
Sep 13 00:00:22.338468 kubelet[2662]: I0913 00:00:22.338395 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05111568-a13b-4655-a15d-d819dbfcfe4c-tigera-ca-bundle\") pod \"calico-typha-657d5fd7bd-w7ln2\" (UID: \"05111568-a13b-4655-a15d-d819dbfcfe4c\") " pod="calico-system/calico-typha-657d5fd7bd-w7ln2" Sep 13 00:00:22.338468 kubelet[2662]: I0913 00:00:22.338449 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/05111568-a13b-4655-a15d-d819dbfcfe4c-typha-certs\") pod \"calico-typha-657d5fd7bd-w7ln2\" (UID: \"05111568-a13b-4655-a15d-d819dbfcfe4c\") " pod="calico-system/calico-typha-657d5fd7bd-w7ln2" Sep 13 00:00:22.338468 kubelet[2662]: I0913 00:00:22.338473 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sghzp\" (UniqueName: \"kubernetes.io/projected/05111568-a13b-4655-a15d-d819dbfcfe4c-kube-api-access-sghzp\") pod \"calico-typha-657d5fd7bd-w7ln2\" (UID: \"05111568-a13b-4655-a15d-d819dbfcfe4c\") " pod="calico-system/calico-typha-657d5fd7bd-w7ln2" Sep 13 00:00:22.539790 kubelet[2662]: I0913 00:00:22.539716 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/302d68a1-0e2a-4e1c-9d5e-89c672df130e-flexvol-driver-host\") pod \"calico-node-x6l8h\" (UID: \"302d68a1-0e2a-4e1c-9d5e-89c672df130e\") " pod="calico-system/calico-node-x6l8h" Sep 13 00:00:22.539790 kubelet[2662]: I0913 00:00:22.539790 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/302d68a1-0e2a-4e1c-9d5e-89c672df130e-var-lib-calico\") pod \"calico-node-x6l8h\" (UID: \"302d68a1-0e2a-4e1c-9d5e-89c672df130e\") " pod="calico-system/calico-node-x6l8h" Sep 13 00:00:22.540140 kubelet[2662]: I0913 00:00:22.539814 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f8bs\" (UniqueName: \"kubernetes.io/projected/302d68a1-0e2a-4e1c-9d5e-89c672df130e-kube-api-access-5f8bs\") pod \"calico-node-x6l8h\" (UID: \"302d68a1-0e2a-4e1c-9d5e-89c672df130e\") " pod="calico-system/calico-node-x6l8h" Sep 13 00:00:22.540140 kubelet[2662]: I0913 00:00:22.539841 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/302d68a1-0e2a-4e1c-9d5e-89c672df130e-node-certs\") pod \"calico-node-x6l8h\" (UID: \"302d68a1-0e2a-4e1c-9d5e-89c672df130e\") " pod="calico-system/calico-node-x6l8h" Sep 13 00:00:22.540140 kubelet[2662]: I0913 00:00:22.539981 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/302d68a1-0e2a-4e1c-9d5e-89c672df130e-var-run-calico\") pod \"calico-node-x6l8h\" (UID: \"302d68a1-0e2a-4e1c-9d5e-89c672df130e\") " pod="calico-system/calico-node-x6l8h" Sep 13 00:00:22.540140 kubelet[2662]: I0913 00:00:22.540080 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/302d68a1-0e2a-4e1c-9d5e-89c672df130e-cni-bin-dir\") pod \"calico-node-x6l8h\" (UID: \"302d68a1-0e2a-4e1c-9d5e-89c672df130e\") " 
pod="calico-system/calico-node-x6l8h" Sep 13 00:00:22.540140 kubelet[2662]: I0913 00:00:22.540107 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/302d68a1-0e2a-4e1c-9d5e-89c672df130e-lib-modules\") pod \"calico-node-x6l8h\" (UID: \"302d68a1-0e2a-4e1c-9d5e-89c672df130e\") " pod="calico-system/calico-node-x6l8h" Sep 13 00:00:22.540277 kubelet[2662]: I0913 00:00:22.540128 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/302d68a1-0e2a-4e1c-9d5e-89c672df130e-policysync\") pod \"calico-node-x6l8h\" (UID: \"302d68a1-0e2a-4e1c-9d5e-89c672df130e\") " pod="calico-system/calico-node-x6l8h" Sep 13 00:00:22.540277 kubelet[2662]: I0913 00:00:22.540166 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/302d68a1-0e2a-4e1c-9d5e-89c672df130e-xtables-lock\") pod \"calico-node-x6l8h\" (UID: \"302d68a1-0e2a-4e1c-9d5e-89c672df130e\") " pod="calico-system/calico-node-x6l8h" Sep 13 00:00:22.540277 kubelet[2662]: I0913 00:00:22.540189 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/302d68a1-0e2a-4e1c-9d5e-89c672df130e-cni-log-dir\") pod \"calico-node-x6l8h\" (UID: \"302d68a1-0e2a-4e1c-9d5e-89c672df130e\") " pod="calico-system/calico-node-x6l8h" Sep 13 00:00:22.540277 kubelet[2662]: I0913 00:00:22.540219 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/302d68a1-0e2a-4e1c-9d5e-89c672df130e-cni-net-dir\") pod \"calico-node-x6l8h\" (UID: \"302d68a1-0e2a-4e1c-9d5e-89c672df130e\") " pod="calico-system/calico-node-x6l8h" Sep 13 00:00:22.540277 kubelet[2662]: I0913 00:00:22.540242 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/302d68a1-0e2a-4e1c-9d5e-89c672df130e-tigera-ca-bundle\") pod \"calico-node-x6l8h\" (UID: \"302d68a1-0e2a-4e1c-9d5e-89c672df130e\") " pod="calico-system/calico-node-x6l8h" Sep 13 00:00:22.577179 kubelet[2662]: E0913 00:00:22.577114 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:22.578160 containerd[1571]: time="2025-09-13T00:00:22.578093752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-657d5fd7bd-w7ln2,Uid:05111568-a13b-4655-a15d-d819dbfcfe4c,Namespace:calico-system,Attempt:0,}" Sep 13 00:00:22.633279 containerd[1571]: time="2025-09-13T00:00:22.633093965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:00:22.633279 containerd[1571]: time="2025-09-13T00:00:22.633155832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:00:22.633279 containerd[1571]: time="2025-09-13T00:00:22.633170048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:22.633649 containerd[1571]: time="2025-09-13T00:00:22.633268544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:22.642826 kubelet[2662]: E0913 00:00:22.642668 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.642826 kubelet[2662]: W0913 00:00:22.642690 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.642986 kubelet[2662]: E0913 00:00:22.642835 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.645103 kubelet[2662]: E0913 00:00:22.643119 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.645103 kubelet[2662]: W0913 00:00:22.643133 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.645103 kubelet[2662]: E0913 00:00:22.643144 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.645103 kubelet[2662]: E0913 00:00:22.643416 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.645103 kubelet[2662]: W0913 00:00:22.643425 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.645103 kubelet[2662]: E0913 00:00:22.643435 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.645103 kubelet[2662]: E0913 00:00:22.644024 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.645103 kubelet[2662]: W0913 00:00:22.644034 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.645103 kubelet[2662]: E0913 00:00:22.644058 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:22.645569 kubelet[2662]: E0913 00:00:22.645449 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.645569 kubelet[2662]: W0913 00:00:22.645461 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.645569 kubelet[2662]: E0913 00:00:22.645471 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.654968 kubelet[2662]: E0913 00:00:22.654908 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.654968 kubelet[2662]: W0913 00:00:22.654961 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.655255 kubelet[2662]: E0913 00:00:22.654990 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.655255 kubelet[2662]: E0913 00:00:22.655237 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.655255 kubelet[2662]: W0913 00:00:22.655247 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.655360 kubelet[2662]: E0913 00:00:22.655257 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:22.679391 kubelet[2662]: E0913 00:00:22.679063 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f" Sep 13 00:00:22.732127 containerd[1571]: time="2025-09-13T00:00:22.732075976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-657d5fd7bd-w7ln2,Uid:05111568-a13b-4655-a15d-d819dbfcfe4c,Namespace:calico-system,Attempt:0,} returns sandbox id \"05fb59fbcb77fe672b62b59c171c34a67308b20ba7403653818b08330f6c435f\"" Sep 13 00:00:22.737338 kubelet[2662]: E0913 00:00:22.737296 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:22.738757 kubelet[2662]: E0913 00:00:22.738731 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.738757 kubelet[2662]: W0913 00:00:22.738754 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.738894 kubelet[2662]: E0913 00:00:22.738778 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.739263 kubelet[2662]: E0913 00:00:22.739240 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.739263 kubelet[2662]: W0913 00:00:22.739260 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.739369 kubelet[2662]: E0913 00:00:22.739274 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.739626 kubelet[2662]: E0913 00:00:22.739606 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.739626 kubelet[2662]: W0913 00:00:22.739621 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.739742 kubelet[2662]: E0913 00:00:22.739634 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:22.739974 kubelet[2662]: E0913 00:00:22.739933 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.739974 kubelet[2662]: W0913 00:00:22.739965 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.739974 kubelet[2662]: E0913 00:00:22.739978 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.740377 kubelet[2662]: E0913 00:00:22.740359 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.740377 kubelet[2662]: W0913 00:00:22.740373 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.740476 kubelet[2662]: E0913 00:00:22.740389 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.740746 kubelet[2662]: E0913 00:00:22.740720 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.740746 kubelet[2662]: W0913 00:00:22.740744 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.740868 kubelet[2662]: E0913 00:00:22.740771 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.741243 kubelet[2662]: E0913 00:00:22.741094 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.741243 kubelet[2662]: W0913 00:00:22.741112 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.741243 kubelet[2662]: E0913 00:00:22.741127 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.741448 kubelet[2662]: E0913 00:00:22.741417 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.741448 kubelet[2662]: W0913 00:00:22.741433 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.741448 kubelet[2662]: E0913 00:00:22.741446 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:22.741759 kubelet[2662]: E0913 00:00:22.741742 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.741759 kubelet[2662]: W0913 00:00:22.741757 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.741818 kubelet[2662]: E0913 00:00:22.741770 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.742019 kubelet[2662]: E0913 00:00:22.742002 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.742107 kubelet[2662]: W0913 00:00:22.742017 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.742107 kubelet[2662]: E0913 00:00:22.742065 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.742351 kubelet[2662]: E0913 00:00:22.742332 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.742351 kubelet[2662]: W0913 00:00:22.742347 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.742416 kubelet[2662]: E0913 00:00:22.742360 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.742628 kubelet[2662]: E0913 00:00:22.742597 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.742628 kubelet[2662]: W0913 00:00:22.742612 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.742628 kubelet[2662]: E0913 00:00:22.742624 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.742906 kubelet[2662]: E0913 00:00:22.742888 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.742906 kubelet[2662]: W0913 00:00:22.742903 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.742995 kubelet[2662]: E0913 00:00:22.742916 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:22.743331 kubelet[2662]: E0913 00:00:22.743283 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.743331 kubelet[2662]: W0913 00:00:22.743300 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.743331 kubelet[2662]: E0913 00:00:22.743314 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.743635 kubelet[2662]: E0913 00:00:22.743545 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.743635 kubelet[2662]: W0913 00:00:22.743566 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.743635 kubelet[2662]: E0913 00:00:22.743577 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.743744 containerd[1571]: time="2025-09-13T00:00:22.743525351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:00:22.743825 kubelet[2662]: E0913 00:00:22.743807 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.743825 kubelet[2662]: W0913 00:00:22.743822 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.743922 kubelet[2662]: E0913 00:00:22.743833 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.744221 kubelet[2662]: E0913 00:00:22.744192 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.744221 kubelet[2662]: W0913 00:00:22.744211 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.744286 kubelet[2662]: E0913 00:00:22.744226 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.744516 kubelet[2662]: E0913 00:00:22.744499 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.744516 kubelet[2662]: W0913 00:00:22.744513 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.744516 kubelet[2662]: E0913 00:00:22.744525 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:22.744822 kubelet[2662]: E0913 00:00:22.744805 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.744822 kubelet[2662]: W0913 00:00:22.744819 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.744904 kubelet[2662]: E0913 00:00:22.744831 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.745159 kubelet[2662]: E0913 00:00:22.745131 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.745159 kubelet[2662]: W0913 00:00:22.745156 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.745299 kubelet[2662]: E0913 00:00:22.745169 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.745507 kubelet[2662]: E0913 00:00:22.745489 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.745507 kubelet[2662]: W0913 00:00:22.745504 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.745579 kubelet[2662]: E0913 00:00:22.745516 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.745579 kubelet[2662]: I0913 00:00:22.745552 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k5dj\" (UniqueName: \"kubernetes.io/projected/ba4c914c-84cd-4650-afa0-e3d23d56f99f-kube-api-access-6k5dj\") pod \"csi-node-driver-fqbfr\" (UID: \"ba4c914c-84cd-4650-afa0-e3d23d56f99f\") " pod="calico-system/csi-node-driver-fqbfr" Sep 13 00:00:22.745806 kubelet[2662]: E0913 00:00:22.745788 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.745806 kubelet[2662]: W0913 00:00:22.745803 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.745866 kubelet[2662]: E0913 00:00:22.745819 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:22.745866 kubelet[2662]: I0913 00:00:22.745838 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ba4c914c-84cd-4650-afa0-e3d23d56f99f-socket-dir\") pod \"csi-node-driver-fqbfr\" (UID: \"ba4c914c-84cd-4650-afa0-e3d23d56f99f\") " pod="calico-system/csi-node-driver-fqbfr" Sep 13 00:00:22.746160 kubelet[2662]: E0913 00:00:22.746141 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.746160 kubelet[2662]: W0913 00:00:22.746159 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.746243 kubelet[2662]: E0913 00:00:22.746178 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.746421 kubelet[2662]: E0913 00:00:22.746405 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.746421 kubelet[2662]: W0913 00:00:22.746418 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.746502 kubelet[2662]: E0913 00:00:22.746432 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.746704 kubelet[2662]: E0913 00:00:22.746687 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.746704 kubelet[2662]: W0913 00:00:22.746704 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.746788 kubelet[2662]: E0913 00:00:22.746719 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.746788 kubelet[2662]: I0913 00:00:22.746751 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ba4c914c-84cd-4650-afa0-e3d23d56f99f-registration-dir\") pod \"csi-node-driver-fqbfr\" (UID: \"ba4c914c-84cd-4650-afa0-e3d23d56f99f\") " pod="calico-system/csi-node-driver-fqbfr" Sep 13 00:00:22.747023 kubelet[2662]: E0913 00:00:22.746994 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.747023 kubelet[2662]: W0913 00:00:22.747011 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.747112 kubelet[2662]: E0913 00:00:22.747028 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:22.747332 kubelet[2662]: E0913 00:00:22.747311 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.747332 kubelet[2662]: W0913 00:00:22.747329 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.747390 kubelet[2662]: E0913 00:00:22.747349 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.747390 kubelet[2662]: I0913 00:00:22.747372 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba4c914c-84cd-4650-afa0-e3d23d56f99f-kubelet-dir\") pod \"csi-node-driver-fqbfr\" (UID: \"ba4c914c-84cd-4650-afa0-e3d23d56f99f\") " pod="calico-system/csi-node-driver-fqbfr" Sep 13 00:00:22.747622 kubelet[2662]: E0913 00:00:22.747603 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.747653 kubelet[2662]: W0913 00:00:22.747621 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.747653 kubelet[2662]: E0913 00:00:22.747638 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.747912 kubelet[2662]: E0913 00:00:22.747895 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.747936 kubelet[2662]: W0913 00:00:22.747910 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.747936 kubelet[2662]: E0913 00:00:22.747924 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.748241 kubelet[2662]: E0913 00:00:22.748220 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.748241 kubelet[2662]: W0913 00:00:22.748237 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.748322 kubelet[2662]: E0913 00:00:22.748256 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:22.748517 kubelet[2662]: E0913 00:00:22.748496 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.748517 kubelet[2662]: W0913 00:00:22.748512 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.748591 kubelet[2662]: E0913 00:00:22.748524 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.748899 kubelet[2662]: E0913 00:00:22.748877 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.748899 kubelet[2662]: W0913 00:00:22.748894 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.748979 kubelet[2662]: E0913 00:00:22.748908 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.749219 kubelet[2662]: E0913 00:00:22.749200 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.749219 kubelet[2662]: W0913 00:00:22.749214 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.749292 kubelet[2662]: E0913 00:00:22.749226 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.749292 kubelet[2662]: I0913 00:00:22.749251 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ba4c914c-84cd-4650-afa0-e3d23d56f99f-varrun\") pod \"csi-node-driver-fqbfr\" (UID: \"ba4c914c-84cd-4650-afa0-e3d23d56f99f\") " pod="calico-system/csi-node-driver-fqbfr" Sep 13 00:00:22.749516 kubelet[2662]: E0913 00:00:22.749498 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.749516 kubelet[2662]: W0913 00:00:22.749512 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.749585 kubelet[2662]: E0913 00:00:22.749523 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:22.749775 kubelet[2662]: E0913 00:00:22.749758 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.749775 kubelet[2662]: W0913 00:00:22.749771 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.749828 kubelet[2662]: E0913 00:00:22.749782 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.785020 containerd[1571]: time="2025-09-13T00:00:22.784946998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x6l8h,Uid:302d68a1-0e2a-4e1c-9d5e-89c672df130e,Namespace:calico-system,Attempt:0,}" Sep 13 00:00:22.850788 kubelet[2662]: E0913 00:00:22.850727 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.850788 kubelet[2662]: W0913 00:00:22.850762 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.850788 kubelet[2662]: E0913 00:00:22.850791 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.852181 kubelet[2662]: E0913 00:00:22.851319 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.852181 kubelet[2662]: W0913 00:00:22.851334 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.852181 kubelet[2662]: E0913 00:00:22.851354 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:22.852181 kubelet[2662]: E0913 00:00:22.851718 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:22.852181 kubelet[2662]: W0913 00:00:22.851752 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:22.852181 kubelet[2662]: E0913 00:00:22.851801 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 13 00:00:22.852181 kubelet[2662]: E0913 00:00:22.852101 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:00:22.852181 kubelet[2662]: W0913 00:00:22.852115 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:00:22.852181 kubelet[2662]: E0913 00:00:22.852136 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[last 3 messages repeated 23 more times through Sep 13 00:00:22.977]
Sep 13 00:00:23.365925 containerd[1571]: time="2025-09-13T00:00:23.365626487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:00:23.365925 containerd[1571]: time="2025-09-13T00:00:23.365707950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:00:23.365925 containerd[1571]: time="2025-09-13T00:00:23.365719252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:00:23.365925 containerd[1571]: time="2025-09-13T00:00:23.365838116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
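The burst above is the kubelet's FlexVolume prober at work: it rescans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds a nodeagent~uds directory whose uds driver binary is missing, gets empty stdout from the exec attempt, and then fails to parse that empty string as JSON, hence the paired "executable file not found in $PATH" and "unexpected end of JSON input" messages on every probe pass. For orientation, here is a minimal sketch of the JSON handshake driver-call.go expects a driver to print; it is illustrative only, not the actual nodeagent~uds driver, which on Calico nodes is typically installed by the pod2daemon-flexvol image pulled a few seconds later in this log.

    // Hedged sketch: a minimal FlexVolume driver entry point that would
    // satisfy the probe shown above. The kubelet execs the driver binary
    // with a subcommand ("init" first) and unmarshals whatever the driver
    // prints to stdout as JSON; empty stdout is what produces
    // "unexpected end of JSON input".
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON shape the kubelet's driver-call.go expects.
    type driverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"` // only meaningful for "init"
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // Report success and declare that no controller-side attach is needed.
            reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
        default:
            // Any call the driver does not implement must still answer with valid JSON.
            reply(driverStatus{Status: "Not supported", Message: "call not implemented: " + os.Args[1]})
        }
    }

    func reply(s driverStatus) {
        out, _ := json.Marshal(s)
        fmt.Println(string(out))
    }

Once a binary like this exists at .../nodeagent~uds/uds, the probe's init call returns valid JSON and the repeating triplet stops.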
Sep 13 00:00:23.417634 containerd[1571]: time="2025-09-13T00:00:23.417565337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x6l8h,Uid:302d68a1-0e2a-4e1c-9d5e-89c672df130e,Namespace:calico-system,Attempt:0,} returns sandbox id \"6bd5c0230ea6c75cbfdb4f9126e5ca01c40156add257e4813e858decf9032876\""
Sep 13 00:00:24.312413 kubelet[2662]: E0913 00:00:24.312335 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f"
Sep 13 00:00:26.312283 kubelet[2662]: E0913 00:00:26.312199 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f"
Sep 13 00:00:27.889442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1783229667.mount: Deactivated successfully.
Sep 13 00:00:28.315426 kubelet[2662]: E0913 00:00:28.315364 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f"
Sep 13 00:00:28.732838 containerd[1571]: time="2025-09-13T00:00:28.731877630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:00:28.733689 containerd[1571]: time="2025-09-13T00:00:28.733658019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 13 00:00:28.735077 containerd[1571]: time="2025-09-13T00:00:28.734999102Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:00:28.737507 containerd[1571]: time="2025-09-13T00:00:28.737469328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:00:28.738118 containerd[1571]: time="2025-09-13T00:00:28.738090186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 5.994511796s"
Sep 13 00:00:28.738198 containerd[1571]: time="2025-09-13T00:00:28.738121094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 13 00:00:28.748755 containerd[1571]: time="2025-09-13T00:00:28.748717655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:00:28.776797 containerd[1571]: time="2025-09-13T00:00:28.776737056Z" level=info msg="CreateContainer within sandbox \"05fb59fbcb77fe672b62b59c171c34a67308b20ba7403653818b08330f6c435f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
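Two readings of the records above are worth making explicit. First, the typha pull moved the reported 35,237,389 bytes in 5.994511796 s, roughly 5.9 MB/s. Second, the recurring "cni plugin not initialized" errors for csi-node-driver-fqbfr mean only that no CNI network config has been installed yet: the runtime reports NetworkReady=false until one appears on disk, and the calico-node pod whose sandbox was just created is what normally writes it. A rough way to check that condition on the node is sketched below; the path is the conventional default CNI config directory, an assumption here rather than something this log states.

    // Hedged sketch: check whether a CNI network config has been installed,
    // which is the condition behind the NetworkReady=false errors above.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // containerd's CRI plugin watches this directory for network configs.
        matches, err := filepath.Glob("/etc/cni/net.d/*.conflist")
        if err != nil || len(matches) == 0 {
            fmt.Println("NetworkReady=false: no CNI network config installed yet")
            os.Exit(1)
        }
        fmt.Println("CNI config present:", matches)
    }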
msg="CreateContainer within sandbox \"05fb59fbcb77fe672b62b59c171c34a67308b20ba7403653818b08330f6c435f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:00:28.795156 containerd[1571]: time="2025-09-13T00:00:28.795100165Z" level=info msg="CreateContainer within sandbox \"05fb59fbcb77fe672b62b59c171c34a67308b20ba7403653818b08330f6c435f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8fd9f0314cb8529937daa1abb6cebf76c3614fcc90c0b0d6dc062aa7855c9786\"" Sep 13 00:00:28.798577 containerd[1571]: time="2025-09-13T00:00:28.798545194Z" level=info msg="StartContainer for \"8fd9f0314cb8529937daa1abb6cebf76c3614fcc90c0b0d6dc062aa7855c9786\"" Sep 13 00:00:28.988366 containerd[1571]: time="2025-09-13T00:00:28.988129890Z" level=info msg="StartContainer for \"8fd9f0314cb8529937daa1abb6cebf76c3614fcc90c0b0d6dc062aa7855c9786\" returns successfully" Sep 13 00:00:29.716886 kubelet[2662]: E0913 00:00:29.716843 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:29.793444 kubelet[2662]: E0913 00:00:29.793377 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.793444 kubelet[2662]: W0913 00:00:29.793424 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.793652 kubelet[2662]: E0913 00:00:29.793463 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.793909 kubelet[2662]: E0913 00:00:29.793890 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.793946 kubelet[2662]: W0913 00:00:29.793908 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.793946 kubelet[2662]: E0913 00:00:29.793922 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.794218 kubelet[2662]: E0913 00:00:29.794200 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.794218 kubelet[2662]: W0913 00:00:29.794215 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.794286 kubelet[2662]: E0913 00:00:29.794229 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:29.794664 kubelet[2662]: E0913 00:00:29.794636 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.794664 kubelet[2662]: W0913 00:00:29.794656 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.794718 kubelet[2662]: E0913 00:00:29.794670 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.794966 kubelet[2662]: E0913 00:00:29.794947 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.794966 kubelet[2662]: W0913 00:00:29.794964 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.795034 kubelet[2662]: E0913 00:00:29.794978 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.795294 kubelet[2662]: E0913 00:00:29.795260 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.795294 kubelet[2662]: W0913 00:00:29.795280 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.795294 kubelet[2662]: E0913 00:00:29.795293 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.795609 kubelet[2662]: E0913 00:00:29.795577 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.795609 kubelet[2662]: W0913 00:00:29.795597 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.795671 kubelet[2662]: E0913 00:00:29.795610 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.795940 kubelet[2662]: E0913 00:00:29.795921 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.795940 kubelet[2662]: W0913 00:00:29.795937 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.796003 kubelet[2662]: E0913 00:00:29.795951 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:29.796291 kubelet[2662]: E0913 00:00:29.796263 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.796291 kubelet[2662]: W0913 00:00:29.796283 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.796374 kubelet[2662]: E0913 00:00:29.796297 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.796622 kubelet[2662]: E0913 00:00:29.796560 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.796622 kubelet[2662]: W0913 00:00:29.796581 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.796622 kubelet[2662]: E0913 00:00:29.796596 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.796880 kubelet[2662]: E0913 00:00:29.796851 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.796880 kubelet[2662]: W0913 00:00:29.796864 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.796962 kubelet[2662]: E0913 00:00:29.796878 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.797410 kubelet[2662]: E0913 00:00:29.797203 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.797410 kubelet[2662]: W0913 00:00:29.797237 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.797410 kubelet[2662]: E0913 00:00:29.797255 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.797725 kubelet[2662]: E0913 00:00:29.797518 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.797725 kubelet[2662]: W0913 00:00:29.797531 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.797725 kubelet[2662]: E0913 00:00:29.797544 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:29.797847 kubelet[2662]: E0913 00:00:29.797782 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.797847 kubelet[2662]: W0913 00:00:29.797796 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.797847 kubelet[2662]: E0913 00:00:29.797809 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.798115 kubelet[2662]: E0913 00:00:29.798066 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.798115 kubelet[2662]: W0913 00:00:29.798083 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.798115 kubelet[2662]: E0913 00:00:29.798096 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.798637 kubelet[2662]: E0913 00:00:29.798615 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.798637 kubelet[2662]: W0913 00:00:29.798632 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.798722 kubelet[2662]: E0913 00:00:29.798649 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.798958 kubelet[2662]: E0913 00:00:29.798934 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.798958 kubelet[2662]: W0913 00:00:29.798952 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.799126 kubelet[2662]: E0913 00:00:29.798972 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.799435 kubelet[2662]: E0913 00:00:29.799380 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.799435 kubelet[2662]: W0913 00:00:29.799420 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.799671 kubelet[2662]: E0913 00:00:29.799461 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:29.799869 kubelet[2662]: E0913 00:00:29.799845 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.799869 kubelet[2662]: W0913 00:00:29.799867 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.799972 kubelet[2662]: E0913 00:00:29.799890 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.800196 kubelet[2662]: E0913 00:00:29.800177 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.800196 kubelet[2662]: W0913 00:00:29.800193 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.800290 kubelet[2662]: E0913 00:00:29.800213 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.800480 kubelet[2662]: E0913 00:00:29.800461 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.800480 kubelet[2662]: W0913 00:00:29.800475 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.800587 kubelet[2662]: E0913 00:00:29.800529 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.800792 kubelet[2662]: E0913 00:00:29.800761 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.800792 kubelet[2662]: W0913 00:00:29.800789 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.800912 kubelet[2662]: E0913 00:00:29.800885 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.801150 kubelet[2662]: E0913 00:00:29.801132 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.801150 kubelet[2662]: W0913 00:00:29.801147 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.801281 kubelet[2662]: E0913 00:00:29.801176 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:29.801454 kubelet[2662]: E0913 00:00:29.801435 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.801454 kubelet[2662]: W0913 00:00:29.801449 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.801535 kubelet[2662]: E0913 00:00:29.801467 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.801837 kubelet[2662]: E0913 00:00:29.801815 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.801837 kubelet[2662]: W0913 00:00:29.801832 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.801914 kubelet[2662]: E0913 00:00:29.801850 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.802181 kubelet[2662]: E0913 00:00:29.802159 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.802181 kubelet[2662]: W0913 00:00:29.802179 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.802256 kubelet[2662]: E0913 00:00:29.802201 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.802479 kubelet[2662]: E0913 00:00:29.802464 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.802479 kubelet[2662]: W0913 00:00:29.802474 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.802558 kubelet[2662]: E0913 00:00:29.802489 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.802788 kubelet[2662]: E0913 00:00:29.802772 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.802788 kubelet[2662]: W0913 00:00:29.802785 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.802843 kubelet[2662]: E0913 00:00:29.802802 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:29.803177 kubelet[2662]: E0913 00:00:29.803157 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.803177 kubelet[2662]: W0913 00:00:29.803174 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.803249 kubelet[2662]: E0913 00:00:29.803193 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.803479 kubelet[2662]: E0913 00:00:29.803457 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.803479 kubelet[2662]: W0913 00:00:29.803475 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.803569 kubelet[2662]: E0913 00:00:29.803492 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.803784 kubelet[2662]: E0913 00:00:29.803764 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.803784 kubelet[2662]: W0913 00:00:29.803778 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.803864 kubelet[2662]: E0913 00:00:29.803794 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.804238 kubelet[2662]: E0913 00:00:29.804194 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.804238 kubelet[2662]: W0913 00:00:29.804226 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.804442 kubelet[2662]: E0913 00:00:29.804271 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:29.804614 kubelet[2662]: E0913 00:00:29.804583 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:29.804614 kubelet[2662]: W0913 00:00:29.804601 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:29.804702 kubelet[2662]: E0913 00:00:29.804619 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:29.894932 kubelet[2662]: I0913 00:00:29.894649 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-657d5fd7bd-w7ln2" podStartSLOduration=1.8840491529999999 podStartE2EDuration="7.894625579s" podCreationTimestamp="2025-09-13 00:00:22 +0000 UTC" firstStartedPulling="2025-09-13 00:00:22.737969967 +0000 UTC m=+20.531818413" lastFinishedPulling="2025-09-13 00:00:28.748546383 +0000 UTC m=+26.542394839" observedRunningTime="2025-09-13 00:00:29.894252267 +0000 UTC m=+27.688100723" watchObservedRunningTime="2025-09-13 00:00:29.894625579 +0000 UTC m=+27.688474025" Sep 13 00:00:30.312558 kubelet[2662]: E0913 00:00:30.312449 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f" Sep 13 00:00:30.718976 kubelet[2662]: I0913 00:00:30.718926 2662 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:00:30.720126 kubelet[2662]: E0913 00:00:30.720007 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:30.806785 kubelet[2662]: E0913 00:00:30.806709 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.806785 kubelet[2662]: W0913 00:00:30.806749 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.806785 kubelet[2662]: E0913 00:00:30.806783 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:30.807292 kubelet[2662]: E0913 00:00:30.807258 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.807292 kubelet[2662]: W0913 00:00:30.807277 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.807292 kubelet[2662]: E0913 00:00:30.807290 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:30.808324 kubelet[2662]: E0913 00:00:30.807753 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.808324 kubelet[2662]: W0913 00:00:30.807772 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.808324 kubelet[2662]: E0913 00:00:30.807786 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
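The pod_startup_latency_tracker record above is arithmetically self-consistent: podStartE2EDuration = observedRunningTime minus podCreationTimestamp = 00:00:29.894625579 minus 00:00:22 = 7.894625579 s. The image-pull window, measured on the monotonic clock, is m=+26.542394839 minus m=+20.531818413 = 6.010576426 s, and 7.894625579 minus 6.010576426 = 1.884049153 s, exactly the reported podStartSLOduration (modulo float formatting). In other words, the SLO figure is the startup latency with image-pull time excluded, which is why it is so much smaller than the end-to-end figure here: nearly all of calico-typha's 7.9 s startup was spent pulling its image.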
Error: unexpected end of JSON input" Sep 13 00:00:30.808324 kubelet[2662]: E0913 00:00:30.808056 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.808324 kubelet[2662]: W0913 00:00:30.808068 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.808324 kubelet[2662]: E0913 00:00:30.808081 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:30.808573 kubelet[2662]: E0913 00:00:30.808356 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.808573 kubelet[2662]: W0913 00:00:30.808372 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.808573 kubelet[2662]: E0913 00:00:30.808386 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:30.808822 kubelet[2662]: E0913 00:00:30.808804 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.808822 kubelet[2662]: W0913 00:00:30.808820 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.808912 kubelet[2662]: E0913 00:00:30.808834 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:30.809139 kubelet[2662]: E0913 00:00:30.809120 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.809139 kubelet[2662]: W0913 00:00:30.809133 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.809252 kubelet[2662]: E0913 00:00:30.809146 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:30.809416 kubelet[2662]: E0913 00:00:30.809398 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.809416 kubelet[2662]: W0913 00:00:30.809411 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.809512 kubelet[2662]: E0913 00:00:30.809424 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:30.809751 kubelet[2662]: E0913 00:00:30.809724 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.809751 kubelet[2662]: W0913 00:00:30.809738 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.809751 kubelet[2662]: E0913 00:00:30.809750 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:30.810011 kubelet[2662]: E0913 00:00:30.809993 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.810011 kubelet[2662]: W0913 00:00:30.810005 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.810118 kubelet[2662]: E0913 00:00:30.810020 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:30.810311 kubelet[2662]: E0913 00:00:30.810293 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.810311 kubelet[2662]: W0913 00:00:30.810306 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.810404 kubelet[2662]: E0913 00:00:30.810318 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:30.810576 kubelet[2662]: E0913 00:00:30.810557 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.810576 kubelet[2662]: W0913 00:00:30.810570 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.810644 kubelet[2662]: E0913 00:00:30.810582 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:30.810864 kubelet[2662]: E0913 00:00:30.810844 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.810864 kubelet[2662]: W0913 00:00:30.810857 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.810959 kubelet[2662]: E0913 00:00:30.810870 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:30.811144 kubelet[2662]: E0913 00:00:30.811126 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.811144 kubelet[2662]: W0913 00:00:30.811139 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.811233 kubelet[2662]: E0913 00:00:30.811151 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:30.811405 kubelet[2662]: E0913 00:00:30.811387 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.811405 kubelet[2662]: W0913 00:00:30.811400 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.811499 kubelet[2662]: E0913 00:00:30.811412 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:30.907407 kubelet[2662]: E0913 00:00:30.907361 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.907407 kubelet[2662]: W0913 00:00:30.907391 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.907407 kubelet[2662]: E0913 00:00:30.907418 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:30.907859 kubelet[2662]: E0913 00:00:30.907825 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.907859 kubelet[2662]: W0913 00:00:30.907844 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.907859 kubelet[2662]: E0913 00:00:30.907871 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:30.908356 kubelet[2662]: E0913 00:00:30.908312 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.908356 kubelet[2662]: W0913 00:00:30.908345 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.908451 kubelet[2662]: E0913 00:00:30.908384 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:30.908682 kubelet[2662]: E0913 00:00:30.908663 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:30.908682 kubelet[2662]: W0913 00:00:30.908678 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:30.908782 kubelet[2662]: E0913 00:00:30.908699 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
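
The burst above is one probe cycle repeated many times: the kubelet execs the FlexVolume driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and tries to unmarshal its stdout as JSON; because the executable does not exist, stdout is empty and the unmarshal fails with "unexpected end of JSON input". Below is a minimal sketch of a driver that would satisfy the init call, assuming the standard FlexVolume calling convention (driver <op> [args...]); the path and the name uds come from the log, everything else is illustrative:

    #!/usr/bin/env python3
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # kubelet parses stdout as JSON; an empty stdout is exactly what
            # produces "unexpected end of JSON input" in the entries above.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # Any operation this sketch does not implement.
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

With a driver present and answering init, the plugins.go:691 probe errors would stop; here they simply recur until the Calico flexvol-driver init container, started later in this log, installs the real binary under nodeagent~uds.
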
Sep 13 00:00:32.312561 kubelet[2662]: E0913 00:00:32.312481 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f" Sep 13 00:00:32.879865 kubelet[2662]: I0913 00:00:32.879757 2662 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:00:32.880456 kubelet[2662]: E0913 00:00:32.880399 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:00:32.933743 kubelet[2662]: E0913 00:00:32.933665 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:32.934154 kubelet[2662]: W0913 00:00:32.933707 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:32.934779 kubelet[2662]: E0913 00:00:32.933865 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
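
The dns.go:153 entry above is a separate problem: the kubelet checks the node's resolv.conf against the platform resolver's three-nameserver limit, keeps the first three, and logs the applied line. A sketch of that cap, with deliberately simplified parsing; the fourth server below is invented to trigger the warning and is not from the log:

    # Assumed limit: matches the classic glibc resolver cap of three
    # nameservers that the kubelet's dns.go warning refers to.
    MAX_NAMESERVERS = 3

    def effective_nameservers(resolv_conf: str) -> tuple[list[str], list[str]]:
        servers = []
        for line in resolv_conf.splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
        return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

    applied, omitted = effective_nameservers(
        "nameserver 1.1.1.1\n"
        "nameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\n"
        "nameserver 9.9.9.9\n"  # hypothetical fourth entry
    )
    print("applied:", " ".join(applied))  # applied: 1.1.1.1 1.0.0.1 8.8.8.8
    print("omitted:", " ".join(omitted))  # omitted: 9.9.9.9
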
Sep 13 00:00:33.743440 kubelet[2662]: E0913 00:00:33.743383 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:00:34.316282 kubelet[2662]: E0913 00:00:34.314678 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f" Sep 13 00:00:36.312992 kubelet[2662]: E0913 00:00:36.312883 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f" Sep 13 00:00:38.312827 kubelet[2662]: E0913 00:00:38.312726 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f" Sep 13 00:00:39.688417 systemd[1]: Started sshd@7-10.0.0.22:22-10.0.0.1:49212.service - OpenSSH per-connection server daemon (10.0.0.1:49212). Sep 13 00:00:39.728394 sshd[3531]: Accepted publickey for core from 10.0.0.1 port 49212 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:00:39.730632 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:00:39.735729 systemd-logind[1545]: New session 8 of user core. Sep 13 00:00:39.745397 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:00:39.889480 sshd[3531]: pam_unix(sshd:session): session closed for user core Sep 13 00:00:39.893697 systemd[1]: sshd@7-10.0.0.22:22-10.0.0.1:49212.service: Deactivated successfully. Sep 13 00:00:39.896603 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:00:39.896815 systemd-logind[1545]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:00:39.898433 systemd-logind[1545]: Removed session 8.
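
The pod_workers.go:1301 entries recurring every two seconds are the kubelet's network-readiness gate: until a CNI network config is installed on the node, the container runtime reports NetworkReady=false and sync of pods that need pod networking (here calico-system/csi-node-driver-fqbfr) is skipped. A rough sketch of that gating logic, using an assumed config path and simplified checks rather than actual kubelet code:

    import glob
    import os

    CNI_CONF_DIR = "/etc/cni/net.d"  # conventional location; an assumption here

    def network_ready() -> bool:
        # The runtime stays NetworkReady=false until at least one CNI config
        # exists; Calico's install-cni container, started later in this log,
        # is what eventually writes one.
        return bool(glob.glob(os.path.join(CNI_CONF_DIR, "*.conf*")))

    def sync_pod(name: str) -> None:
        if not network_ready():
            # Mirrors the recurring "Error syncing pod, skipping" entries.
            raise RuntimeError(
                f"network is not ready: cni plugin not initialized; skipping {name}")
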
Sep 13 00:00:40.312380 kubelet[2662]: E0913 00:00:40.312309 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f" Sep 13 00:00:40.397578 containerd[1571]: time="2025-09-13T00:00:40.397492131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:40.410121 containerd[1571]: time="2025-09-13T00:00:40.410012464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:00:40.440323 containerd[1571]: time="2025-09-13T00:00:40.440248013Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:40.457445 containerd[1571]: time="2025-09-13T00:00:40.457349788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:40.457895 containerd[1571]: time="2025-09-13T00:00:40.457847643Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 11.709093589s" Sep 13 00:00:40.458015 containerd[1571]: time="2025-09-13T00:00:40.457898760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:00:40.461136 containerd[1571]: time="2025-09-13T00:00:40.461084221Z" level=info msg="CreateContainer within sandbox \"6bd5c0230ea6c75cbfdb4f9126e5ca01c40156add257e4813e858decf9032876\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:00:40.519786 containerd[1571]: time="2025-09-13T00:00:40.519719883Z" level=info msg="CreateContainer within sandbox \"6bd5c0230ea6c75cbfdb4f9126e5ca01c40156add257e4813e858decf9032876\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c4bd77b4d0db8083358910ed3fa8a67c3150206bb24cdabce98b15283f5eebf2\"" Sep 13 00:00:40.520389 containerd[1571]: time="2025-09-13T00:00:40.520329649Z" level=info msg="StartContainer for \"c4bd77b4d0db8083358910ed3fa8a67c3150206bb24cdabce98b15283f5eebf2\"" Sep 13 00:00:40.592595 containerd[1571]: time="2025-09-13T00:00:40.592182270Z" level=info msg="StartContainer for \"c4bd77b4d0db8083358910ed3fa8a67c3150206bb24cdabce98b15283f5eebf2\" returns successfully" Sep 13 00:00:40.628696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4bd77b4d0db8083358910ed3fa8a67c3150206bb24cdabce98b15283f5eebf2-rootfs.mount: Deactivated successfully. 
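
For the pull that just completed, containerd reports 4446660 bytes read over the wire, an unpacked image size of 5939323 bytes, and a duration of 11.709093589s. A quick back-of-the-envelope check of the effective transfer rate (illustrative arithmetic only, derived from the figures above):

    bytes_read = 4_446_660        # "bytes read" from the log entry above
    unpacked = 5_939_323          # reported size after unpacking
    seconds = 11.709_093_589      # reported pull duration

    print(f"wire rate: {bytes_read / seconds / 1024:.0f} KiB/s")  # ~371 KiB/s
    print(f"unpack ratio: {unpacked / bytes_read:.2f}x")          # ~1.34x
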
Sep 13 00:00:40.688202 containerd[1571]: time="2025-09-13T00:00:40.688119771Z" level=info msg="shim disconnected" id=c4bd77b4d0db8083358910ed3fa8a67c3150206bb24cdabce98b15283f5eebf2 namespace=k8s.io Sep 13 00:00:40.688202 containerd[1571]: time="2025-09-13T00:00:40.688196566Z" level=warning msg="cleaning up after shim disconnected" id=c4bd77b4d0db8083358910ed3fa8a67c3150206bb24cdabce98b15283f5eebf2 namespace=k8s.io Sep 13 00:00:40.688202 containerd[1571]: time="2025-09-13T00:00:40.688205763Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:00:40.767702 containerd[1571]: time="2025-09-13T00:00:40.767650382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:00:42.312827 kubelet[2662]: E0913 00:00:42.312721 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f" Sep 13 00:00:44.312326 kubelet[2662]: E0913 00:00:44.312258 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f" Sep 13 00:00:44.907385 systemd[1]: Started sshd@8-10.0.0.22:22-10.0.0.1:59186.service - OpenSSH per-connection server daemon (10.0.0.1:59186). Sep 13 00:00:44.944576 sshd[3645]: Accepted publickey for core from 10.0.0.1 port 59186 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:00:44.946488 sshd[3645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:00:44.951243 systemd-logind[1545]: New session 9 of user core. Sep 13 00:00:44.962598 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:00:45.087192 sshd[3645]: pam_unix(sshd:session): session closed for user core Sep 13 00:00:45.091989 systemd[1]: sshd@8-10.0.0.22:22-10.0.0.1:59186.service: Deactivated successfully. Sep 13 00:00:45.095097 systemd-logind[1545]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:00:45.095441 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:00:45.096796 systemd-logind[1545]: Removed session 9. 
Sep 13 00:00:46.313227 kubelet[2662]: E0913 00:00:46.313100 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f"
Sep 13 00:00:47.778911 containerd[1571]: time="2025-09-13T00:00:47.778838343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:00:47.780345 containerd[1571]: time="2025-09-13T00:00:47.780297721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 13 00:00:47.781565 containerd[1571]: time="2025-09-13T00:00:47.781512071Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:00:47.784126 containerd[1571]: time="2025-09-13T00:00:47.784098936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:00:47.785068 containerd[1571]: time="2025-09-13T00:00:47.784999287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 7.01729929s"
Sep 13 00:00:47.785146 containerd[1571]: time="2025-09-13T00:00:47.785070811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 13 00:00:47.788587 containerd[1571]: time="2025-09-13T00:00:47.788534702Z" level=info msg="CreateContainer within sandbox \"6bd5c0230ea6c75cbfdb4f9126e5ca01c40156add257e4813e858decf9032876\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 13 00:00:47.805611 containerd[1571]: time="2025-09-13T00:00:47.805569716Z" level=info msg="CreateContainer within sandbox \"6bd5c0230ea6c75cbfdb4f9126e5ca01c40156add257e4813e858decf9032876\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ec68a5d6b02a54551824f3e129634306634295897c264f629359f254b53a6003\""
Sep 13 00:00:47.806156 containerd[1571]: time="2025-09-13T00:00:47.806113958Z" level=info msg="StartContainer for \"ec68a5d6b02a54551824f3e129634306634295897c264f629359f254b53a6003\""
Sep 13 00:00:47.871783 containerd[1571]: time="2025-09-13T00:00:47.871714148Z" level=info msg="StartContainer for \"ec68a5d6b02a54551824f3e129634306634295897c264f629359f254b53a6003\" returns successfully"
Sep 13 00:00:48.312530 kubelet[2662]: E0913 00:00:48.312439 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f"
Sep 13 00:00:49.557307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec68a5d6b02a54551824f3e129634306634295897c264f629359f254b53a6003-rootfs.mount: Deactivated successfully.
Sep 13 00:00:49.562264 containerd[1571]: time="2025-09-13T00:00:49.562182976Z" level=info msg="shim disconnected" id=ec68a5d6b02a54551824f3e129634306634295897c264f629359f254b53a6003 namespace=k8s.io
Sep 13 00:00:49.562264 containerd[1571]: time="2025-09-13T00:00:49.562256093Z" level=warning msg="cleaning up after shim disconnected" id=ec68a5d6b02a54551824f3e129634306634295897c264f629359f254b53a6003 namespace=k8s.io
Sep 13 00:00:49.562264 containerd[1571]: time="2025-09-13T00:00:49.562269780Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:00:49.566004 kubelet[2662]: I0913 00:00:49.565954 2662 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 13 00:00:49.696153 kubelet[2662]: I0913 00:00:49.696079 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t8gt\" (UniqueName: \"kubernetes.io/projected/e037d004-e75e-4824-a87e-fd02a1b5adaa-kube-api-access-8t8gt\") pod \"calico-apiserver-57d7979648-crgkm\" (UID: \"e037d004-e75e-4824-a87e-fd02a1b5adaa\") " pod="calico-apiserver/calico-apiserver-57d7979648-crgkm"
Sep 13 00:00:49.696153 kubelet[2662]: I0913 00:00:49.696136 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f317c9d-be74-45c8-9172-3b8f509591a6-config\") pod \"goldmane-7988f88666-wc4bn\" (UID: \"7f317c9d-be74-45c8-9172-3b8f509591a6\") " pod="calico-system/goldmane-7988f88666-wc4bn"
Sep 13 00:00:49.696153 kubelet[2662]: I0913 00:00:49.696161 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e3bb006-8c83-4e24-90c2-984bdd355c97-config-volume\") pod \"coredns-7c65d6cfc9-98zzs\" (UID: \"4e3bb006-8c83-4e24-90c2-984bdd355c97\") " pod="kube-system/coredns-7c65d6cfc9-98zzs"
Sep 13 00:00:49.696471 kubelet[2662]: I0913 00:00:49.696177 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/33726d39-cb2d-42da-a848-4caa290bc7f4-whisker-backend-key-pair\") pod \"whisker-5b8858cf9b-mrc6w\" (UID: \"33726d39-cb2d-42da-a848-4caa290bc7f4\") " pod="calico-system/whisker-5b8858cf9b-mrc6w"
Sep 13 00:00:49.696471 kubelet[2662]: I0913 00:00:49.696196 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3c5674c-deb9-482b-8272-20808de1f68c-tigera-ca-bundle\") pod \"calico-kube-controllers-9cb7ff6b6-87ccw\" (UID: \"d3c5674c-deb9-482b-8272-20808de1f68c\") " pod="calico-system/calico-kube-controllers-9cb7ff6b6-87ccw"
Sep 13 00:00:49.696471 kubelet[2662]: I0913 00:00:49.696210 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4868fae-b2aa-4106-8bb0-b85ce9739178-config-volume\") pod \"coredns-7c65d6cfc9-7tb5v\" (UID: \"d4868fae-b2aa-4106-8bb0-b85ce9739178\") " pod="kube-system/coredns-7c65d6cfc9-7tb5v"
Sep 13 00:00:49.696471 kubelet[2662]: I0913 00:00:49.696255 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e037d004-e75e-4824-a87e-fd02a1b5adaa-calico-apiserver-certs\") pod \"calico-apiserver-57d7979648-crgkm\" (UID: \"e037d004-e75e-4824-a87e-fd02a1b5adaa\") " pod="calico-apiserver/calico-apiserver-57d7979648-crgkm"
\"e037d004-e75e-4824-a87e-fd02a1b5adaa\") " pod="calico-apiserver/calico-apiserver-57d7979648-crgkm" Sep 13 00:00:49.696471 kubelet[2662]: I0913 00:00:49.696317 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brqrr\" (UniqueName: \"kubernetes.io/projected/d3c5674c-deb9-482b-8272-20808de1f68c-kube-api-access-brqrr\") pod \"calico-kube-controllers-9cb7ff6b6-87ccw\" (UID: \"d3c5674c-deb9-482b-8272-20808de1f68c\") " pod="calico-system/calico-kube-controllers-9cb7ff6b6-87ccw" Sep 13 00:00:49.696650 kubelet[2662]: I0913 00:00:49.696349 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f317c9d-be74-45c8-9172-3b8f509591a6-goldmane-ca-bundle\") pod \"goldmane-7988f88666-wc4bn\" (UID: \"7f317c9d-be74-45c8-9172-3b8f509591a6\") " pod="calico-system/goldmane-7988f88666-wc4bn" Sep 13 00:00:49.696650 kubelet[2662]: I0913 00:00:49.696366 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q84kk\" (UniqueName: \"kubernetes.io/projected/5cffd678-440b-46f8-bc05-0ed615dceeef-kube-api-access-q84kk\") pod \"calico-apiserver-57d7979648-xww86\" (UID: \"5cffd678-440b-46f8-bc05-0ed615dceeef\") " pod="calico-apiserver/calico-apiserver-57d7979648-xww86" Sep 13 00:00:49.696650 kubelet[2662]: I0913 00:00:49.696383 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33726d39-cb2d-42da-a848-4caa290bc7f4-whisker-ca-bundle\") pod \"whisker-5b8858cf9b-mrc6w\" (UID: \"33726d39-cb2d-42da-a848-4caa290bc7f4\") " pod="calico-system/whisker-5b8858cf9b-mrc6w" Sep 13 00:00:49.696650 kubelet[2662]: I0913 00:00:49.696399 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5h4b\" (UniqueName: \"kubernetes.io/projected/d4868fae-b2aa-4106-8bb0-b85ce9739178-kube-api-access-q5h4b\") pod \"coredns-7c65d6cfc9-7tb5v\" (UID: \"d4868fae-b2aa-4106-8bb0-b85ce9739178\") " pod="kube-system/coredns-7c65d6cfc9-7tb5v" Sep 13 00:00:49.696650 kubelet[2662]: I0913 00:00:49.696425 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5cffd678-440b-46f8-bc05-0ed615dceeef-calico-apiserver-certs\") pod \"calico-apiserver-57d7979648-xww86\" (UID: \"5cffd678-440b-46f8-bc05-0ed615dceeef\") " pod="calico-apiserver/calico-apiserver-57d7979648-xww86" Sep 13 00:00:49.696823 kubelet[2662]: I0913 00:00:49.696443 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzn48\" (UniqueName: \"kubernetes.io/projected/4e3bb006-8c83-4e24-90c2-984bdd355c97-kube-api-access-bzn48\") pod \"coredns-7c65d6cfc9-98zzs\" (UID: \"4e3bb006-8c83-4e24-90c2-984bdd355c97\") " pod="kube-system/coredns-7c65d6cfc9-98zzs" Sep 13 00:00:49.696823 kubelet[2662]: I0913 00:00:49.696459 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjgbq\" (UniqueName: \"kubernetes.io/projected/33726d39-cb2d-42da-a848-4caa290bc7f4-kube-api-access-vjgbq\") pod \"whisker-5b8858cf9b-mrc6w\" (UID: \"33726d39-cb2d-42da-a848-4caa290bc7f4\") " pod="calico-system/whisker-5b8858cf9b-mrc6w" Sep 13 00:00:49.696823 kubelet[2662]: I0913 00:00:49.696474 2662 
Sep 13 00:00:49.696823 kubelet[2662]: I0913 00:00:49.696585 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7f317c9d-be74-45c8-9172-3b8f509591a6-goldmane-key-pair\") pod \"goldmane-7988f88666-wc4bn\" (UID: \"7f317c9d-be74-45c8-9172-3b8f509591a6\") " pod="calico-system/goldmane-7988f88666-wc4bn"
Sep 13 00:00:49.788129 containerd[1571]: time="2025-09-13T00:00:49.788063374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 13 00:00:49.922328 kubelet[2662]: E0913 00:00:49.922116 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:00:49.923012 containerd[1571]: time="2025-09-13T00:00:49.922805416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-98zzs,Uid:4e3bb006-8c83-4e24-90c2-984bdd355c97,Namespace:kube-system,Attempt:0,}"
Sep 13 00:00:49.928824 kubelet[2662]: E0913 00:00:49.928666 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:00:49.929279 containerd[1571]: time="2025-09-13T00:00:49.929220456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cb7ff6b6-87ccw,Uid:d3c5674c-deb9-482b-8272-20808de1f68c,Namespace:calico-system,Attempt:0,}"
Sep 13 00:00:49.929369 containerd[1571]: time="2025-09-13T00:00:49.929264298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7tb5v,Uid:d4868fae-b2aa-4106-8bb0-b85ce9739178,Namespace:kube-system,Attempt:0,}"
Sep 13 00:00:49.935063 containerd[1571]: time="2025-09-13T00:00:49.935000884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7979648-crgkm,Uid:e037d004-e75e-4824-a87e-fd02a1b5adaa,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 00:00:49.935526 containerd[1571]: time="2025-09-13T00:00:49.935472539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-wc4bn,Uid:7f317c9d-be74-45c8-9172-3b8f509591a6,Namespace:calico-system,Attempt:0,}"
Sep 13 00:00:49.935526 containerd[1571]: time="2025-09-13T00:00:49.935498348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b8858cf9b-mrc6w,Uid:33726d39-cb2d-42da-a848-4caa290bc7f4,Namespace:calico-system,Attempt:0,}"
Sep 13 00:00:49.942601 containerd[1571]: time="2025-09-13T00:00:49.942556134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7979648-xww86,Uid:5cffd678-440b-46f8-bc05-0ed615dceeef,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 00:00:50.102442 systemd[1]: Started sshd@9-10.0.0.22:22-10.0.0.1:34560.service - OpenSSH per-connection server daemon (10.0.0.1:34560).
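The dns.go:153 records above are kubelet trimming the pod resolver config to the first three nameservers; the node evidently has more than 1.1.1.1, 1.0.0.1, and 8.8.8.8 configured. A hedged Python sketch of the same check against /etc/resolv.conf (the path and the limit of 3, the classic resolver MAXNS, are conventional assumptions, not taken from this log):

    # Reproduce the "Nameserver limits exceeded" check against resolv.conf.
    # Assumption: /etc/resolv.conf is the active resolver config and the cap is 3,
    # matching the three servers kept in the kubelet record above.
    MAX_NAMESERVERS = 3

    servers = []
    with open("/etc/resolv.conf") as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])

    if len(servers) > MAX_NAMESERVERS:
        print("Nameserver limits exceeded, the applied nameserver line is:",
              " ".join(servers[:MAX_NAMESERVERS]))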
Sep 13 00:00:50.154089 sshd[3834]: Accepted publickey for core from 10.0.0.1 port 34560 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:00:50.156342 sshd[3834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:00:50.169206 systemd-logind[1545]: New session 10 of user core.
Sep 13 00:00:50.175648 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 00:00:50.245690 containerd[1571]: time="2025-09-13T00:00:50.241320939Z" level=error msg="Failed to destroy network for sandbox \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.251100 containerd[1571]: time="2025-09-13T00:00:50.250820617Z" level=error msg="Failed to destroy network for sandbox \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.256121 containerd[1571]: time="2025-09-13T00:00:50.256035964Z" level=error msg="encountered an error cleaning up failed sandbox \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.256514 containerd[1571]: time="2025-09-13T00:00:50.256484636Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7979648-xww86,Uid:5cffd678-440b-46f8-bc05-0ed615dceeef,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.271145 containerd[1571]: time="2025-09-13T00:00:50.269005985Z" level=error msg="Failed to destroy network for sandbox \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.278310 containerd[1571]: time="2025-09-13T00:00:50.278246897Z" level=error msg="encountered an error cleaning up failed sandbox \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.278551 containerd[1571]: time="2025-09-13T00:00:50.278519579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7tb5v,Uid:d4868fae-b2aa-4106-8bb0-b85ce9739178,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.280077 containerd[1571]: time="2025-09-13T00:00:50.278428568Z" level=error msg="encountered an error cleaning up failed sandbox \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.280077 containerd[1571]: time="2025-09-13T00:00:50.279942609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-98zzs,Uid:4e3bb006-8c83-4e24-90c2-984bdd355c97,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.280333 containerd[1571]: time="2025-09-13T00:00:50.280296464Z" level=error msg="Failed to destroy network for sandbox \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.280803 containerd[1571]: time="2025-09-13T00:00:50.280765304Z" level=error msg="encountered an error cleaning up failed sandbox \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.280915 containerd[1571]: time="2025-09-13T00:00:50.280888365Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b8858cf9b-mrc6w,Uid:33726d39-cb2d-42da-a848-4caa290bc7f4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.281303 containerd[1571]: time="2025-09-13T00:00:50.281190862Z" level=error msg="Failed to destroy network for sandbox \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.281666 containerd[1571]: time="2025-09-13T00:00:50.281635376Z" level=error msg="encountered an error cleaning up failed sandbox \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.283225 containerd[1571]: time="2025-09-13T00:00:50.283150330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-wc4bn,Uid:7f317c9d-be74-45c8-9172-3b8f509591a6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
\"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:50.285497 kubelet[2662]: E0913 00:00:50.285412 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:50.286377 kubelet[2662]: E0913 00:00:50.285757 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:50.286377 kubelet[2662]: E0913 00:00:50.285807 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7tb5v" Sep 13 00:00:50.286377 kubelet[2662]: E0913 00:00:50.285854 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7tb5v" Sep 13 00:00:50.286522 containerd[1571]: time="2025-09-13T00:00:50.284837255Z" level=error msg="Failed to destroy network for sandbox \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:50.286581 kubelet[2662]: E0913 00:00:50.285918 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-7tb5v_kube-system(d4868fae-b2aa-4106-8bb0-b85ce9739178)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-7tb5v_kube-system(d4868fae-b2aa-4106-8bb0-b85ce9739178)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7tb5v" podUID="d4868fae-b2aa-4106-8bb0-b85ce9739178" Sep 13 00:00:50.286674 containerd[1571]: time="2025-09-13T00:00:50.286623657Z" level=error msg="encountered an error cleaning up failed sandbox \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\", marking sandbox state as 
Sep 13 00:00:50.286917 kubelet[2662]: E0913 00:00:50.286748 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-98zzs"
Sep 13 00:00:50.286917 kubelet[2662]: E0913 00:00:50.286786 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-98zzs"
Sep 13 00:00:50.286917 kubelet[2662]: E0913 00:00:50.286854 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-98zzs_kube-system(4e3bb006-8c83-4e24-90c2-984bdd355c97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-98zzs_kube-system(4e3bb006-8c83-4e24-90c2-984bdd355c97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-98zzs" podUID="4e3bb006-8c83-4e24-90c2-984bdd355c97"
Sep 13 00:00:50.287110 containerd[1571]: time="2025-09-13T00:00:50.286708667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cb7ff6b6-87ccw,Uid:d3c5674c-deb9-482b-8272-20808de1f68c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.287676 kubelet[2662]: E0913 00:00:50.287213 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.287676 kubelet[2662]: E0913 00:00:50.287263 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d7979648-xww86"
Sep 13 00:00:50.287676 kubelet[2662]: E0913 00:00:50.287306 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d7979648-xww86"
Sep 13 00:00:50.287936 kubelet[2662]: E0913 00:00:50.287356 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57d7979648-xww86_calico-apiserver(5cffd678-440b-46f8-bc05-0ed615dceeef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57d7979648-xww86_calico-apiserver(5cffd678-440b-46f8-bc05-0ed615dceeef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d7979648-xww86" podUID="5cffd678-440b-46f8-bc05-0ed615dceeef"
Sep 13 00:00:50.287936 kubelet[2662]: E0913 00:00:50.287400 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.287936 kubelet[2662]: E0913 00:00:50.287424 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b8858cf9b-mrc6w"
Sep 13 00:00:50.288474 kubelet[2662]: E0913 00:00:50.287444 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b8858cf9b-mrc6w"
Sep 13 00:00:50.288474 kubelet[2662]: E0913 00:00:50.287477 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5b8858cf9b-mrc6w_calico-system(33726d39-cb2d-42da-a848-4caa290bc7f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5b8858cf9b-mrc6w_calico-system(33726d39-cb2d-42da-a848-4caa290bc7f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b8858cf9b-mrc6w" podUID="33726d39-cb2d-42da-a848-4caa290bc7f4"
Sep 13 00:00:50.288474 kubelet[2662]: E0913 00:00:50.287807 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.289137 kubelet[2662]: E0913 00:00:50.287851 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-wc4bn"
Sep 13 00:00:50.289137 kubelet[2662]: E0913 00:00:50.288168 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-wc4bn"
Sep 13 00:00:50.289137 kubelet[2662]: E0913 00:00:50.288380 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.290333 kubelet[2662]: E0913 00:00:50.288610 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-wc4bn_calico-system(7f317c9d-be74-45c8-9172-3b8f509591a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-wc4bn_calico-system(7f317c9d-be74-45c8-9172-3b8f509591a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-wc4bn" podUID="7f317c9d-be74-45c8-9172-3b8f509591a6"
Sep 13 00:00:50.290333 kubelet[2662]: E0913 00:00:50.288422 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9cb7ff6b6-87ccw"
Sep 13 00:00:50.290333 kubelet[2662]: E0913 00:00:50.288777 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9cb7ff6b6-87ccw"
Sep 13 00:00:50.290841 kubelet[2662]: E0913 00:00:50.290663 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9cb7ff6b6-87ccw_calico-system(d3c5674c-deb9-482b-8272-20808de1f68c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9cb7ff6b6-87ccw_calico-system(d3c5674c-deb9-482b-8272-20808de1f68c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9cb7ff6b6-87ccw" podUID="d3c5674c-deb9-482b-8272-20808de1f68c"
Sep 13 00:00:50.293465 containerd[1571]: time="2025-09-13T00:00:50.293377412Z" level=error msg="Failed to destroy network for sandbox \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.294305 containerd[1571]: time="2025-09-13T00:00:50.294262614Z" level=error msg="encountered an error cleaning up failed sandbox \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.294444 containerd[1571]: time="2025-09-13T00:00:50.294420510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7979648-crgkm,Uid:e037d004-e75e-4824-a87e-fd02a1b5adaa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.295069 kubelet[2662]: E0913 00:00:50.294981 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.295141 kubelet[2662]: E0913 00:00:50.295067 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d7979648-crgkm"
Sep 13 00:00:50.295141 kubelet[2662]: E0913 00:00:50.295103 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d7979648-crgkm"
\"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d7979648-crgkm" Sep 13 00:00:50.295429 kubelet[2662]: E0913 00:00:50.295281 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57d7979648-crgkm_calico-apiserver(e037d004-e75e-4824-a87e-fd02a1b5adaa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57d7979648-crgkm_calico-apiserver(e037d004-e75e-4824-a87e-fd02a1b5adaa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d7979648-crgkm" podUID="e037d004-e75e-4824-a87e-fd02a1b5adaa" Sep 13 00:00:50.320106 containerd[1571]: time="2025-09-13T00:00:50.319161571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fqbfr,Uid:ba4c914c-84cd-4650-afa0-e3d23d56f99f,Namespace:calico-system,Attempt:0,}" Sep 13 00:00:50.371673 sshd[3834]: pam_unix(sshd:session): session closed for user core Sep 13 00:00:50.378696 systemd[1]: sshd@9-10.0.0.22:22-10.0.0.1:34560.service: Deactivated successfully. Sep 13 00:00:50.385002 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:00:50.388567 systemd-logind[1545]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:00:50.390621 systemd-logind[1545]: Removed session 10. 
Sep 13 00:00:50.411533 containerd[1571]: time="2025-09-13T00:00:50.411458895Z" level=error msg="Failed to destroy network for sandbox \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.412138 containerd[1571]: time="2025-09-13T00:00:50.412091282Z" level=error msg="encountered an error cleaning up failed sandbox \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.412200 containerd[1571]: time="2025-09-13T00:00:50.412159079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fqbfr,Uid:ba4c914c-84cd-4650-afa0-e3d23d56f99f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.412499 kubelet[2662]: E0913 00:00:50.412443 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.412686 kubelet[2662]: E0913 00:00:50.412517 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fqbfr"
Sep 13 00:00:50.412686 kubelet[2662]: E0913 00:00:50.412546 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fqbfr"
Sep 13 00:00:50.412686 kubelet[2662]: E0913 00:00:50.412642 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fqbfr_calico-system(ba4c914c-84cd-4650-afa0-e3d23d56f99f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fqbfr_calico-system(ba4c914c-84cd-4650-afa0-e3d23d56f99f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f"
podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f" Sep 13 00:00:50.791325 kubelet[2662]: I0913 00:00:50.790375 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Sep 13 00:00:50.792570 kubelet[2662]: I0913 00:00:50.792256 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Sep 13 00:00:50.794824 kubelet[2662]: I0913 00:00:50.794329 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Sep 13 00:00:50.798863 kubelet[2662]: I0913 00:00:50.798368 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Sep 13 00:00:50.823748 containerd[1571]: time="2025-09-13T00:00:50.823694103Z" level=info msg="StopPodSandbox for \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\"" Sep 13 00:00:50.824366 containerd[1571]: time="2025-09-13T00:00:50.823701607Z" level=info msg="StopPodSandbox for \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\"" Sep 13 00:00:50.824492 containerd[1571]: time="2025-09-13T00:00:50.824429673Z" level=info msg="StopPodSandbox for \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\"" Sep 13 00:00:50.824684 containerd[1571]: time="2025-09-13T00:00:50.824527898Z" level=info msg="StopPodSandbox for \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\"" Sep 13 00:00:50.825533 containerd[1571]: time="2025-09-13T00:00:50.825489172Z" level=info msg="Ensure that sandbox 990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f in task-service has been cleanup successfully" Sep 13 00:00:50.825604 containerd[1571]: time="2025-09-13T00:00:50.825501555Z" level=info msg="Ensure that sandbox b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c in task-service has been cleanup successfully" Sep 13 00:00:50.826031 containerd[1571]: time="2025-09-13T00:00:50.825867983Z" level=info msg="Ensure that sandbox 30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06 in task-service has been cleanup successfully" Sep 13 00:00:50.826031 containerd[1571]: time="2025-09-13T00:00:50.825502597Z" level=info msg="Ensure that sandbox b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db in task-service has been cleanup successfully" Sep 13 00:00:50.836631 kubelet[2662]: I0913 00:00:50.836535 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Sep 13 00:00:50.839201 containerd[1571]: time="2025-09-13T00:00:50.838753275Z" level=info msg="StopPodSandbox for \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\"" Sep 13 00:00:50.839201 containerd[1571]: time="2025-09-13T00:00:50.838936158Z" level=info msg="Ensure that sandbox 94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63 in task-service has been cleanup successfully" Sep 13 00:00:50.842509 kubelet[2662]: I0913 00:00:50.842482 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Sep 13 00:00:50.845772 containerd[1571]: time="2025-09-13T00:00:50.844227939Z" level=info msg="StopPodSandbox for 
\"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\"" Sep 13 00:00:50.846872 containerd[1571]: time="2025-09-13T00:00:50.846841834Z" level=info msg="Ensure that sandbox 73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437 in task-service has been cleanup successfully" Sep 13 00:00:50.847827 kubelet[2662]: I0913 00:00:50.847797 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Sep 13 00:00:50.848848 containerd[1571]: time="2025-09-13T00:00:50.848811491Z" level=info msg="StopPodSandbox for \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\"" Sep 13 00:00:50.849055 containerd[1571]: time="2025-09-13T00:00:50.849015563Z" level=info msg="Ensure that sandbox 936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13 in task-service has been cleanup successfully" Sep 13 00:00:50.853508 kubelet[2662]: I0913 00:00:50.853456 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Sep 13 00:00:50.855001 containerd[1571]: time="2025-09-13T00:00:50.854961031Z" level=info msg="StopPodSandbox for \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\"" Sep 13 00:00:50.855243 containerd[1571]: time="2025-09-13T00:00:50.855220829Z" level=info msg="Ensure that sandbox 378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b in task-service has been cleanup successfully" Sep 13 00:00:50.881225 containerd[1571]: time="2025-09-13T00:00:50.881163375Z" level=error msg="StopPodSandbox for \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\" failed" error="failed to destroy network for sandbox \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:50.881716 kubelet[2662]: E0913 00:00:50.881672 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Sep 13 00:00:50.882207 kubelet[2662]: E0913 00:00:50.881922 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db"} Sep 13 00:00:50.882207 kubelet[2662]: E0913 00:00:50.882119 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba4c914c-84cd-4650-afa0-e3d23d56f99f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:50.882207 kubelet[2662]: E0913 00:00:50.882152 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba4c914c-84cd-4650-afa0-e3d23d56f99f\" with 
Sep 13 00:00:50.900143 containerd[1571]: time="2025-09-13T00:00:50.899633748Z" level=error msg="StopPodSandbox for \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\" failed" error="failed to destroy network for sandbox \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.900143 containerd[1571]: time="2025-09-13T00:00:50.900131152Z" level=error msg="StopPodSandbox for \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\" failed" error="failed to destroy network for sandbox \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.900461 kubelet[2662]: E0913 00:00:50.899914 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13"
Sep 13 00:00:50.900461 kubelet[2662]: E0913 00:00:50.899989 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13"}
Sep 13 00:00:50.900461 kubelet[2662]: E0913 00:00:50.900067 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e3bb006-8c83-4e24-90c2-984bdd355c97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 13 00:00:50.900461 kubelet[2662]: E0913 00:00:50.900098 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e3bb006-8c83-4e24-90c2-984bdd355c97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-98zzs" podUID="4e3bb006-8c83-4e24-90c2-984bdd355c97"
Sep 13 00:00:50.903347 kubelet[2662]: E0913 00:00:50.902954 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c"
Sep 13 00:00:50.903347 kubelet[2662]: E0913 00:00:50.903023 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c"}
Sep 13 00:00:50.903347 kubelet[2662]: E0913 00:00:50.903250 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5cffd678-440b-46f8-bc05-0ed615dceeef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 13 00:00:50.903347 kubelet[2662]: E0913 00:00:50.903308 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5cffd678-440b-46f8-bc05-0ed615dceeef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d7979648-xww86" podUID="5cffd678-440b-46f8-bc05-0ed615dceeef"
Sep 13 00:00:50.908734 containerd[1571]: time="2025-09-13T00:00:50.908675878Z" level=error msg="StopPodSandbox for \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\" failed" error="failed to destroy network for sandbox \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:00:50.909194 kubelet[2662]: E0913 00:00:50.909158 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b"
Sep 13 00:00:50.909372 kubelet[2662]: E0913 00:00:50.909329 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b"}
Sep 13 00:00:50.909534 kubelet[2662]: E0913 00:00:50.909499 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3c5674c-deb9-482b-8272-20808de1f68c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
running and has mounted /var/lib/calico/\"" Sep 13 00:00:50.909686 kubelet[2662]: E0913 00:00:50.909663 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3c5674c-deb9-482b-8272-20808de1f68c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9cb7ff6b6-87ccw" podUID="d3c5674c-deb9-482b-8272-20808de1f68c" Sep 13 00:00:50.911679 containerd[1571]: time="2025-09-13T00:00:50.911629280Z" level=error msg="StopPodSandbox for \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\" failed" error="failed to destroy network for sandbox \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:50.912115 kubelet[2662]: E0913 00:00:50.911868 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Sep 13 00:00:50.912115 kubelet[2662]: E0913 00:00:50.911909 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06"} Sep 13 00:00:50.912115 kubelet[2662]: E0913 00:00:50.911956 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f317c9d-be74-45c8-9172-3b8f509591a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:50.912115 kubelet[2662]: E0913 00:00:50.912016 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f317c9d-be74-45c8-9172-3b8f509591a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-wc4bn" podUID="7f317c9d-be74-45c8-9172-3b8f509591a6" Sep 13 00:00:50.918895 containerd[1571]: time="2025-09-13T00:00:50.918832289Z" level=error msg="StopPodSandbox for \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\" failed" error="failed to destroy network for sandbox \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:50.919778 kubelet[2662]: E0913 00:00:50.919738 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Sep 13 00:00:50.919915 kubelet[2662]: E0913 00:00:50.919889 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63"} Sep 13 00:00:50.920033 kubelet[2662]: E0913 00:00:50.920017 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e037d004-e75e-4824-a87e-fd02a1b5adaa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:50.920251 kubelet[2662]: E0913 00:00:50.920224 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e037d004-e75e-4824-a87e-fd02a1b5adaa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d7979648-crgkm" podUID="e037d004-e75e-4824-a87e-fd02a1b5adaa" Sep 13 00:00:50.921583 containerd[1571]: time="2025-09-13T00:00:50.921544328Z" level=error msg="StopPodSandbox for \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\" failed" error="failed to destroy network for sandbox \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:50.921844 kubelet[2662]: E0913 00:00:50.921803 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Sep 13 00:00:50.921908 kubelet[2662]: E0913 00:00:50.921847 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f"} Sep 13 00:00:50.921908 kubelet[2662]: E0913 00:00:50.921878 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33726d39-cb2d-42da-a848-4caa290bc7f4\" with KillPodSandboxError: \"rpc error: code 
= Unknown desc = failed to destroy network for sandbox \\\"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:50.922013 kubelet[2662]: E0913 00:00:50.921904 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33726d39-cb2d-42da-a848-4caa290bc7f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b8858cf9b-mrc6w" podUID="33726d39-cb2d-42da-a848-4caa290bc7f4" Sep 13 00:00:50.924724 containerd[1571]: time="2025-09-13T00:00:50.924678099Z" level=error msg="StopPodSandbox for \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\" failed" error="failed to destroy network for sandbox \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:50.924872 kubelet[2662]: E0913 00:00:50.924842 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Sep 13 00:00:50.924953 kubelet[2662]: E0913 00:00:50.924880 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437"} Sep 13 00:00:50.924953 kubelet[2662]: E0913 00:00:50.924910 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4868fae-b2aa-4106-8bb0-b85ce9739178\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:50.924953 kubelet[2662]: E0913 00:00:50.924931 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4868fae-b2aa-4106-8bb0-b85ce9739178\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7tb5v" podUID="d4868fae-b2aa-4106-8bb0-b85ce9739178" Sep 13 00:00:55.382332 systemd[1]: Started sshd@10-10.0.0.22:22-10.0.0.1:34568.service - OpenSSH per-connection server daemon (10.0.0.1:34568). 
Sep 13 00:00:55.421498 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 34568 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:00:55.423665 sshd[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:00:55.427841 systemd-logind[1545]: New session 11 of user core. Sep 13 00:00:55.440347 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:00:55.546934 sshd[4156]: pam_unix(sshd:session): session closed for user core Sep 13 00:00:55.551297 systemd[1]: sshd@10-10.0.0.22:22-10.0.0.1:34568.service: Deactivated successfully. Sep 13 00:00:55.554101 systemd-logind[1545]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:00:55.554371 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:00:55.555333 systemd-logind[1545]: Removed session 11. Sep 13 00:01:00.560542 systemd[1]: Started sshd@11-10.0.0.22:22-10.0.0.1:57332.service - OpenSSH per-connection server daemon (10.0.0.1:57332). Sep 13 00:01:00.605303 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 57332 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:00.607330 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:00.614327 systemd-logind[1545]: New session 12 of user core. Sep 13 00:01:00.621536 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:01:00.805379 sshd[4176]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:00.810689 systemd[1]: sshd@11-10.0.0.22:22-10.0.0.1:57332.service: Deactivated successfully. Sep 13 00:01:00.816555 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:01:00.817557 systemd-logind[1545]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:01:00.818724 systemd-logind[1545]: Removed session 12. 
Sep 13 00:01:01.313640 containerd[1571]: time="2025-09-13T00:01:01.313263352Z" level=info msg="StopPodSandbox for \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\"" Sep 13 00:01:01.417342 containerd[1571]: time="2025-09-13T00:01:01.417275587Z" level=error msg="StopPodSandbox for \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\" failed" error="failed to destroy network for sandbox \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:01:01.418265 kubelet[2662]: E0913 00:01:01.418220 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Sep 13 00:01:01.418736 kubelet[2662]: E0913 00:01:01.418278 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63"} Sep 13 00:01:01.418736 kubelet[2662]: E0913 00:01:01.418319 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e037d004-e75e-4824-a87e-fd02a1b5adaa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:01:01.418736 kubelet[2662]: E0913 00:01:01.418346 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e037d004-e75e-4824-a87e-fd02a1b5adaa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d7979648-crgkm" podUID="e037d004-e75e-4824-a87e-fd02a1b5adaa" Sep 13 00:01:02.404788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount697232518.mount: Deactivated successfully. 
Sep 13 00:01:03.313451 containerd[1571]: time="2025-09-13T00:01:03.313379289Z" level=info msg="StopPodSandbox for \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\"" Sep 13 00:01:03.314194 containerd[1571]: time="2025-09-13T00:01:03.313510803Z" level=info msg="StopPodSandbox for \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\"" Sep 13 00:01:03.328077 containerd[1571]: time="2025-09-13T00:01:03.327737611Z" level=info msg="StopPodSandbox for \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\"" Sep 13 00:01:03.358190 containerd[1571]: time="2025-09-13T00:01:03.358111711Z" level=error msg="StopPodSandbox for \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\" failed" error="failed to destroy network for sandbox \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:01:03.358553 kubelet[2662]: E0913 00:01:03.358475 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Sep 13 00:01:03.359103 kubelet[2662]: E0913 00:01:03.358572 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c"} Sep 13 00:01:03.359103 kubelet[2662]: E0913 00:01:03.358627 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5cffd678-440b-46f8-bc05-0ed615dceeef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:01:03.359103 kubelet[2662]: E0913 00:01:03.358659 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5cffd678-440b-46f8-bc05-0ed615dceeef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d7979648-xww86" podUID="5cffd678-440b-46f8-bc05-0ed615dceeef" Sep 13 00:01:04.313305 containerd[1571]: time="2025-09-13T00:01:04.313247133Z" level=info msg="StopPodSandbox for \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\"" Sep 13 00:01:04.314201 containerd[1571]: time="2025-09-13T00:01:04.313571400Z" level=info msg="StopPodSandbox for \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\"" Sep 13 00:01:04.330643 containerd[1571]: time="2025-09-13T00:01:04.330236998Z" level=info msg="StopPodSandbox for 
\"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\"" Sep 13 00:01:04.373027 containerd[1571]: time="2025-09-13T00:01:04.372958191Z" level=error msg="StopPodSandbox for \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\" failed" error="failed to destroy network for sandbox \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:01:04.373413 kubelet[2662]: E0913 00:01:04.373363 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Sep 13 00:01:04.373805 kubelet[2662]: E0913 00:01:04.373426 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db"} Sep 13 00:01:04.373805 kubelet[2662]: E0913 00:01:04.373461 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba4c914c-84cd-4650-afa0-e3d23d56f99f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:01:04.373805 kubelet[2662]: E0913 00:01:04.373490 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba4c914c-84cd-4650-afa0-e3d23d56f99f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fqbfr" podUID="ba4c914c-84cd-4650-afa0-e3d23d56f99f" Sep 13 00:01:04.374021 containerd[1571]: time="2025-09-13T00:01:04.373705406Z" level=error msg="StopPodSandbox for \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\" failed" error="failed to destroy network for sandbox \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:01:04.374072 kubelet[2662]: E0913 00:01:04.373901 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Sep 13 00:01:04.374072 
kubelet[2662]: E0913 00:01:04.373927 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b"} Sep 13 00:01:04.374072 kubelet[2662]: E0913 00:01:04.373948 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3c5674c-deb9-482b-8272-20808de1f68c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:01:04.374072 kubelet[2662]: E0913 00:01:04.373971 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3c5674c-deb9-482b-8272-20808de1f68c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9cb7ff6b6-87ccw" podUID="d3c5674c-deb9-482b-8272-20808de1f68c" Sep 13 00:01:05.817484 systemd[1]: Started sshd@12-10.0.0.22:22-10.0.0.1:57342.service - OpenSSH per-connection server daemon (10.0.0.1:57342). Sep 13 00:01:06.363729 containerd[1571]: time="2025-09-13T00:01:06.313316612Z" level=info msg="StopPodSandbox for \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\"" Sep 13 00:01:06.369758 containerd[1571]: time="2025-09-13T00:01:06.368805450Z" level=error msg="StopPodSandbox for \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\" failed" error="failed to destroy network for sandbox \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:01:06.369889 kubelet[2662]: E0913 00:01:06.369181 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Sep 13 00:01:06.369889 kubelet[2662]: E0913 00:01:06.369273 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06"} Sep 13 00:01:06.369889 kubelet[2662]: E0913 00:01:06.369334 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f317c9d-be74-45c8-9172-3b8f509591a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Sep 13 00:01:06.369889 kubelet[2662]: E0913 00:01:06.369360 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f317c9d-be74-45c8-9172-3b8f509591a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-wc4bn" podUID="7f317c9d-be74-45c8-9172-3b8f509591a6" Sep 13 00:01:06.370922 containerd[1571]: time="2025-09-13T00:01:06.370777084Z" level=error msg="StopPodSandbox for \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\" failed" error="failed to destroy network for sandbox \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:01:06.371011 kubelet[2662]: E0913 00:01:06.370986 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Sep 13 00:01:06.371080 kubelet[2662]: E0913 00:01:06.371016 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437"} Sep 13 00:01:06.371080 kubelet[2662]: E0913 00:01:06.371055 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4868fae-b2aa-4106-8bb0-b85ce9739178\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:01:06.371080 kubelet[2662]: E0913 00:01:06.371074 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4868fae-b2aa-4106-8bb0-b85ce9739178\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7tb5v" podUID="d4868fae-b2aa-4106-8bb0-b85ce9739178" Sep 13 00:01:06.373084 containerd[1571]: time="2025-09-13T00:01:06.373007518Z" level=error msg="StopPodSandbox for \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\" failed" error="failed to destroy network for sandbox \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Sep 13 00:01:06.373650 kubelet[2662]: E0913 00:01:06.373582 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Sep 13 00:01:06.373836 kubelet[2662]: E0913 00:01:06.373803 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f"} Sep 13 00:01:06.373973 kubelet[2662]: E0913 00:01:06.373936 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33726d39-cb2d-42da-a848-4caa290bc7f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:01:06.374155 kubelet[2662]: E0913 00:01:06.374124 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33726d39-cb2d-42da-a848-4caa290bc7f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b8858cf9b-mrc6w" podUID="33726d39-cb2d-42da-a848-4caa290bc7f4" Sep 13 00:01:06.378571 containerd[1571]: time="2025-09-13T00:01:06.378509604Z" level=error msg="StopPodSandbox for \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\" failed" error="failed to destroy network for sandbox \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:01:06.378827 kubelet[2662]: E0913 00:01:06.378778 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Sep 13 00:01:06.378881 kubelet[2662]: E0913 00:01:06.378836 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13"} Sep 13 00:01:06.378906 kubelet[2662]: E0913 00:01:06.378874 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e3bb006-8c83-4e24-90c2-984bdd355c97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:01:06.378968 kubelet[2662]: E0913 00:01:06.378905 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e3bb006-8c83-4e24-90c2-984bdd355c97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-98zzs" podUID="4e3bb006-8c83-4e24-90c2-984bdd355c97" Sep 13 00:01:06.395506 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 57342 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:06.397610 sshd[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:06.402913 systemd-logind[1545]: New session 13 of user core. Sep 13 00:01:06.420006 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:01:06.600742 containerd[1571]: time="2025-09-13T00:01:06.600635014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:06.601769 containerd[1571]: time="2025-09-13T00:01:06.601629072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:01:06.603290 containerd[1571]: time="2025-09-13T00:01:06.603142873Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:06.603643 sshd[4318]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:06.607230 containerd[1571]: time="2025-09-13T00:01:06.606653757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:06.607305 containerd[1571]: time="2025-09-13T00:01:06.607265308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 16.819155436s" Sep 13 00:01:06.607346 containerd[1571]: time="2025-09-13T00:01:06.607306437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:01:06.619261 containerd[1571]: time="2025-09-13T00:01:06.619023295Z" level=info msg="CreateContainer within sandbox \"6bd5c0230ea6c75cbfdb4f9126e5ca01c40156add257e4813e858decf9032876\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:01:06.619523 systemd[1]: Started sshd@13-10.0.0.22:22-10.0.0.1:57354.service - OpenSSH per-connection server daemon (10.0.0.1:57354). 
Sep 13 00:01:06.630059 systemd[1]: sshd@12-10.0.0.22:22-10.0.0.1:57342.service: Deactivated successfully. Sep 13 00:01:06.633994 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:01:06.637578 systemd-logind[1545]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:01:06.650157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1502592000.mount: Deactivated successfully. Sep 13 00:01:06.651304 systemd-logind[1545]: Removed session 13. Sep 13 00:01:06.651831 containerd[1571]: time="2025-09-13T00:01:06.651789699Z" level=info msg="CreateContainer within sandbox \"6bd5c0230ea6c75cbfdb4f9126e5ca01c40156add257e4813e858decf9032876\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"be1a6533f93b59a7b5ab63b53d74261ee503fc8e19ebaf7e16458809d804ba08\"" Sep 13 00:01:06.652836 containerd[1571]: time="2025-09-13T00:01:06.652791302Z" level=info msg="StartContainer for \"be1a6533f93b59a7b5ab63b53d74261ee503fc8e19ebaf7e16458809d804ba08\"" Sep 13 00:01:06.666696 sshd[4356]: Accepted publickey for core from 10.0.0.1 port 57354 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:06.669722 sshd[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:06.676421 systemd-logind[1545]: New session 14 of user core. Sep 13 00:01:06.688667 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:01:06.748375 containerd[1571]: time="2025-09-13T00:01:06.748307434Z" level=info msg="StartContainer for \"be1a6533f93b59a7b5ab63b53d74261ee503fc8e19ebaf7e16458809d804ba08\" returns successfully" Sep 13 00:01:07.017917 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:01:07.020434 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 13 00:01:07.020475 kubelet[2662]: I0913 00:01:07.020202 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x6l8h" podStartSLOduration=1.83072305 podStartE2EDuration="45.020180947s" podCreationTimestamp="2025-09-13 00:00:22 +0000 UTC" firstStartedPulling="2025-09-13 00:00:23.418915569 +0000 UTC m=+21.212764015" lastFinishedPulling="2025-09-13 00:01:06.608373466 +0000 UTC m=+64.402221912" observedRunningTime="2025-09-13 00:01:07.019821133 +0000 UTC m=+64.813669579" watchObservedRunningTime="2025-09-13 00:01:07.020180947 +0000 UTC m=+64.814029403" Sep 13 00:01:07.051682 sshd[4356]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:07.057543 systemd[1]: Started sshd@14-10.0.0.22:22-10.0.0.1:57370.service - OpenSSH per-connection server daemon (10.0.0.1:57370). Sep 13 00:01:07.070330 systemd-logind[1545]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:01:07.073204 systemd[1]: sshd@13-10.0.0.22:22-10.0.0.1:57354.service: Deactivated successfully. Sep 13 00:01:07.084315 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:01:07.094034 systemd-logind[1545]: Removed session 14. Sep 13 00:01:07.131905 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 57370 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:07.135096 sshd[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:07.140828 systemd-logind[1545]: New session 15 of user core. Sep 13 00:01:07.146332 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 00:01:07.341142 sshd[4420]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:07.347033 systemd[1]: sshd@14-10.0.0.22:22-10.0.0.1:57370.service: Deactivated successfully. Sep 13 00:01:07.351508 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:01:07.352310 systemd-logind[1545]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:01:07.353461 systemd-logind[1545]: Removed session 15. Sep 13 00:01:07.687027 containerd[1571]: time="2025-09-13T00:01:07.686847390Z" level=info msg="StopPodSandbox for \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\"" Sep 13 00:01:08.007770 containerd[1571]: 2025-09-13 00:01:07.797 [INFO][4481] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Sep 13 00:01:08.007770 containerd[1571]: 2025-09-13 00:01:07.797 [INFO][4481] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" iface="eth0" netns="/var/run/netns/cni-ba50a002-f5f5-b628-d70a-e3a19b5994d0" Sep 13 00:01:08.007770 containerd[1571]: 2025-09-13 00:01:07.798 [INFO][4481] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" iface="eth0" netns="/var/run/netns/cni-ba50a002-f5f5-b628-d70a-e3a19b5994d0" Sep 13 00:01:08.007770 containerd[1571]: 2025-09-13 00:01:07.798 [INFO][4481] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" iface="eth0" netns="/var/run/netns/cni-ba50a002-f5f5-b628-d70a-e3a19b5994d0" Sep 13 00:01:08.007770 containerd[1571]: 2025-09-13 00:01:07.798 [INFO][4481] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Sep 13 00:01:08.007770 containerd[1571]: 2025-09-13 00:01:07.798 [INFO][4481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Sep 13 00:01:08.007770 containerd[1571]: 2025-09-13 00:01:07.980 [INFO][4490] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" HandleID="k8s-pod-network.990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Workload="localhost-k8s-whisker--5b8858cf9b--mrc6w-eth0" Sep 13 00:01:08.007770 containerd[1571]: 2025-09-13 00:01:07.981 [INFO][4490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:08.007770 containerd[1571]: 2025-09-13 00:01:07.982 [INFO][4490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:08.007770 containerd[1571]: 2025-09-13 00:01:07.998 [WARNING][4490] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" HandleID="k8s-pod-network.990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Workload="localhost-k8s-whisker--5b8858cf9b--mrc6w-eth0" Sep 13 00:01:08.007770 containerd[1571]: 2025-09-13 00:01:07.998 [INFO][4490] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" HandleID="k8s-pod-network.990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Workload="localhost-k8s-whisker--5b8858cf9b--mrc6w-eth0" Sep 13 00:01:08.007770 containerd[1571]: 2025-09-13 00:01:08.000 [INFO][4490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:08.007770 containerd[1571]: 2025-09-13 00:01:08.004 [INFO][4481] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Sep 13 00:01:08.011523 containerd[1571]: time="2025-09-13T00:01:08.007934029Z" level=info msg="TearDown network for sandbox \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\" successfully" Sep 13 00:01:08.011523 containerd[1571]: time="2025-09-13T00:01:08.007957203Z" level=info msg="StopPodSandbox for \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\" returns successfully" Sep 13 00:01:08.012113 systemd[1]: run-netns-cni\x2dba50a002\x2df5f5\x2db628\x2dd70a\x2de3a19b5994d0.mount: Deactivated successfully. Sep 13 00:01:08.125676 kubelet[2662]: I0913 00:01:08.125598 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/33726d39-cb2d-42da-a848-4caa290bc7f4-whisker-backend-key-pair\") pod \"33726d39-cb2d-42da-a848-4caa290bc7f4\" (UID: \"33726d39-cb2d-42da-a848-4caa290bc7f4\") " Sep 13 00:01:08.125676 kubelet[2662]: I0913 00:01:08.125662 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33726d39-cb2d-42da-a848-4caa290bc7f4-whisker-ca-bundle\") pod \"33726d39-cb2d-42da-a848-4caa290bc7f4\" (UID: \"33726d39-cb2d-42da-a848-4caa290bc7f4\") " Sep 13 00:01:08.125676 kubelet[2662]: I0913 00:01:08.125690 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjgbq\" (UniqueName: \"kubernetes.io/projected/33726d39-cb2d-42da-a848-4caa290bc7f4-kube-api-access-vjgbq\") pod \"33726d39-cb2d-42da-a848-4caa290bc7f4\" (UID: \"33726d39-cb2d-42da-a848-4caa290bc7f4\") " Sep 13 00:01:08.126600 kubelet[2662]: I0913 00:01:08.126535 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33726d39-cb2d-42da-a848-4caa290bc7f4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "33726d39-cb2d-42da-a848-4caa290bc7f4" (UID: "33726d39-cb2d-42da-a848-4caa290bc7f4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:01:08.129710 kubelet[2662]: I0913 00:01:08.129674 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33726d39-cb2d-42da-a848-4caa290bc7f4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "33726d39-cb2d-42da-a848-4caa290bc7f4" (UID: "33726d39-cb2d-42da-a848-4caa290bc7f4"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:01:08.129769 kubelet[2662]: I0913 00:01:08.129674 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33726d39-cb2d-42da-a848-4caa290bc7f4-kube-api-access-vjgbq" (OuterVolumeSpecName: "kube-api-access-vjgbq") pod "33726d39-cb2d-42da-a848-4caa290bc7f4" (UID: "33726d39-cb2d-42da-a848-4caa290bc7f4"). InnerVolumeSpecName "kube-api-access-vjgbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:01:08.132382 systemd[1]: var-lib-kubelet-pods-33726d39\x2dcb2d\x2d42da\x2da848\x2d4caa290bc7f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvjgbq.mount: Deactivated successfully. Sep 13 00:01:08.132624 systemd[1]: var-lib-kubelet-pods-33726d39\x2dcb2d\x2d42da\x2da848\x2d4caa290bc7f4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:01:08.226364 kubelet[2662]: I0913 00:01:08.226298 2662 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/33726d39-cb2d-42da-a848-4caa290bc7f4-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:01:08.226364 kubelet[2662]: I0913 00:01:08.226343 2662 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33726d39-cb2d-42da-a848-4caa290bc7f4-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:01:08.226364 kubelet[2662]: I0913 00:01:08.226351 2662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjgbq\" (UniqueName: \"kubernetes.io/projected/33726d39-cb2d-42da-a848-4caa290bc7f4-kube-api-access-vjgbq\") on node \"localhost\" DevicePath \"\"" Sep 13 00:01:09.259081 kernel: bpftool[4668]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:01:09.536797 systemd-networkd[1247]: vxlan.calico: Link UP Sep 13 00:01:09.536808 systemd-networkd[1247]: vxlan.calico: Gained carrier Sep 13 00:01:10.316219 kubelet[2662]: I0913 00:01:10.316154 2662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33726d39-cb2d-42da-a848-4caa290bc7f4" path="/var/lib/kubelet/pods/33726d39-cb2d-42da-a848-4caa290bc7f4/volumes" Sep 13 00:01:11.534316 systemd-networkd[1247]: vxlan.calico: Gained IPv6LL Sep 13 00:01:12.353309 systemd[1]: Started sshd@15-10.0.0.22:22-10.0.0.1:55194.service - OpenSSH per-connection server daemon (10.0.0.1:55194). Sep 13 00:01:12.398306 sshd[4743]: Accepted publickey for core from 10.0.0.1 port 55194 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:12.400258 sshd[4743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:12.404907 systemd-logind[1545]: New session 16 of user core. Sep 13 00:01:12.414336 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:01:12.572472 sshd[4743]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:12.577882 systemd[1]: sshd@15-10.0.0.22:22-10.0.0.1:55194.service: Deactivated successfully. Sep 13 00:01:12.580943 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:01:12.580960 systemd-logind[1545]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:01:12.583757 systemd-logind[1545]: Removed session 16. 
Sep 13 00:01:14.313207 containerd[1571]: time="2025-09-13T00:01:14.312864120Z" level=info msg="StopPodSandbox for \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\"" Sep 13 00:01:14.313207 containerd[1571]: time="2025-09-13T00:01:14.313165078Z" level=info msg="StopPodSandbox for \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\"" Sep 13 00:01:14.682501 containerd[1571]: 2025-09-13 00:01:14.580 [INFO][4804] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Sep 13 00:01:14.682501 containerd[1571]: 2025-09-13 00:01:14.580 [INFO][4804] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" iface="eth0" netns="/var/run/netns/cni-55384663-40da-57ff-b362-5824a583781e" Sep 13 00:01:14.682501 containerd[1571]: 2025-09-13 00:01:14.581 [INFO][4804] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" iface="eth0" netns="/var/run/netns/cni-55384663-40da-57ff-b362-5824a583781e" Sep 13 00:01:14.682501 containerd[1571]: 2025-09-13 00:01:14.581 [INFO][4804] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" iface="eth0" netns="/var/run/netns/cni-55384663-40da-57ff-b362-5824a583781e" Sep 13 00:01:14.682501 containerd[1571]: 2025-09-13 00:01:14.581 [INFO][4804] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Sep 13 00:01:14.682501 containerd[1571]: 2025-09-13 00:01:14.581 [INFO][4804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Sep 13 00:01:14.682501 containerd[1571]: 2025-09-13 00:01:14.665 [INFO][4820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" HandleID="k8s-pod-network.94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Workload="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:01:14.682501 containerd[1571]: 2025-09-13 00:01:14.665 [INFO][4820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:14.682501 containerd[1571]: 2025-09-13 00:01:14.665 [INFO][4820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:14.682501 containerd[1571]: 2025-09-13 00:01:14.674 [WARNING][4820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" HandleID="k8s-pod-network.94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Workload="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:01:14.682501 containerd[1571]: 2025-09-13 00:01:14.675 [INFO][4820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" HandleID="k8s-pod-network.94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Workload="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:01:14.682501 containerd[1571]: 2025-09-13 00:01:14.676 [INFO][4820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:01:14.682501 containerd[1571]: 2025-09-13 00:01:14.679 [INFO][4804] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Sep 13 00:01:14.685264 containerd[1571]: time="2025-09-13T00:01:14.685186467Z" level=info msg="TearDown network for sandbox \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\" successfully" Sep 13 00:01:14.685264 containerd[1571]: time="2025-09-13T00:01:14.685232946Z" level=info msg="StopPodSandbox for \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\" returns successfully" Sep 13 00:01:14.685380 systemd[1]: run-netns-cni\x2d55384663\x2d40da\x2d57ff\x2db362\x2d5824a583781e.mount: Deactivated successfully. Sep 13 00:01:14.686251 containerd[1571]: time="2025-09-13T00:01:14.686203149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7979648-crgkm,Uid:e037d004-e75e-4824-a87e-fd02a1b5adaa,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:01:14.691211 containerd[1571]: 2025-09-13 00:01:14.581 [INFO][4805] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Sep 13 00:01:14.691211 containerd[1571]: 2025-09-13 00:01:14.581 [INFO][4805] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" iface="eth0" netns="/var/run/netns/cni-f5440ad2-7573-5bb1-1b1e-fb95444f7590" Sep 13 00:01:14.691211 containerd[1571]: 2025-09-13 00:01:14.581 [INFO][4805] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" iface="eth0" netns="/var/run/netns/cni-f5440ad2-7573-5bb1-1b1e-fb95444f7590" Sep 13 00:01:14.691211 containerd[1571]: 2025-09-13 00:01:14.582 [INFO][4805] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" iface="eth0" netns="/var/run/netns/cni-f5440ad2-7573-5bb1-1b1e-fb95444f7590" Sep 13 00:01:14.691211 containerd[1571]: 2025-09-13 00:01:14.582 [INFO][4805] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Sep 13 00:01:14.691211 containerd[1571]: 2025-09-13 00:01:14.582 [INFO][4805] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Sep 13 00:01:14.691211 containerd[1571]: 2025-09-13 00:01:14.670 [INFO][4822] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" HandleID="k8s-pod-network.b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Workload="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:01:14.691211 containerd[1571]: 2025-09-13 00:01:14.670 [INFO][4822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:14.691211 containerd[1571]: 2025-09-13 00:01:14.676 [INFO][4822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:14.691211 containerd[1571]: 2025-09-13 00:01:14.683 [WARNING][4822] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" HandleID="k8s-pod-network.b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Workload="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:01:14.691211 containerd[1571]: 2025-09-13 00:01:14.683 [INFO][4822] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" HandleID="k8s-pod-network.b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Workload="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:01:14.691211 containerd[1571]: 2025-09-13 00:01:14.685 [INFO][4822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:14.691211 containerd[1571]: 2025-09-13 00:01:14.688 [INFO][4805] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Sep 13 00:01:14.691583 containerd[1571]: time="2025-09-13T00:01:14.691377798Z" level=info msg="TearDown network for sandbox \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\" successfully" Sep 13 00:01:14.691583 containerd[1571]: time="2025-09-13T00:01:14.691400332Z" level=info msg="StopPodSandbox for \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\" returns successfully" Sep 13 00:01:14.692132 containerd[1571]: time="2025-09-13T00:01:14.692091849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7979648-xww86,Uid:5cffd678-440b-46f8-bc05-0ed615dceeef,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:01:14.694414 systemd[1]: run-netns-cni\x2df5440ad2\x2d7573\x2d5bb1\x2d1b1e\x2dfb95444f7590.mount: Deactivated successfully. Sep 13 00:01:15.178759 systemd-networkd[1247]: cali2917110676a: Link UP Sep 13 00:01:15.180258 systemd-networkd[1247]: cali2917110676a: Gained carrier Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.070 [INFO][4845] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0 calico-apiserver-57d7979648- calico-apiserver e037d004-e75e-4824-a87e-fd02a1b5adaa 1150 0 2025-09-13 00:00:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57d7979648 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57d7979648-crgkm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2917110676a [] [] }} ContainerID="f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-crgkm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--crgkm-" Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.070 [INFO][4845] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-crgkm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.099 [INFO][4874] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" 
HandleID="k8s-pod-network.f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" Workload="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.099 [INFO][4874] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" HandleID="k8s-pod-network.f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" Workload="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a33f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57d7979648-crgkm", "timestamp":"2025-09-13 00:01:15.099457183 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.099 [INFO][4874] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.099 [INFO][4874] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.099 [INFO][4874] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.120 [INFO][4874] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" host="localhost" Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.140 [INFO][4874] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.146 [INFO][4874] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.148 [INFO][4874] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.150 [INFO][4874] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.150 [INFO][4874] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" host="localhost" Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.151 [INFO][4874] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519 Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.161 [INFO][4874] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" host="localhost" Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.171 [INFO][4874] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" host="localhost" Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.172 [INFO][4874] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" host="localhost" Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.172 [INFO][4874] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:15.202220 containerd[1571]: 2025-09-13 00:01:15.172 [INFO][4874] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" HandleID="k8s-pod-network.f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" Workload="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:01:15.203115 containerd[1571]: 2025-09-13 00:01:15.174 [INFO][4845] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-crgkm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0", GenerateName:"calico-apiserver-57d7979648-", Namespace:"calico-apiserver", SelfLink:"", UID:"e037d004-e75e-4824-a87e-fd02a1b5adaa", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7979648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57d7979648-crgkm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2917110676a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:15.203115 containerd[1571]: 2025-09-13 00:01:15.175 [INFO][4845] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-crgkm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:01:15.203115 containerd[1571]: 2025-09-13 00:01:15.175 [INFO][4845] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2917110676a ContainerID="f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-crgkm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:01:15.203115 containerd[1571]: 2025-09-13 00:01:15.179 [INFO][4845] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-crgkm" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:01:15.203115 containerd[1571]: 2025-09-13 00:01:15.182 [INFO][4845] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-crgkm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0", GenerateName:"calico-apiserver-57d7979648-", Namespace:"calico-apiserver", SelfLink:"", UID:"e037d004-e75e-4824-a87e-fd02a1b5adaa", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7979648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519", Pod:"calico-apiserver-57d7979648-crgkm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2917110676a", MAC:"a6:74:61:7b:12:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:15.203115 containerd[1571]: 2025-09-13 00:01:15.198 [INFO][4845] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-crgkm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:01:15.240876 containerd[1571]: time="2025-09-13T00:01:15.240728136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:15.240876 containerd[1571]: time="2025-09-13T00:01:15.240796095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:15.240876 containerd[1571]: time="2025-09-13T00:01:15.240810413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:15.241213 containerd[1571]: time="2025-09-13T00:01:15.240925674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:15.277746 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:01:15.290538 systemd-networkd[1247]: cali273ab2ab71e: Link UP Sep 13 00:01:15.294357 systemd-networkd[1247]: cali273ab2ab71e: Gained carrier Sep 13 00:01:15.313198 kubelet[2662]: E0913 00:01:15.313108 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:01:15.318462 containerd[1571]: time="2025-09-13T00:01:15.318417335Z" level=info msg="StopPodSandbox for \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\"" Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.077 [INFO][4858] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57d7979648--xww86-eth0 calico-apiserver-57d7979648- calico-apiserver 5cffd678-440b-46f8-bc05-0ed615dceeef 1149 0 2025-09-13 00:00:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57d7979648 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57d7979648-xww86 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali273ab2ab71e [] [] }} ContainerID="075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-xww86" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--xww86-" Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.077 [INFO][4858] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-xww86" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.139 [INFO][4882] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" HandleID="k8s-pod-network.075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" Workload="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.139 [INFO][4882] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" HandleID="k8s-pod-network.075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" Workload="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57d7979648-xww86", "timestamp":"2025-09-13 00:01:15.139651531 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.140 [INFO][4882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
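The kubelet "Nameserver limits exceeded" warning above is a host-configuration note, not a Calico error: glibc's stub resolver only honors the first three nameserver lines in resolv.conf (its MAXNS limit), kubelet applies the same cap when building DNS config, and the "applied nameserver line" shows the three entries that survived the trim. A rough pure-stdlib sketch of that trimming, assuming the conventional /etc/resolv.conf path and the limit of three (neither is taken from this host beyond what the log line implies):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNS mirrors glibc's MAXNS: only the first three nameserver
    // lines are honored. kubelet logs "Nameserver limits exceeded"
    // when the host file lists more than that.
    const maxNS = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNS {
            fmt.Printf("applied: %v\n", servers[:maxNS]) // e.g. [1.1.1.1 1.0.0.1 8.8.8.8]
            fmt.Printf("omitted: %v\n", servers[maxNS:])
        } else {
            fmt.Printf("applied: %v\n", servers)
        }
    }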
Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.172 [INFO][4882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.172 [INFO][4882] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.219 [INFO][4882] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" host="localhost" Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.228 [INFO][4882] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.250 [INFO][4882] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.254 [INFO][4882] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.257 [INFO][4882] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.257 [INFO][4882] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" host="localhost" Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.259 [INFO][4882] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.265 [INFO][4882] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" host="localhost" Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.278 [INFO][4882] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" host="localhost" Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.279 [INFO][4882] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" host="localhost" Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.279 [INFO][4882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
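The ipam.go and ipam_plugin.go records repeat the same allocation sequence for every pod on this node: acquire the host-wide IPAM lock, confirm the host's affinity to the 192.168.88.128/26 block, claim the next free address in it, record a handle for the container, write the block back, and release the lock. A minimal in-memory sketch of that pattern, assuming a single block and no address reuse (Calico's real implementation persists blocks and handles to its datastore and reclaims freed addresses):

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // blockIPAM is a toy allocator tracing the log's sequence; the
    // mutex stands in for the host-wide IPAM lock.
    type blockIPAM struct {
        mu     sync.Mutex
        block  netip.Prefix // e.g. 192.168.88.128/26, affine to this host
        next   netip.Addr   // next candidate address
        handle map[string]netip.Addr
    }

    func newBlockIPAM(cidr string) *blockIPAM {
        p := netip.MustParsePrefix(cidr)
        return &blockIPAM{
            block:  p,
            next:   p.Addr().Next(), // skip the network address itself
            handle: map[string]netip.Addr{},
        }
    }

    func (b *blockIPAM) AutoAssign(handleID string) (netip.Addr, error) {
        b.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer b.mu.Unlock() // "Released host-wide IPAM lock."
        a := b.next
        if !b.block.Contains(a) {
            return netip.Addr{}, fmt.Errorf("block %s exhausted", b.block)
        }
        b.handle[handleID] = a // "Creating new handle: k8s-pod-network...."
        b.next = a.Next()      // "Writing block in order to claim IPs" persists this step in Calico
        return a, nil
    }

    func main() {
        ipam := newBlockIPAM("192.168.88.128/26")
        for _, pod := range []string{"crgkm", "xww86", "fqbfr"} {
            ip, _ := ipam.AutoAssign("k8s-pod-network." + pod)
            fmt.Println(pod, "->", ip)
        }
    }

Run as written, this hands out 192.168.88.129, .130 and .131, matching the addresses the log assigns to the two apiserver pods here and to csi-node-driver-fqbfr a little further down.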
Sep 13 00:01:15.324837 containerd[1571]: 2025-09-13 00:01:15.279 [INFO][4882] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" HandleID="k8s-pod-network.075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" Workload="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:01:15.325561 containerd[1571]: 2025-09-13 00:01:15.284 [INFO][4858] cni-plugin/k8s.go 418: Populated endpoint ContainerID="075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-xww86" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d7979648--xww86-eth0", GenerateName:"calico-apiserver-57d7979648-", Namespace:"calico-apiserver", SelfLink:"", UID:"5cffd678-440b-46f8-bc05-0ed615dceeef", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7979648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57d7979648-xww86", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali273ab2ab71e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:15.325561 containerd[1571]: 2025-09-13 00:01:15.284 [INFO][4858] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-xww86" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:01:15.325561 containerd[1571]: 2025-09-13 00:01:15.284 [INFO][4858] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali273ab2ab71e ContainerID="075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-xww86" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:01:15.325561 containerd[1571]: 2025-09-13 00:01:15.294 [INFO][4858] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-xww86" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:01:15.325561 containerd[1571]: 2025-09-13 00:01:15.295 [INFO][4858] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-xww86" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d7979648--xww86-eth0", GenerateName:"calico-apiserver-57d7979648-", Namespace:"calico-apiserver", SelfLink:"", UID:"5cffd678-440b-46f8-bc05-0ed615dceeef", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7979648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc", Pod:"calico-apiserver-57d7979648-xww86", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali273ab2ab71e", MAC:"36:98:02:f8:8d:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:15.325561 containerd[1571]: 2025-09-13 00:01:15.319 [INFO][4858] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc" Namespace="calico-apiserver" Pod="calico-apiserver-57d7979648-xww86" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:01:15.329178 containerd[1571]: time="2025-09-13T00:01:15.328735264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7979648-crgkm,Uid:e037d004-e75e-4824-a87e-fd02a1b5adaa,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519\"" Sep 13 00:01:15.331010 containerd[1571]: time="2025-09-13T00:01:15.330791648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:01:15.356935 containerd[1571]: time="2025-09-13T00:01:15.356792334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:15.356935 containerd[1571]: time="2025-09-13T00:01:15.356866997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:15.356935 containerd[1571]: time="2025-09-13T00:01:15.356880803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:15.357205 containerd[1571]: time="2025-09-13T00:01:15.356993920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:15.393335 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:01:15.426007 containerd[1571]: 2025-09-13 00:01:15.378 [INFO][4964] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Sep 13 00:01:15.426007 containerd[1571]: 2025-09-13 00:01:15.379 [INFO][4964] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" iface="eth0" netns="/var/run/netns/cni-f73e2db3-d882-ffc8-ae4e-a52ba86417fa" Sep 13 00:01:15.426007 containerd[1571]: 2025-09-13 00:01:15.379 [INFO][4964] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" iface="eth0" netns="/var/run/netns/cni-f73e2db3-d882-ffc8-ae4e-a52ba86417fa" Sep 13 00:01:15.426007 containerd[1571]: 2025-09-13 00:01:15.379 [INFO][4964] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" iface="eth0" netns="/var/run/netns/cni-f73e2db3-d882-ffc8-ae4e-a52ba86417fa" Sep 13 00:01:15.426007 containerd[1571]: 2025-09-13 00:01:15.379 [INFO][4964] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Sep 13 00:01:15.426007 containerd[1571]: 2025-09-13 00:01:15.379 [INFO][4964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Sep 13 00:01:15.426007 containerd[1571]: 2025-09-13 00:01:15.404 [INFO][5000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" HandleID="k8s-pod-network.b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Workload="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:01:15.426007 containerd[1571]: 2025-09-13 00:01:15.404 [INFO][5000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:15.426007 containerd[1571]: 2025-09-13 00:01:15.404 [INFO][5000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:15.426007 containerd[1571]: 2025-09-13 00:01:15.411 [WARNING][5000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" HandleID="k8s-pod-network.b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Workload="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:01:15.426007 containerd[1571]: 2025-09-13 00:01:15.411 [INFO][5000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" HandleID="k8s-pod-network.b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Workload="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:01:15.426007 containerd[1571]: 2025-09-13 00:01:15.419 [INFO][5000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:15.426007 containerd[1571]: 2025-09-13 00:01:15.422 [INFO][4964] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Sep 13 00:01:15.426721 containerd[1571]: time="2025-09-13T00:01:15.426481440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7979648-xww86,Uid:5cffd678-440b-46f8-bc05-0ed615dceeef,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc\"" Sep 13 00:01:15.426889 containerd[1571]: time="2025-09-13T00:01:15.426811282Z" level=info msg="TearDown network for sandbox \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\" successfully" Sep 13 00:01:15.426889 containerd[1571]: time="2025-09-13T00:01:15.426839277Z" level=info msg="StopPodSandbox for \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\" returns successfully" Sep 13 00:01:15.427840 containerd[1571]: time="2025-09-13T00:01:15.427810740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fqbfr,Uid:ba4c914c-84cd-4650-afa0-e3d23d56f99f,Namespace:calico-system,Attempt:1,}" Sep 13 00:01:15.558285 systemd-networkd[1247]: califae63bc47b0: Link UP Sep 13 00:01:15.560553 systemd-networkd[1247]: califae63bc47b0: Gained carrier Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.482 [INFO][5023] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fqbfr-eth0 csi-node-driver- calico-system ba4c914c-84cd-4650-afa0-e3d23d56f99f 1163 0 2025-09-13 00:00:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fqbfr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califae63bc47b0 [] [] }} ContainerID="4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" Namespace="calico-system" Pod="csi-node-driver-fqbfr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fqbfr-" Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.482 [INFO][5023] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" Namespace="calico-system" Pod="csi-node-driver-fqbfr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.513 [INFO][5036] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" HandleID="k8s-pod-network.4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" Workload="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.513 [INFO][5036] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" HandleID="k8s-pod-network.4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" Workload="localhost-k8s-csi--node--driver--fqbfr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135690), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fqbfr", "timestamp":"2025-09-13 00:01:15.513311841 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.513 [INFO][5036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.513 [INFO][5036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.513 [INFO][5036] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.522 [INFO][5036] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" host="localhost" Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.528 [INFO][5036] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.532 [INFO][5036] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.534 [INFO][5036] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.537 [INFO][5036] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.537 [INFO][5036] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" host="localhost" Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.538 [INFO][5036] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.542 [INFO][5036] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" host="localhost" Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.551 [INFO][5036] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" host="localhost" Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.551 [INFO][5036] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" host="localhost" Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.551 [INFO][5036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:01:15.575641 containerd[1571]: 2025-09-13 00:01:15.551 [INFO][5036] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" HandleID="k8s-pod-network.4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" Workload="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:01:15.576446 containerd[1571]: 2025-09-13 00:01:15.554 [INFO][5023] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" Namespace="calico-system" Pod="csi-node-driver-fqbfr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fqbfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fqbfr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba4c914c-84cd-4650-afa0-e3d23d56f99f", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fqbfr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califae63bc47b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:15.576446 containerd[1571]: 2025-09-13 00:01:15.554 [INFO][5023] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" Namespace="calico-system" Pod="csi-node-driver-fqbfr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:01:15.576446 containerd[1571]: 2025-09-13 00:01:15.554 [INFO][5023] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califae63bc47b0 ContainerID="4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" Namespace="calico-system" Pod="csi-node-driver-fqbfr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:01:15.576446 containerd[1571]: 2025-09-13 00:01:15.559 [INFO][5023] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" Namespace="calico-system" Pod="csi-node-driver-fqbfr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:01:15.576446 containerd[1571]: 2025-09-13 00:01:15.560 [INFO][5023] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" Namespace="calico-system" Pod="csi-node-driver-fqbfr" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--fqbfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fqbfr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba4c914c-84cd-4650-afa0-e3d23d56f99f", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b", Pod:"csi-node-driver-fqbfr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califae63bc47b0", MAC:"0a:01:74:4f:c6:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:15.576446 containerd[1571]: 2025-09-13 00:01:15.571 [INFO][5023] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b" Namespace="calico-system" Pod="csi-node-driver-fqbfr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:01:15.597363 containerd[1571]: time="2025-09-13T00:01:15.597225288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:15.597363 containerd[1571]: time="2025-09-13T00:01:15.597285945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:15.597363 containerd[1571]: time="2025-09-13T00:01:15.597301384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:15.597626 containerd[1571]: time="2025-09-13T00:01:15.597407568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:15.627671 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:01:15.643360 containerd[1571]: time="2025-09-13T00:01:15.643296047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fqbfr,Uid:ba4c914c-84cd-4650-afa0-e3d23d56f99f,Namespace:calico-system,Attempt:1,} returns sandbox id \"4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b\"" Sep 13 00:01:15.688857 systemd[1]: run-netns-cni\x2df73e2db3\x2dd882\x2dffc8\x2dae4e\x2da52ba86417fa.mount: Deactivated successfully. 
Sep 13 00:01:16.782459 systemd-networkd[1247]: cali273ab2ab71e: Gained IPv6LL Sep 13 00:01:16.910497 systemd-networkd[1247]: cali2917110676a: Gained IPv6LL Sep 13 00:01:17.313775 containerd[1571]: time="2025-09-13T00:01:17.313631694Z" level=info msg="StopPodSandbox for \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\"" Sep 13 00:01:17.486376 systemd-networkd[1247]: califae63bc47b0: Gained IPv6LL Sep 13 00:01:17.581540 systemd[1]: Started sshd@16-10.0.0.22:22-10.0.0.1:55204.service - OpenSSH per-connection server daemon (10.0.0.1:55204). Sep 13 00:01:17.628316 sshd[5116]: Accepted publickey for core from 10.0.0.1 port 55204 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:17.630513 sshd[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:17.635944 systemd-logind[1545]: New session 17 of user core. Sep 13 00:01:17.648655 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:01:17.915300 containerd[1571]: 2025-09-13 00:01:17.835 [INFO][5109] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Sep 13 00:01:17.915300 containerd[1571]: 2025-09-13 00:01:17.836 [INFO][5109] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" iface="eth0" netns="/var/run/netns/cni-a4eb0241-5d32-858a-d596-95e9220db1e3" Sep 13 00:01:17.915300 containerd[1571]: 2025-09-13 00:01:17.836 [INFO][5109] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" iface="eth0" netns="/var/run/netns/cni-a4eb0241-5d32-858a-d596-95e9220db1e3" Sep 13 00:01:17.915300 containerd[1571]: 2025-09-13 00:01:17.836 [INFO][5109] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" iface="eth0" netns="/var/run/netns/cni-a4eb0241-5d32-858a-d596-95e9220db1e3" Sep 13 00:01:17.915300 containerd[1571]: 2025-09-13 00:01:17.836 [INFO][5109] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Sep 13 00:01:17.915300 containerd[1571]: 2025-09-13 00:01:17.836 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Sep 13 00:01:17.915300 containerd[1571]: 2025-09-13 00:01:17.862 [INFO][5130] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" HandleID="k8s-pod-network.30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Workload="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:01:17.915300 containerd[1571]: 2025-09-13 00:01:17.862 [INFO][5130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:17.915300 containerd[1571]: 2025-09-13 00:01:17.862 [INFO][5130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:17.915300 containerd[1571]: 2025-09-13 00:01:17.901 [WARNING][5130] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" HandleID="k8s-pod-network.30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Workload="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:01:17.915300 containerd[1571]: 2025-09-13 00:01:17.901 [INFO][5130] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" HandleID="k8s-pod-network.30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Workload="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:01:17.915300 containerd[1571]: 2025-09-13 00:01:17.903 [INFO][5130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:17.915300 containerd[1571]: 2025-09-13 00:01:17.910 [INFO][5109] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Sep 13 00:01:17.915505 sshd[5116]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:17.917262 containerd[1571]: time="2025-09-13T00:01:17.917190687Z" level=info msg="TearDown network for sandbox \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\" successfully" Sep 13 00:01:17.917525 containerd[1571]: time="2025-09-13T00:01:17.917375751Z" level=info msg="StopPodSandbox for \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\" returns successfully" Sep 13 00:01:17.918778 systemd[1]: run-netns-cni\x2da4eb0241\x2d5d32\x2d858a\x2dd596\x2d95e9220db1e3.mount: Deactivated successfully. Sep 13 00:01:17.919810 containerd[1571]: time="2025-09-13T00:01:17.919727327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-wc4bn,Uid:7f317c9d-be74-45c8-9172-3b8f509591a6,Namespace:calico-system,Attempt:1,}" Sep 13 00:01:17.923428 systemd[1]: sshd@16-10.0.0.22:22-10.0.0.1:55204.service: Deactivated successfully. Sep 13 00:01:17.929521 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:01:17.931009 systemd-logind[1545]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:01:17.932510 systemd-logind[1545]: Removed session 17. 
Sep 13 00:01:18.607143 systemd-networkd[1247]: cali63cb5088a27: Link UP Sep 13 00:01:18.607423 systemd-networkd[1247]: cali63cb5088a27: Gained carrier Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.537 [INFO][5140] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--wc4bn-eth0 goldmane-7988f88666- calico-system 7f317c9d-be74-45c8-9172-3b8f509591a6 1185 0 2025-09-13 00:00:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-wc4bn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali63cb5088a27 [] [] }} ContainerID="4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" Namespace="calico-system" Pod="goldmane-7988f88666-wc4bn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--wc4bn-" Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.537 [INFO][5140] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" Namespace="calico-system" Pod="goldmane-7988f88666-wc4bn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.562 [INFO][5155] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" HandleID="k8s-pod-network.4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" Workload="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.562 [INFO][5155] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" HandleID="k8s-pod-network.4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" Workload="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-wc4bn", "timestamp":"2025-09-13 00:01:18.562394219 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.562 [INFO][5155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.563 [INFO][5155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.563 [INFO][5155] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.569 [INFO][5155] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" host="localhost" Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.573 [INFO][5155] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.580 [INFO][5155] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.581 [INFO][5155] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.585 [INFO][5155] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.585 [INFO][5155] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" host="localhost" Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.587 [INFO][5155] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679 Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.592 [INFO][5155] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" host="localhost" Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.600 [INFO][5155] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" host="localhost" Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.600 [INFO][5155] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" host="localhost" Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.600 [INFO][5155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:01:18.648863 containerd[1571]: 2025-09-13 00:01:18.600 [INFO][5155] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" HandleID="k8s-pod-network.4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" Workload="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:01:18.650389 containerd[1571]: 2025-09-13 00:01:18.603 [INFO][5140] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" Namespace="calico-system" Pod="goldmane-7988f88666-wc4bn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--wc4bn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7f317c9d-be74-45c8-9172-3b8f509591a6", ResourceVersion:"1185", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-wc4bn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali63cb5088a27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:18.650389 containerd[1571]: 2025-09-13 00:01:18.603 [INFO][5140] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" Namespace="calico-system" Pod="goldmane-7988f88666-wc4bn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:01:18.650389 containerd[1571]: 2025-09-13 00:01:18.603 [INFO][5140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63cb5088a27 ContainerID="4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" Namespace="calico-system" Pod="goldmane-7988f88666-wc4bn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:01:18.650389 containerd[1571]: 2025-09-13 00:01:18.605 [INFO][5140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" Namespace="calico-system" Pod="goldmane-7988f88666-wc4bn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:01:18.650389 containerd[1571]: 2025-09-13 00:01:18.606 [INFO][5140] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" Namespace="calico-system" Pod="goldmane-7988f88666-wc4bn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--wc4bn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7f317c9d-be74-45c8-9172-3b8f509591a6", ResourceVersion:"1185", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679", Pod:"goldmane-7988f88666-wc4bn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali63cb5088a27", MAC:"da:3b:be:e3:7d:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:18.650389 containerd[1571]: 2025-09-13 00:01:18.644 [INFO][5140] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679" Namespace="calico-system" Pod="goldmane-7988f88666-wc4bn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:01:18.708186 containerd[1571]: time="2025-09-13T00:01:18.708075634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:18.708186 containerd[1571]: time="2025-09-13T00:01:18.708137043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:18.708186 containerd[1571]: time="2025-09-13T00:01:18.708148274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:18.708430 containerd[1571]: time="2025-09-13T00:01:18.708267101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:18.742510 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:01:18.783274 containerd[1571]: time="2025-09-13T00:01:18.783210945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-wc4bn,Uid:7f317c9d-be74-45c8-9172-3b8f509591a6,Namespace:calico-system,Attempt:1,} returns sandbox id \"4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679\"" Sep 13 00:01:19.313815 containerd[1571]: time="2025-09-13T00:01:19.313748350Z" level=info msg="StopPodSandbox for \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\"" Sep 13 00:01:19.647177 containerd[1571]: 2025-09-13 00:01:19.515 [INFO][5225] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Sep 13 00:01:19.647177 containerd[1571]: 2025-09-13 00:01:19.515 [INFO][5225] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" iface="eth0" netns="/var/run/netns/cni-1b2850fd-65b9-41c1-d5fb-b167929d49dc" Sep 13 00:01:19.647177 containerd[1571]: 2025-09-13 00:01:19.515 [INFO][5225] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" iface="eth0" netns="/var/run/netns/cni-1b2850fd-65b9-41c1-d5fb-b167929d49dc" Sep 13 00:01:19.647177 containerd[1571]: 2025-09-13 00:01:19.516 [INFO][5225] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" iface="eth0" netns="/var/run/netns/cni-1b2850fd-65b9-41c1-d5fb-b167929d49dc" Sep 13 00:01:19.647177 containerd[1571]: 2025-09-13 00:01:19.516 [INFO][5225] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Sep 13 00:01:19.647177 containerd[1571]: 2025-09-13 00:01:19.516 [INFO][5225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Sep 13 00:01:19.647177 containerd[1571]: 2025-09-13 00:01:19.545 [INFO][5235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" HandleID="k8s-pod-network.378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Workload="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:01:19.647177 containerd[1571]: 2025-09-13 00:01:19.545 [INFO][5235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:19.647177 containerd[1571]: 2025-09-13 00:01:19.545 [INFO][5235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:19.647177 containerd[1571]: 2025-09-13 00:01:19.637 [WARNING][5235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" HandleID="k8s-pod-network.378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Workload="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:01:19.647177 containerd[1571]: 2025-09-13 00:01:19.637 [INFO][5235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" HandleID="k8s-pod-network.378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Workload="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:01:19.647177 containerd[1571]: 2025-09-13 00:01:19.639 [INFO][5235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:19.647177 containerd[1571]: 2025-09-13 00:01:19.643 [INFO][5225] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Sep 13 00:01:19.650458 containerd[1571]: time="2025-09-13T00:01:19.650403192Z" level=info msg="TearDown network for sandbox \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\" successfully" Sep 13 00:01:19.650458 containerd[1571]: time="2025-09-13T00:01:19.650443260Z" level=info msg="StopPodSandbox for \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\" returns successfully" Sep 13 00:01:19.652132 containerd[1571]: time="2025-09-13T00:01:19.651397075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cb7ff6b6-87ccw,Uid:d3c5674c-deb9-482b-8272-20808de1f68c,Namespace:calico-system,Attempt:1,}" Sep 13 00:01:19.651520 systemd[1]: run-netns-cni\x2d1b2850fd\x2d65b9\x2d41c1\x2dd5fb\x2db167929d49dc.mount: Deactivated successfully. Sep 13 00:01:19.919679 systemd-networkd[1247]: cali63cb5088a27: Gained IPv6LL Sep 13 00:01:20.316346 containerd[1571]: time="2025-09-13T00:01:20.316290080Z" level=info msg="StopPodSandbox for \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\"" Sep 13 00:01:20.506897 systemd-networkd[1247]: caliaa9704769f6: Link UP Sep 13 00:01:20.508644 systemd-networkd[1247]: caliaa9704769f6: Gained carrier Sep 13 00:01:20.544628 containerd[1571]: 2025-09-13 00:01:20.474 [INFO][5282] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Sep 13 00:01:20.544628 containerd[1571]: 2025-09-13 00:01:20.475 [INFO][5282] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" iface="eth0" netns="/var/run/netns/cni-38580430-6254-0407-df18-6c5f9d33a6a0" Sep 13 00:01:20.544628 containerd[1571]: 2025-09-13 00:01:20.477 [INFO][5282] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" iface="eth0" netns="/var/run/netns/cni-38580430-6254-0407-df18-6c5f9d33a6a0" Sep 13 00:01:20.544628 containerd[1571]: 2025-09-13 00:01:20.477 [INFO][5282] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" iface="eth0" netns="/var/run/netns/cni-38580430-6254-0407-df18-6c5f9d33a6a0" Sep 13 00:01:20.544628 containerd[1571]: 2025-09-13 00:01:20.477 [INFO][5282] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Sep 13 00:01:20.544628 containerd[1571]: 2025-09-13 00:01:20.477 [INFO][5282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Sep 13 00:01:20.544628 containerd[1571]: 2025-09-13 00:01:20.520 [INFO][5291] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" HandleID="k8s-pod-network.73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Workload="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:01:20.544628 containerd[1571]: 2025-09-13 00:01:20.521 [INFO][5291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:20.544628 containerd[1571]: 2025-09-13 00:01:20.521 [INFO][5291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:20.544628 containerd[1571]: 2025-09-13 00:01:20.529 [WARNING][5291] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" HandleID="k8s-pod-network.73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Workload="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:01:20.544628 containerd[1571]: 2025-09-13 00:01:20.529 [INFO][5291] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" HandleID="k8s-pod-network.73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Workload="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:01:20.544628 containerd[1571]: 2025-09-13 00:01:20.532 [INFO][5291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:20.544628 containerd[1571]: 2025-09-13 00:01:20.539 [INFO][5282] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.271 [INFO][5248] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0 calico-kube-controllers-9cb7ff6b6- calico-system d3c5674c-deb9-482b-8272-20808de1f68c 1194 0 2025-09-13 00:00:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9cb7ff6b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-9cb7ff6b6-87ccw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaa9704769f6 [] [] }} ContainerID="2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" Namespace="calico-system" Pod="calico-kube-controllers-9cb7ff6b6-87ccw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-" Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.271 [INFO][5248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" Namespace="calico-system" Pod="calico-kube-controllers-9cb7ff6b6-87ccw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.298 [INFO][5263] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" HandleID="k8s-pod-network.2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" Workload="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.299 [INFO][5263] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" HandleID="k8s-pod-network.2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" Workload="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df0f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-9cb7ff6b6-87ccw", "timestamp":"2025-09-13 00:01:20.298975191 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.299 [INFO][5263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.299 [INFO][5263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.299 [INFO][5263] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.309 [INFO][5263] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" host="localhost" Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.315 [INFO][5263] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.375 [INFO][5263] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.377 [INFO][5263] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.380 [INFO][5263] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.380 [INFO][5263] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" host="localhost" Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.381 [INFO][5263] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.473 [INFO][5263] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" host="localhost" Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.484 [INFO][5263] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" host="localhost" Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.484 [INFO][5263] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" host="localhost" Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.485 [INFO][5263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:01:20.545982 containerd[1571]: 2025-09-13 00:01:20.485 [INFO][5263] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" HandleID="k8s-pod-network.2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" Workload="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:01:20.546891 containerd[1571]: 2025-09-13 00:01:20.490 [INFO][5248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" Namespace="calico-system" Pod="calico-kube-controllers-9cb7ff6b6-87ccw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0", GenerateName:"calico-kube-controllers-9cb7ff6b6-", Namespace:"calico-system", SelfLink:"", UID:"d3c5674c-deb9-482b-8272-20808de1f68c", ResourceVersion:"1194", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9cb7ff6b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-9cb7ff6b6-87ccw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa9704769f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:20.546891 containerd[1571]: 2025-09-13 00:01:20.490 [INFO][5248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" Namespace="calico-system" Pod="calico-kube-controllers-9cb7ff6b6-87ccw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:01:20.546891 containerd[1571]: 2025-09-13 00:01:20.490 [INFO][5248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa9704769f6 ContainerID="2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" Namespace="calico-system" Pod="calico-kube-controllers-9cb7ff6b6-87ccw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:01:20.546891 containerd[1571]: 2025-09-13 00:01:20.515 [INFO][5248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" Namespace="calico-system" Pod="calico-kube-controllers-9cb7ff6b6-87ccw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:01:20.546891 containerd[1571]: 2025-09-13 00:01:20.522 [INFO][5248] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" Namespace="calico-system" Pod="calico-kube-controllers-9cb7ff6b6-87ccw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0", GenerateName:"calico-kube-controllers-9cb7ff6b6-", Namespace:"calico-system", SelfLink:"", UID:"d3c5674c-deb9-482b-8272-20808de1f68c", ResourceVersion:"1194", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9cb7ff6b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa", Pod:"calico-kube-controllers-9cb7ff6b6-87ccw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa9704769f6", MAC:"02:51:55:0f:f6:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:20.546891 containerd[1571]: 2025-09-13 00:01:20.536 [INFO][5248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa" Namespace="calico-system" Pod="calico-kube-controllers-9cb7ff6b6-87ccw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:01:20.546891 containerd[1571]: time="2025-09-13T00:01:20.546285116Z" level=info msg="TearDown network for sandbox \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\" successfully" Sep 13 00:01:20.546891 containerd[1571]: time="2025-09-13T00:01:20.546324873Z" level=info msg="StopPodSandbox for \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\" returns successfully" Sep 13 00:01:20.547285 kubelet[2662]: E0913 00:01:20.546765 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:01:20.549830 containerd[1571]: time="2025-09-13T00:01:20.549392440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7tb5v,Uid:d4868fae-b2aa-4106-8bb0-b85ce9739178,Namespace:kube-system,Attempt:1,}" Sep 13 00:01:20.557450 systemd[1]: run-netns-cni\x2d38580430\x2d6254\x2d0407\x2ddf18\x2d6c5f9d33a6a0.mount: Deactivated successfully. Sep 13 00:01:20.592703 containerd[1571]: time="2025-09-13T00:01:20.591724088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:20.592703 containerd[1571]: time="2025-09-13T00:01:20.591794604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:20.592703 containerd[1571]: time="2025-09-13T00:01:20.591820153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:20.592703 containerd[1571]: time="2025-09-13T00:01:20.591971001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:20.626427 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:01:20.666825 containerd[1571]: time="2025-09-13T00:01:20.666774059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cb7ff6b6-87ccw,Uid:d3c5674c-deb9-482b-8272-20808de1f68c,Namespace:calico-system,Attempt:1,} returns sandbox id \"2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa\"" Sep 13 00:01:21.313135 kubelet[2662]: E0913 00:01:21.313095 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:01:21.314255 containerd[1571]: time="2025-09-13T00:01:21.314162810Z" level=info msg="StopPodSandbox for \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\"" Sep 13 00:01:21.503071 containerd[1571]: 2025-09-13 00:01:21.440 [INFO][5365] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Sep 13 00:01:21.503071 containerd[1571]: 2025-09-13 00:01:21.441 [INFO][5365] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" iface="eth0" netns="/var/run/netns/cni-3b9a6aea-4050-c2d5-94f6-5545a58be7a7" Sep 13 00:01:21.503071 containerd[1571]: 2025-09-13 00:01:21.441 [INFO][5365] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" iface="eth0" netns="/var/run/netns/cni-3b9a6aea-4050-c2d5-94f6-5545a58be7a7" Sep 13 00:01:21.503071 containerd[1571]: 2025-09-13 00:01:21.442 [INFO][5365] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" iface="eth0" netns="/var/run/netns/cni-3b9a6aea-4050-c2d5-94f6-5545a58be7a7" Sep 13 00:01:21.503071 containerd[1571]: 2025-09-13 00:01:21.442 [INFO][5365] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Sep 13 00:01:21.503071 containerd[1571]: 2025-09-13 00:01:21.442 [INFO][5365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Sep 13 00:01:21.503071 containerd[1571]: 2025-09-13 00:01:21.460 [INFO][5374] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" HandleID="k8s-pod-network.936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Workload="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:01:21.503071 containerd[1571]: 2025-09-13 00:01:21.461 [INFO][5374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:21.503071 containerd[1571]: 2025-09-13 00:01:21.461 [INFO][5374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:21.503071 containerd[1571]: 2025-09-13 00:01:21.495 [WARNING][5374] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" HandleID="k8s-pod-network.936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Workload="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:01:21.503071 containerd[1571]: 2025-09-13 00:01:21.495 [INFO][5374] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" HandleID="k8s-pod-network.936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Workload="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:01:21.503071 containerd[1571]: 2025-09-13 00:01:21.497 [INFO][5374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:21.503071 containerd[1571]: 2025-09-13 00:01:21.500 [INFO][5365] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Sep 13 00:01:21.503613 containerd[1571]: time="2025-09-13T00:01:21.503278478Z" level=info msg="TearDown network for sandbox \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\" successfully" Sep 13 00:01:21.503613 containerd[1571]: time="2025-09-13T00:01:21.503308105Z" level=info msg="StopPodSandbox for \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\" returns successfully" Sep 13 00:01:21.506392 kubelet[2662]: E0913 00:01:21.506362 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:01:21.506691 systemd[1]: run-netns-cni\x2d3b9a6aea\x2d4050\x2dc2d5\x2d94f6\x2d5545a58be7a7.mount: Deactivated successfully. 
Sep 13 00:01:21.508231 containerd[1571]: time="2025-09-13T00:01:21.506786575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-98zzs,Uid:4e3bb006-8c83-4e24-90c2-984bdd355c97,Namespace:kube-system,Attempt:1,}" Sep 13 00:01:21.850095 systemd-networkd[1247]: cali283a82838fb: Link UP Sep 13 00:01:21.850997 systemd-networkd[1247]: cali283a82838fb: Gained carrier Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.680 [INFO][5382] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0 coredns-7c65d6cfc9- kube-system d4868fae-b2aa-4106-8bb0-b85ce9739178 1200 0 2025-09-13 00:00:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-7tb5v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali283a82838fb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tb5v" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tb5v-" Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.680 [INFO][5382] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tb5v" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.752 [INFO][5397] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" HandleID="k8s-pod-network.b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" Workload="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.753 [INFO][5397] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" HandleID="k8s-pod-network.b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" Workload="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000436050), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-7tb5v", "timestamp":"2025-09-13 00:01:21.752826941 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.753 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.753 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.753 [INFO][5397] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.759 [INFO][5397] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" host="localhost" Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.763 [INFO][5397] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.766 [INFO][5397] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.768 [INFO][5397] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.795 [INFO][5397] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.795 [INFO][5397] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" host="localhost" Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.796 [INFO][5397] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18 Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.807 [INFO][5397] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" host="localhost" Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.844 [INFO][5397] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" host="localhost" Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.844 [INFO][5397] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" host="localhost" Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.844 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:01:21.881061 containerd[1571]: 2025-09-13 00:01:21.844 [INFO][5397] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" HandleID="k8s-pod-network.b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" Workload="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:01:21.882163 containerd[1571]: 2025-09-13 00:01:21.847 [INFO][5382] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tb5v" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d4868fae-b2aa-4106-8bb0-b85ce9739178", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-7tb5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283a82838fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:21.882163 containerd[1571]: 2025-09-13 00:01:21.847 [INFO][5382] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tb5v" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:01:21.882163 containerd[1571]: 2025-09-13 00:01:21.847 [INFO][5382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali283a82838fb ContainerID="b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tb5v" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:01:21.882163 containerd[1571]: 2025-09-13 00:01:21.850 [INFO][5382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tb5v" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:01:21.882163 
containerd[1571]: 2025-09-13 00:01:21.851 [INFO][5382] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tb5v" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d4868fae-b2aa-4106-8bb0-b85ce9739178", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18", Pod:"coredns-7c65d6cfc9-7tb5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283a82838fb", MAC:"12:52:05:fd:c7:29", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:21.882163 containerd[1571]: 2025-09-13 00:01:21.877 [INFO][5382] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7tb5v" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:01:22.222384 systemd-networkd[1247]: caliaa9704769f6: Gained IPv6LL Sep 13 00:01:22.312954 kubelet[2662]: E0913 00:01:22.312898 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:01:22.592766 containerd[1571]: time="2025-09-13T00:01:22.592154246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:22.592766 containerd[1571]: time="2025-09-13T00:01:22.592543440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:22.592766 containerd[1571]: time="2025-09-13T00:01:22.592564090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:22.592766 containerd[1571]: time="2025-09-13T00:01:22.592703326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:22.635232 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:01:22.643187 containerd[1571]: time="2025-09-13T00:01:22.643104245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:22.645588 containerd[1571]: time="2025-09-13T00:01:22.645524841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:01:22.647722 containerd[1571]: time="2025-09-13T00:01:22.647660712Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:22.652694 containerd[1571]: time="2025-09-13T00:01:22.652639326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:22.653843 containerd[1571]: time="2025-09-13T00:01:22.653784203Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 7.322953712s" Sep 13 00:01:22.653843 containerd[1571]: time="2025-09-13T00:01:22.653831865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:01:22.658061 containerd[1571]: time="2025-09-13T00:01:22.657756445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:01:22.658935 containerd[1571]: time="2025-09-13T00:01:22.658909678Z" level=info msg="CreateContainer within sandbox \"f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:01:22.684859 containerd[1571]: time="2025-09-13T00:01:22.684801905Z" level=info msg="CreateContainer within sandbox \"f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6a5a14846eefb707b1060d59d5cc0800d4a0aff071284912f680752e95a82ee9\"" Sep 13 00:01:22.688497 containerd[1571]: time="2025-09-13T00:01:22.688464904Z" level=info msg="StartContainer for \"6a5a14846eefb707b1060d59d5cc0800d4a0aff071284912f680752e95a82ee9\"" Sep 13 00:01:22.689498 containerd[1571]: time="2025-09-13T00:01:22.689451580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7tb5v,Uid:d4868fae-b2aa-4106-8bb0-b85ce9739178,Namespace:kube-system,Attempt:1,} returns sandbox id \"b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18\"" Sep 13 00:01:22.690648 kubelet[2662]: E0913 00:01:22.690577 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:01:22.696629 containerd[1571]: time="2025-09-13T00:01:22.696577727Z" level=info msg="CreateContainer within sandbox \"b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:01:22.724934 containerd[1571]: time="2025-09-13T00:01:22.724879568Z" level=info msg="CreateContainer within sandbox \"b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6fce23d0facc93865297374a3b9ccb8f02b2ac12de92d71dce2268c24c19dc11\"" Sep 13 00:01:22.727077 containerd[1571]: time="2025-09-13T00:01:22.726226622Z" level=info msg="StartContainer for \"6fce23d0facc93865297374a3b9ccb8f02b2ac12de92d71dce2268c24c19dc11\"" Sep 13 00:01:22.805880 containerd[1571]: time="2025-09-13T00:01:22.805820756Z" level=info msg="StartContainer for \"6a5a14846eefb707b1060d59d5cc0800d4a0aff071284912f680752e95a82ee9\" returns successfully" Sep 13 00:01:22.825679 containerd[1571]: time="2025-09-13T00:01:22.825611414Z" level=info msg="StartContainer for \"6fce23d0facc93865297374a3b9ccb8f02b2ac12de92d71dce2268c24c19dc11\" returns successfully" Sep 13 00:01:22.863518 systemd-networkd[1247]: cali93f5ba53e98: Link UP Sep 13 00:01:22.863833 systemd-networkd[1247]: cali93f5ba53e98: Gained carrier Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.712 [INFO][5452] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0 coredns-7c65d6cfc9- kube-system 4e3bb006-8c83-4e24-90c2-984bdd355c97 1209 0 2025-09-13 00:00:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-98zzs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali93f5ba53e98 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-98zzs" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--98zzs-" Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.712 [INFO][5452] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-98zzs" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.773 [INFO][5493] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" HandleID="k8s-pod-network.ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" Workload="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.773 [INFO][5493] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" HandleID="k8s-pod-network.ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" Workload="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000117900), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-98zzs", "timestamp":"2025-09-13 00:01:22.773422167 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.773 [INFO][5493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.773 [INFO][5493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.773 [INFO][5493] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.812 [INFO][5493] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" host="localhost" Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.818 [INFO][5493] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.824 [INFO][5493] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.827 [INFO][5493] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.832 [INFO][5493] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.832 [INFO][5493] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" host="localhost" Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.838 [INFO][5493] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.842 [INFO][5493] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" host="localhost" Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.851 [INFO][5493] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" host="localhost" Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.852 [INFO][5493] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" host="localhost" Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.852 [INFO][5493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:01:22.882135 containerd[1571]: 2025-09-13 00:01:22.852 [INFO][5493] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" HandleID="k8s-pod-network.ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" Workload="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:01:22.883332 containerd[1571]: 2025-09-13 00:01:22.856 [INFO][5452] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-98zzs" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4e3bb006-8c83-4e24-90c2-984bdd355c97", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-98zzs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93f5ba53e98", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:22.883332 containerd[1571]: 2025-09-13 00:01:22.856 [INFO][5452] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-98zzs" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:01:22.883332 containerd[1571]: 2025-09-13 00:01:22.856 [INFO][5452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93f5ba53e98 ContainerID="ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-98zzs" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:01:22.883332 containerd[1571]: 2025-09-13 00:01:22.863 [INFO][5452] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-98zzs" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:01:22.883332 
containerd[1571]: 2025-09-13 00:01:22.863 [INFO][5452] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-98zzs" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4e3bb006-8c83-4e24-90c2-984bdd355c97", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d", Pod:"coredns-7c65d6cfc9-98zzs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93f5ba53e98", MAC:"32:4f:db:98:ce:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:22.883332 containerd[1571]: 2025-09-13 00:01:22.874 [INFO][5452] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-98zzs" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:01:22.908475 containerd[1571]: time="2025-09-13T00:01:22.908208737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:22.908475 containerd[1571]: time="2025-09-13T00:01:22.908289442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:22.908475 containerd[1571]: time="2025-09-13T00:01:22.908377319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:22.910011 containerd[1571]: time="2025-09-13T00:01:22.909940095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:22.925109 systemd[1]: Started sshd@17-10.0.0.22:22-10.0.0.1:50692.service - OpenSSH per-connection server daemon (10.0.0.1:50692). 
Sep 13 00:01:22.956419 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:01:22.965709 kubelet[2662]: E0913 00:01:22.965666 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:01:23.000079 kubelet[2662]: I0913 00:01:22.999857 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57d7979648-crgkm" podStartSLOduration=56.67378357 podStartE2EDuration="1m3.999828822s" podCreationTimestamp="2025-09-13 00:00:19 +0000 UTC" firstStartedPulling="2025-09-13 00:01:15.330461905 +0000 UTC m=+73.124310351" lastFinishedPulling="2025-09-13 00:01:22.656507157 +0000 UTC m=+80.450355603" observedRunningTime="2025-09-13 00:01:22.99525436 +0000 UTC m=+80.789102806" watchObservedRunningTime="2025-09-13 00:01:22.999828822 +0000 UTC m=+80.793677288" Sep 13 00:01:23.016566 kubelet[2662]: I0913 00:01:23.015958 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7tb5v" podStartSLOduration=75.015919837 podStartE2EDuration="1m15.015919837s" podCreationTimestamp="2025-09-13 00:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:01:23.015222404 +0000 UTC m=+80.809070850" watchObservedRunningTime="2025-09-13 00:01:23.015919837 +0000 UTC m=+80.809768283" Sep 13 00:01:23.029140 sshd[5596]: Accepted publickey for core from 10.0.0.1 port 50692 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:23.031255 sshd[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:23.035238 containerd[1571]: time="2025-09-13T00:01:23.035160105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-98zzs,Uid:4e3bb006-8c83-4e24-90c2-984bdd355c97,Namespace:kube-system,Attempt:1,} returns sandbox id \"ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d\"" Sep 13 00:01:23.041072 kubelet[2662]: E0913 00:01:23.039207 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:01:23.042223 systemd-logind[1545]: New session 18 of user core. Sep 13 00:01:23.048362 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 13 00:01:23.052185 containerd[1571]: time="2025-09-13T00:01:23.047018024Z" level=info msg="CreateContainer within sandbox \"ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:01:23.080103 containerd[1571]: time="2025-09-13T00:01:23.080021931Z" level=info msg="CreateContainer within sandbox \"ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e96518a7f7c7a7e50e4591746ea0926e9632e375c39c1574c6a8880f909cd483\"" Sep 13 00:01:23.083075 containerd[1571]: time="2025-09-13T00:01:23.082398500Z" level=info msg="StartContainer for \"e96518a7f7c7a7e50e4591746ea0926e9632e375c39c1574c6a8880f909cd483\"" Sep 13 00:01:23.173226 containerd[1571]: time="2025-09-13T00:01:23.173090972Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:23.179250 containerd[1571]: time="2025-09-13T00:01:23.177653647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:01:23.186226 containerd[1571]: time="2025-09-13T00:01:23.180871482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 523.066526ms" Sep 13 00:01:23.186404 containerd[1571]: time="2025-09-13T00:01:23.186352502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:01:23.193066 containerd[1571]: time="2025-09-13T00:01:23.190258121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:01:23.195059 containerd[1571]: time="2025-09-13T00:01:23.193415652Z" level=info msg="CreateContainer within sandbox \"075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:01:23.246296 systemd-networkd[1247]: cali283a82838fb: Gained IPv6LL Sep 13 00:01:23.269113 containerd[1571]: time="2025-09-13T00:01:23.269024354Z" level=info msg="CreateContainer within sandbox \"075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a5ddb42b20fecd7e26797af62fc7f682aecea0d6066c03fa5ded488409526f2f\"" Sep 13 00:01:23.275092 containerd[1571]: time="2025-09-13T00:01:23.273574645Z" level=info msg="StartContainer for \"e96518a7f7c7a7e50e4591746ea0926e9632e375c39c1574c6a8880f909cd483\" returns successfully" Sep 13 00:01:23.276983 containerd[1571]: time="2025-09-13T00:01:23.275438385Z" level=info msg="StartContainer for \"a5ddb42b20fecd7e26797af62fc7f682aecea0d6066c03fa5ded488409526f2f\"" Sep 13 00:01:23.345954 sshd[5596]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:23.352373 systemd[1]: sshd@17-10.0.0.22:22-10.0.0.1:50692.service: Deactivated successfully. Sep 13 00:01:23.356926 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:01:23.359030 systemd-logind[1545]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:01:23.360402 systemd-logind[1545]: Removed session 18. 
Sep 13 00:01:23.423974 containerd[1571]: time="2025-09-13T00:01:23.423851302Z" level=info msg="StartContainer for \"a5ddb42b20fecd7e26797af62fc7f682aecea0d6066c03fa5ded488409526f2f\" returns successfully" Sep 13 00:01:23.974279 kubelet[2662]: I0913 00:01:23.974181 2662 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:01:23.976795 kubelet[2662]: E0913 00:01:23.975030 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:01:23.976972 kubelet[2662]: E0913 00:01:23.976771 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:01:24.001542 kubelet[2662]: I0913 00:01:24.001477 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57d7979648-xww86" podStartSLOduration=57.241175032 podStartE2EDuration="1m5.001452847s" podCreationTimestamp="2025-09-13 00:00:19 +0000 UTC" firstStartedPulling="2025-09-13 00:01:15.427963212 +0000 UTC m=+73.221811659" lastFinishedPulling="2025-09-13 00:01:23.188241028 +0000 UTC m=+80.982089474" observedRunningTime="2025-09-13 00:01:23.98925548 +0000 UTC m=+81.783103926" watchObservedRunningTime="2025-09-13 00:01:24.001452847 +0000 UTC m=+81.795301293" Sep 13 00:01:24.024932 kubelet[2662]: I0913 00:01:24.024792 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-98zzs" podStartSLOduration=76.02476857 podStartE2EDuration="1m16.02476857s" podCreationTimestamp="2025-09-13 00:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:01:24.021542912 +0000 UTC m=+81.815391358" watchObservedRunningTime="2025-09-13 00:01:24.02476857 +0000 UTC m=+81.818617016" Sep 13 00:01:24.782266 systemd-networkd[1247]: cali93f5ba53e98: Gained IPv6LL Sep 13 00:01:24.976454 kubelet[2662]: E0913 00:01:24.975977 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:01:24.976454 kubelet[2662]: E0913 00:01:24.976149 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:01:25.980596 kubelet[2662]: E0913 00:01:25.980551 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:01:26.982605 kubelet[2662]: E0913 00:01:26.982566 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:01:28.062504 kubelet[2662]: I0913 00:01:28.062237 2662 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:01:28.358436 systemd[1]: Started sshd@18-10.0.0.22:22-10.0.0.1:50706.service - OpenSSH per-connection server daemon (10.0.0.1:50706). 
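[annotation] The pod_startup_latency_tracker records above relate their fields arithmetically: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling). A short Go check using the values copied from the calico-apiserver-57d7979648-xww86 entry; this verifies the relationship visible in the log, it is not kubelet code:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching the timestamps printed in the log records above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        firstPull, _ := time.Parse(layout, "2025-09-13 00:01:15.427963212 +0000 UTC")
        lastPull, _ := time.Parse(layout, "2025-09-13 00:01:23.188241028 +0000 UTC")
        e2e, _ := time.ParseDuration("1m5.001452847s") // podStartE2EDuration

        // SLO duration excludes the time spent pulling images.
        slo := e2e - lastPull.Sub(firstPull)
        fmt.Println(slo) // ~57.241175031s, matching podStartSLOduration=57.241175032
    }

For pods whose images were never pulled (the coredns entries, with zero-value pull timestamps), the pull window is zero and the two durations coincide, as the log shows.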
Sep 13 00:01:28.401274 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 50706 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:28.403568 sshd[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:28.408127 systemd-logind[1545]: New session 19 of user core. Sep 13 00:01:28.415681 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:01:28.566940 sshd[5729]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:28.573301 systemd[1]: sshd@18-10.0.0.22:22-10.0.0.1:50706.service: Deactivated successfully. Sep 13 00:01:28.578441 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:01:28.579679 systemd-logind[1545]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:01:28.581229 systemd-logind[1545]: Removed session 19. Sep 13 00:01:29.611269 containerd[1571]: time="2025-09-13T00:01:29.611179369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:29.612418 containerd[1571]: time="2025-09-13T00:01:29.612334840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:01:29.615141 containerd[1571]: time="2025-09-13T00:01:29.615081223Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:29.618945 containerd[1571]: time="2025-09-13T00:01:29.618854071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:29.619903 containerd[1571]: time="2025-09-13T00:01:29.619853996Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 6.429557021s" Sep 13 00:01:29.619903 containerd[1571]: time="2025-09-13T00:01:29.619893722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:01:29.622291 containerd[1571]: time="2025-09-13T00:01:29.622189776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:01:29.623570 containerd[1571]: time="2025-09-13T00:01:29.623518718Z" level=info msg="CreateContainer within sandbox \"4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:01:29.647945 containerd[1571]: time="2025-09-13T00:01:29.647856498Z" level=info msg="CreateContainer within sandbox \"4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5ab9c7b01c8de9a6b274ef4583bda4859fe32856b62d104ba07a2ab15a8ff15d\"" Sep 13 00:01:29.648905 containerd[1571]: time="2025-09-13T00:01:29.648779977Z" level=info msg="StartContainer for \"5ab9c7b01c8de9a6b274ef4583bda4859fe32856b62d104ba07a2ab15a8ff15d\"" Sep 13 00:01:29.804637 containerd[1571]: time="2025-09-13T00:01:29.804496797Z" level=info msg="StartContainer for 
\"5ab9c7b01c8de9a6b274ef4583bda4859fe32856b62d104ba07a2ab15a8ff15d\" returns successfully" Sep 13 00:01:33.597246 systemd[1]: Started sshd@19-10.0.0.22:22-10.0.0.1:56744.service - OpenSSH per-connection server daemon (10.0.0.1:56744). Sep 13 00:01:33.713637 sshd[5789]: Accepted publickey for core from 10.0.0.1 port 56744 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:33.715406 sshd[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:33.747364 systemd-logind[1545]: New session 20 of user core. Sep 13 00:01:33.764976 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:01:34.241243 sshd[5789]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:34.274098 systemd[1]: sshd@19-10.0.0.22:22-10.0.0.1:56744.service: Deactivated successfully. Sep 13 00:01:34.291622 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:01:34.293713 systemd-logind[1545]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:01:34.307274 systemd-logind[1545]: Removed session 20. Sep 13 00:01:35.539069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3504583596.mount: Deactivated successfully. Sep 13 00:01:36.671112 containerd[1571]: time="2025-09-13T00:01:36.671006811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:36.672531 containerd[1571]: time="2025-09-13T00:01:36.672483649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:01:36.674996 containerd[1571]: time="2025-09-13T00:01:36.674915792Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:36.677744 containerd[1571]: time="2025-09-13T00:01:36.677670698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:36.678643 containerd[1571]: time="2025-09-13T00:01:36.678575948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 7.056319265s" Sep 13 00:01:36.678643 containerd[1571]: time="2025-09-13T00:01:36.678638587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:01:36.679703 containerd[1571]: time="2025-09-13T00:01:36.679674436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:01:36.681117 containerd[1571]: time="2025-09-13T00:01:36.681086290Z" level=info msg="CreateContainer within sandbox \"4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:01:36.710292 containerd[1571]: time="2025-09-13T00:01:36.710219254Z" level=info msg="CreateContainer within sandbox \"4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns 
container id \"d9a6a4994c89fcf14207181e08591779bc26cd7784277f47b640697c0117cdb7\"" Sep 13 00:01:36.712333 containerd[1571]: time="2025-09-13T00:01:36.712294579Z" level=info msg="StartContainer for \"d9a6a4994c89fcf14207181e08591779bc26cd7784277f47b640697c0117cdb7\"" Sep 13 00:01:36.812809 containerd[1571]: time="2025-09-13T00:01:36.812742807Z" level=info msg="StartContainer for \"d9a6a4994c89fcf14207181e08591779bc26cd7784277f47b640697c0117cdb7\" returns successfully" Sep 13 00:01:37.699393 systemd[1]: run-containerd-runc-k8s.io-d9a6a4994c89fcf14207181e08591779bc26cd7784277f47b640697c0117cdb7-runc.ANHwBs.mount: Deactivated successfully. Sep 13 00:01:38.312983 kubelet[2662]: E0913 00:01:38.312883 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:01:39.249442 systemd[1]: Started sshd@20-10.0.0.22:22-10.0.0.1:56748.service - OpenSSH per-connection server daemon (10.0.0.1:56748). Sep 13 00:01:39.347613 sshd[5908]: Accepted publickey for core from 10.0.0.1 port 56748 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:39.349715 sshd[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:39.354694 systemd-logind[1545]: New session 21 of user core. Sep 13 00:01:39.362388 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:01:39.773392 sshd[5908]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:39.783534 systemd[1]: Started sshd@21-10.0.0.22:22-10.0.0.1:56758.service - OpenSSH per-connection server daemon (10.0.0.1:56758). Sep 13 00:01:39.784121 systemd[1]: sshd@20-10.0.0.22:22-10.0.0.1:56748.service: Deactivated successfully. Sep 13 00:01:39.789548 systemd-logind[1545]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:01:39.791962 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:01:39.794321 systemd-logind[1545]: Removed session 21. Sep 13 00:01:39.829316 sshd[5921]: Accepted publickey for core from 10.0.0.1 port 56758 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:39.831412 sshd[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:39.837567 systemd-logind[1545]: New session 22 of user core. Sep 13 00:01:39.844370 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:01:41.128874 sshd[5921]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:41.139340 systemd[1]: Started sshd@22-10.0.0.22:22-10.0.0.1:34888.service - OpenSSH per-connection server daemon (10.0.0.1:34888). Sep 13 00:01:41.139866 systemd[1]: sshd@21-10.0.0.22:22-10.0.0.1:56758.service: Deactivated successfully. Sep 13 00:01:41.147556 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:01:41.149083 systemd-logind[1545]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:01:41.151338 systemd-logind[1545]: Removed session 22. Sep 13 00:01:41.202884 sshd[5938]: Accepted publickey for core from 10.0.0.1 port 34888 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:41.205978 sshd[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:41.216052 systemd-logind[1545]: New session 23 of user core. Sep 13 00:01:41.221903 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 13 00:01:42.253416 containerd[1571]: time="2025-09-13T00:01:42.253356578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:42.254753 containerd[1571]: time="2025-09-13T00:01:42.254632418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:01:42.257934 containerd[1571]: time="2025-09-13T00:01:42.257901752Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:42.260898 containerd[1571]: time="2025-09-13T00:01:42.260864404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:42.261747 containerd[1571]: time="2025-09-13T00:01:42.261457880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.581749019s" Sep 13 00:01:42.261747 containerd[1571]: time="2025-09-13T00:01:42.261499299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:01:42.303491 containerd[1571]: time="2025-09-13T00:01:42.303180765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:01:42.453878 containerd[1571]: time="2025-09-13T00:01:42.453832234Z" level=info msg="CreateContainer within sandbox \"2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:01:42.510221 systemd-resolved[1459]: Under memory pressure, flushing caches. Sep 13 00:01:42.522541 systemd-journald[1149]: Under memory pressure, flushing caches. Sep 13 00:01:42.510277 systemd-resolved[1459]: Flushed all caches. 
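[annotation] The paired systemd-resolved/systemd-journald "Under memory pressure, flushing caches" messages above are driven by the kernel's pressure stall information (PSI), which recent systemd services watch to shed caches when memory is tight. A minimal Go sketch of reading the same signal directly, assuming a kernel built with CONFIG_PSI exposing /proc/pressure/memory; this illustrates the underlying interface, not systemd's implementation:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/proc/pressure/memory")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Typical contents:
        //   some avg10=0.00 avg60=0.00 avg300=0.00 total=12345
        //   full avg10=0.00 avg60=0.00 avg300=0.00 total=6789
        for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 {
                fmt.Printf("%s memory pressure: %s\n", fields[0], fields[1])
            }
        }
    }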
Sep 13 00:01:42.613011 containerd[1571]: time="2025-09-13T00:01:42.612825893Z" level=info msg="CreateContainer within sandbox \"2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f2dd52cddd9a07296f599ca2a7b9ea4965f5afef9d24900bd2ec7e28e6c8dd8d\"" Sep 13 00:01:42.615931 containerd[1571]: time="2025-09-13T00:01:42.615906579Z" level=info msg="StartContainer for \"f2dd52cddd9a07296f599ca2a7b9ea4965f5afef9d24900bd2ec7e28e6c8dd8d\"" Sep 13 00:01:42.820765 containerd[1571]: time="2025-09-13T00:01:42.820603976Z" level=info msg="StartContainer for \"f2dd52cddd9a07296f599ca2a7b9ea4965f5afef9d24900bd2ec7e28e6c8dd8d\" returns successfully" Sep 13 00:01:43.106736 kubelet[2662]: I0913 00:01:43.106379 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-wc4bn" podStartSLOduration=64.208467098 podStartE2EDuration="1m22.102659252s" podCreationTimestamp="2025-09-13 00:00:21 +0000 UTC" firstStartedPulling="2025-09-13 00:01:18.785322599 +0000 UTC m=+76.579171045" lastFinishedPulling="2025-09-13 00:01:36.679514753 +0000 UTC m=+94.473363199" observedRunningTime="2025-09-13 00:01:37.132979374 +0000 UTC m=+94.926827820" watchObservedRunningTime="2025-09-13 00:01:43.102659252 +0000 UTC m=+100.896507698" Sep 13 00:01:43.106736 kubelet[2662]: I0913 00:01:43.106690 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9cb7ff6b6-87ccw" podStartSLOduration=58.478659343 podStartE2EDuration="1m20.106677526s" podCreationTimestamp="2025-09-13 00:00:23 +0000 UTC" firstStartedPulling="2025-09-13 00:01:20.668405349 +0000 UTC m=+78.462253795" lastFinishedPulling="2025-09-13 00:01:42.296423532 +0000 UTC m=+100.090271978" observedRunningTime="2025-09-13 00:01:43.101409111 +0000 UTC m=+100.895257557" watchObservedRunningTime="2025-09-13 00:01:43.106677526 +0000 UTC m=+100.900525972" Sep 13 00:01:44.503579 sshd[5938]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:44.517413 systemd[1]: Started sshd@23-10.0.0.22:22-10.0.0.1:34890.service - OpenSSH per-connection server daemon (10.0.0.1:34890). Sep 13 00:01:44.517967 systemd[1]: sshd@22-10.0.0.22:22-10.0.0.1:34888.service: Deactivated successfully. Sep 13 00:01:44.522475 systemd-logind[1545]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:01:44.522717 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:01:44.523741 systemd-logind[1545]: Removed session 23. Sep 13 00:01:44.561133 systemd-journald[1149]: Under memory pressure, flushing caches. Sep 13 00:01:44.559514 systemd-resolved[1459]: Under memory pressure, flushing caches. Sep 13 00:01:44.559548 systemd-resolved[1459]: Flushed all caches. Sep 13 00:01:44.595614 sshd[6073]: Accepted publickey for core from 10.0.0.1 port 34890 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:44.597621 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:44.602775 systemd-logind[1545]: New session 24 of user core. Sep 13 00:01:44.612459 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:01:45.437200 sshd[6073]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:45.450118 systemd[1]: Started sshd@24-10.0.0.22:22-10.0.0.1:34904.service - OpenSSH per-connection server daemon (10.0.0.1:34904). 
Sep 13 00:01:45.450860 systemd[1]: sshd@23-10.0.0.22:22-10.0.0.1:34890.service: Deactivated successfully. Sep 13 00:01:45.462956 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:01:45.468188 systemd-logind[1545]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:01:45.470098 systemd-logind[1545]: Removed session 24. Sep 13 00:01:45.499543 sshd[6092]: Accepted publickey for core from 10.0.0.1 port 34904 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:45.503615 sshd[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:45.509689 systemd-logind[1545]: New session 25 of user core. Sep 13 00:01:45.515386 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 13 00:01:45.694352 sshd[6092]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:45.698779 systemd[1]: sshd@24-10.0.0.22:22-10.0.0.1:34904.service: Deactivated successfully. Sep 13 00:01:45.708300 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:01:45.709471 systemd-logind[1545]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:01:45.710692 systemd-logind[1545]: Removed session 25. Sep 13 00:01:45.744482 containerd[1571]: time="2025-09-13T00:01:45.744406032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:45.749233 containerd[1571]: time="2025-09-13T00:01:45.749156720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:01:45.757846 containerd[1571]: time="2025-09-13T00:01:45.757804919Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:45.766265 containerd[1571]: time="2025-09-13T00:01:45.766200560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:45.766911 containerd[1571]: time="2025-09-13T00:01:45.766876742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.463648557s" Sep 13 00:01:45.766960 containerd[1571]: time="2025-09-13T00:01:45.766915556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:01:45.770741 containerd[1571]: time="2025-09-13T00:01:45.770692387Z" level=info msg="CreateContainer within sandbox \"4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:01:45.805513 containerd[1571]: time="2025-09-13T00:01:45.805444457Z" level=info msg="CreateContainer within sandbox \"4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"6bd2ce60c102dcc487ca39b5ba2947285e0a2af860c006c96970c3f1fda2e8a3\"" Sep 13 00:01:45.807540 containerd[1571]: time="2025-09-13T00:01:45.806355715Z" level=info msg="StartContainer for \"6bd2ce60c102dcc487ca39b5ba2947285e0a2af860c006c96970c3f1fda2e8a3\"" Sep 13 00:01:45.916111 containerd[1571]: time="2025-09-13T00:01:45.916035686Z" level=info msg="StartContainer for \"6bd2ce60c102dcc487ca39b5ba2947285e0a2af860c006c96970c3f1fda2e8a3\" returns successfully" Sep 13 00:01:46.470678 kubelet[2662]: I0913 00:01:46.470620 2662 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:01:46.470678 kubelet[2662]: I0913 00:01:46.470670 2662 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:01:49.979892 systemd[1]: run-containerd-runc-k8s.io-f2dd52cddd9a07296f599ca2a7b9ea4965f5afef9d24900bd2ec7e28e6c8dd8d-runc.OYRIQi.mount: Deactivated successfully. Sep 13 00:01:50.173346 kubelet[2662]: I0913 00:01:50.173246 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fqbfr" podStartSLOduration=58.050386562 podStartE2EDuration="1m28.172745862s" podCreationTimestamp="2025-09-13 00:00:22 +0000 UTC" firstStartedPulling="2025-09-13 00:01:15.645522137 +0000 UTC m=+73.439370583" lastFinishedPulling="2025-09-13 00:01:45.767881437 +0000 UTC m=+103.561729883" observedRunningTime="2025-09-13 00:01:46.13534691 +0000 UTC m=+103.929195366" watchObservedRunningTime="2025-09-13 00:01:50.172745862 +0000 UTC m=+107.966594318" Sep 13 00:01:50.704420 systemd[1]: Started sshd@25-10.0.0.22:22-10.0.0.1:57980.service - OpenSSH per-connection server daemon (10.0.0.1:57980). Sep 13 00:01:50.762434 sshd[6198]: Accepted publickey for core from 10.0.0.1 port 57980 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:50.766321 sshd[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:50.783407 systemd-logind[1545]: New session 26 of user core. Sep 13 00:01:50.792547 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 13 00:01:50.935898 sshd[6198]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:50.940178 systemd[1]: sshd@25-10.0.0.22:22-10.0.0.1:57980.service: Deactivated successfully. Sep 13 00:01:50.943861 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:01:50.946136 systemd-logind[1545]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:01:50.947233 systemd-logind[1545]: Removed session 26. Sep 13 00:01:55.951282 systemd[1]: Started sshd@26-10.0.0.22:22-10.0.0.1:57994.service - OpenSSH per-connection server daemon (10.0.0.1:57994). Sep 13 00:01:55.992139 sshd[6238]: Accepted publickey for core from 10.0.0.1 port 57994 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:01:55.994409 sshd[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:55.999988 systemd-logind[1545]: New session 27 of user core. Sep 13 00:01:56.006549 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 13 00:01:56.152270 sshd[6238]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:56.157715 systemd-logind[1545]: Session 27 logged out. Waiting for processes to exit. 
Sep 13 00:01:56.159252 systemd[1]: sshd@26-10.0.0.22:22-10.0.0.1:57994.service: Deactivated successfully. Sep 13 00:01:56.167685 systemd[1]: session-27.scope: Deactivated successfully. Sep 13 00:01:56.169394 systemd-logind[1545]: Removed session 27. Sep 13 00:02:01.171359 systemd[1]: Started sshd@27-10.0.0.22:22-10.0.0.1:39130.service - OpenSSH per-connection server daemon (10.0.0.1:39130). Sep 13 00:02:01.213029 sshd[6254]: Accepted publickey for core from 10.0.0.1 port 39130 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:02:01.214871 sshd[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:01.220036 systemd-logind[1545]: New session 28 of user core. Sep 13 00:02:01.230495 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 13 00:02:01.412654 sshd[6254]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:01.417558 systemd[1]: sshd@27-10.0.0.22:22-10.0.0.1:39130.service: Deactivated successfully. Sep 13 00:02:01.420707 systemd[1]: session-28.scope: Deactivated successfully. Sep 13 00:02:01.421239 systemd-logind[1545]: Session 28 logged out. Waiting for processes to exit. Sep 13 00:02:01.422706 systemd-logind[1545]: Removed session 28. Sep 13 00:02:01.448861 update_engine[1550]: I20250913 00:02:01.448760 1550 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 13 00:02:01.448861 update_engine[1550]: I20250913 00:02:01.448843 1550 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 13 00:02:01.450458 update_engine[1550]: I20250913 00:02:01.450424 1550 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 13 00:02:01.451253 update_engine[1550]: I20250913 00:02:01.451217 1550 omaha_request_params.cc:62] Current group set to lts Sep 13 00:02:01.451389 update_engine[1550]: I20250913 00:02:01.451368 1550 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 13 00:02:01.451389 update_engine[1550]: I20250913 00:02:01.451382 1550 update_attempter.cc:643] Scheduling an action processor start. Sep 13 00:02:01.451446 update_engine[1550]: I20250913 00:02:01.451406 1550 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 00:02:01.451481 update_engine[1550]: I20250913 00:02:01.451452 1550 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 13 00:02:01.451544 update_engine[1550]: I20250913 00:02:01.451521 1550 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 13 00:02:01.451544 update_engine[1550]: I20250913 00:02:01.451535 1550 omaha_request_action.cc:272] Request: Sep 13 00:02:01.451544 update_engine[1550]: Sep 13 00:02:01.451544 update_engine[1550]: Sep 13 00:02:01.451544 update_engine[1550]: Sep 13 00:02:01.451544 update_engine[1550]: Sep 13 00:02:01.451544 update_engine[1550]: Sep 13 00:02:01.451544 update_engine[1550]: Sep 13 00:02:01.451544 update_engine[1550]: Sep 13 00:02:01.451544 update_engine[1550]: Sep 13 00:02:01.451818 update_engine[1550]: I20250913 00:02:01.451545 1550 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:02:01.457394 update_engine[1550]: I20250913 00:02:01.457341 1550 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:02:01.457730 update_engine[1550]: I20250913 00:02:01.457670 1550 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
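[annotation] The update_engine/libcurl_http_fetcher sequence above describes a fetch retried on a timer: the Omaha request is posted to the host "disabled" (updates are switched off on this machine), name resolution fails, and the fetcher logs "No HTTP response, retry 1" after its one-second timeout source fires. A generic Go sketch of that retry-with-timer pattern, under the assumption stated here; it is not update_engine's actual code:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // fetchWithRetries retries a GET with a widening delay between
    // attempts, echoing the fetcher's "retry N" log lines. On success
    // the caller is responsible for closing resp.Body.
    func fetchWithRetries(url string, attempts int) (*http.Response, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            resp, err := http.Get(url)
            if err == nil {
                return resp, nil
            }
            lastErr = err
            fmt.Printf("no HTTP response, retry %d\n", i+1)
            time.Sleep(time.Duration(i+1) * time.Second)
        }
        return nil, lastErr
    }

    func main() {
        // "disabled" is the literal server name from the log; DNS fails.
        if _, err := fetchWithRetries("https://disabled/", 3); err != nil {
            fmt.Println("giving up:", err)
        }
    }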
Sep 13 00:02:01.465672 locksmithd[1606]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 13 00:02:01.472076 update_engine[1550]: E20250913 00:02:01.472007 1550 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:02:01.472251 update_engine[1550]: I20250913 00:02:01.472124 1550 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 13 00:02:02.361217 containerd[1571]: time="2025-09-13T00:02:02.360061200Z" level=info msg="StopPodSandbox for \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\"" Sep 13 00:02:03.035323 containerd[1571]: 2025-09-13 00:02:02.909 [WARNING][6282] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0", GenerateName:"calico-apiserver-57d7979648-", Namespace:"calico-apiserver", SelfLink:"", UID:"e037d004-e75e-4824-a87e-fd02a1b5adaa", ResourceVersion:"1303", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7979648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519", Pod:"calico-apiserver-57d7979648-crgkm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2917110676a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:03.035323 containerd[1571]: 2025-09-13 00:02:02.912 [INFO][6282] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Sep 13 00:02:03.035323 containerd[1571]: 2025-09-13 00:02:02.912 [INFO][6282] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" iface="eth0" netns="" Sep 13 00:02:03.035323 containerd[1571]: 2025-09-13 00:02:02.912 [INFO][6282] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Sep 13 00:02:03.035323 containerd[1571]: 2025-09-13 00:02:02.912 [INFO][6282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Sep 13 00:02:03.035323 containerd[1571]: 2025-09-13 00:02:03.017 [INFO][6290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" HandleID="k8s-pod-network.94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Workload="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:02:03.035323 containerd[1571]: 2025-09-13 00:02:03.018 [INFO][6290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:03.035323 containerd[1571]: 2025-09-13 00:02:03.018 [INFO][6290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:03.035323 containerd[1571]: 2025-09-13 00:02:03.027 [WARNING][6290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" HandleID="k8s-pod-network.94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Workload="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:02:03.035323 containerd[1571]: 2025-09-13 00:02:03.027 [INFO][6290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" HandleID="k8s-pod-network.94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Workload="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:02:03.035323 containerd[1571]: 2025-09-13 00:02:03.028 [INFO][6290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:03.035323 containerd[1571]: 2025-09-13 00:02:03.031 [INFO][6282] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Sep 13 00:02:03.044586 containerd[1571]: time="2025-09-13T00:02:03.044526846Z" level=info msg="TearDown network for sandbox \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\" successfully" Sep 13 00:02:03.044586 containerd[1571]: time="2025-09-13T00:02:03.044566902Z" level=info msg="StopPodSandbox for \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\" returns successfully" Sep 13 00:02:03.115860 containerd[1571]: time="2025-09-13T00:02:03.115777494Z" level=info msg="RemovePodSandbox for \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\"" Sep 13 00:02:03.118629 containerd[1571]: time="2025-09-13T00:02:03.118590942Z" level=info msg="Forcibly stopping sandbox \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\"" Sep 13 00:02:03.226848 containerd[1571]: 2025-09-13 00:02:03.170 [WARNING][6310] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0", GenerateName:"calico-apiserver-57d7979648-", Namespace:"calico-apiserver", SelfLink:"", UID:"e037d004-e75e-4824-a87e-fd02a1b5adaa", ResourceVersion:"1303", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7979648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8b7cbe80c7a73a9399106f80f4a4380cd103f3c7d3b4f278b4e8abe115c2519", Pod:"calico-apiserver-57d7979648-crgkm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2917110676a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:03.226848 containerd[1571]: 2025-09-13 00:02:03.170 [INFO][6310] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Sep 13 00:02:03.226848 containerd[1571]: 2025-09-13 00:02:03.170 [INFO][6310] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" iface="eth0" netns="" Sep 13 00:02:03.226848 containerd[1571]: 2025-09-13 00:02:03.170 [INFO][6310] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Sep 13 00:02:03.226848 containerd[1571]: 2025-09-13 00:02:03.171 [INFO][6310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Sep 13 00:02:03.226848 containerd[1571]: 2025-09-13 00:02:03.202 [INFO][6318] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" HandleID="k8s-pod-network.94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Workload="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:02:03.226848 containerd[1571]: 2025-09-13 00:02:03.203 [INFO][6318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:03.226848 containerd[1571]: 2025-09-13 00:02:03.203 [INFO][6318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:03.226848 containerd[1571]: 2025-09-13 00:02:03.213 [WARNING][6318] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" HandleID="k8s-pod-network.94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Workload="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:02:03.226848 containerd[1571]: 2025-09-13 00:02:03.213 [INFO][6318] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" HandleID="k8s-pod-network.94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Workload="localhost-k8s-calico--apiserver--57d7979648--crgkm-eth0" Sep 13 00:02:03.226848 containerd[1571]: 2025-09-13 00:02:03.215 [INFO][6318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:03.226848 containerd[1571]: 2025-09-13 00:02:03.219 [INFO][6310] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63" Sep 13 00:02:03.227506 containerd[1571]: time="2025-09-13T00:02:03.226976970Z" level=info msg="TearDown network for sandbox \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\" successfully" Sep 13 00:02:03.342202 kubelet[2662]: E0913 00:02:03.341499 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:02:03.343013 containerd[1571]: time="2025-09-13T00:02:03.342958415Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:03.361793 containerd[1571]: time="2025-09-13T00:02:03.361686558Z" level=info msg="RemovePodSandbox \"94ad43a97f9b800234e5832b947be0c4ad545a821e9657de9754eca99b952c63\" returns successfully" Sep 13 00:02:03.368791 containerd[1571]: time="2025-09-13T00:02:03.368732808Z" level=info msg="StopPodSandbox for \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\"" Sep 13 00:02:03.460288 containerd[1571]: 2025-09-13 00:02:03.414 [WARNING][6335] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d7979648--xww86-eth0", GenerateName:"calico-apiserver-57d7979648-", Namespace:"calico-apiserver", SelfLink:"", UID:"5cffd678-440b-46f8-bc05-0ed615dceeef", ResourceVersion:"1279", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7979648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc", Pod:"calico-apiserver-57d7979648-xww86", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali273ab2ab71e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:03.460288 containerd[1571]: 2025-09-13 00:02:03.414 [INFO][6335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Sep 13 00:02:03.460288 containerd[1571]: 2025-09-13 00:02:03.414 [INFO][6335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" iface="eth0" netns="" Sep 13 00:02:03.460288 containerd[1571]: 2025-09-13 00:02:03.414 [INFO][6335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Sep 13 00:02:03.460288 containerd[1571]: 2025-09-13 00:02:03.414 [INFO][6335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Sep 13 00:02:03.460288 containerd[1571]: 2025-09-13 00:02:03.444 [INFO][6344] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" HandleID="k8s-pod-network.b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Workload="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:02:03.460288 containerd[1571]: 2025-09-13 00:02:03.445 [INFO][6344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:03.460288 containerd[1571]: 2025-09-13 00:02:03.445 [INFO][6344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:03.460288 containerd[1571]: 2025-09-13 00:02:03.451 [WARNING][6344] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" HandleID="k8s-pod-network.b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Workload="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:02:03.460288 containerd[1571]: 2025-09-13 00:02:03.451 [INFO][6344] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" HandleID="k8s-pod-network.b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Workload="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:02:03.460288 containerd[1571]: 2025-09-13 00:02:03.453 [INFO][6344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:03.460288 containerd[1571]: 2025-09-13 00:02:03.456 [INFO][6335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Sep 13 00:02:03.460924 containerd[1571]: time="2025-09-13T00:02:03.460353553Z" level=info msg="TearDown network for sandbox \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\" successfully" Sep 13 00:02:03.460924 containerd[1571]: time="2025-09-13T00:02:03.460391253Z" level=info msg="StopPodSandbox for \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\" returns successfully" Sep 13 00:02:03.461259 containerd[1571]: time="2025-09-13T00:02:03.461169615Z" level=info msg="RemovePodSandbox for \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\"" Sep 13 00:02:03.461259 containerd[1571]: time="2025-09-13T00:02:03.461232664Z" level=info msg="Forcibly stopping sandbox \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\"" Sep 13 00:02:03.544222 containerd[1571]: 2025-09-13 00:02:03.505 [WARNING][6361] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d7979648--xww86-eth0", GenerateName:"calico-apiserver-57d7979648-", Namespace:"calico-apiserver", SelfLink:"", UID:"5cffd678-440b-46f8-bc05-0ed615dceeef", ResourceVersion:"1279", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7979648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"075eed7afbdae5fdd1e3e7a5e9ebdf9c8831af5fcd98c4f55d92170c9413e7fc", Pod:"calico-apiserver-57d7979648-xww86", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali273ab2ab71e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:03.544222 containerd[1571]: 2025-09-13 00:02:03.506 [INFO][6361] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Sep 13 00:02:03.544222 containerd[1571]: 2025-09-13 00:02:03.506 [INFO][6361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" iface="eth0" netns="" Sep 13 00:02:03.544222 containerd[1571]: 2025-09-13 00:02:03.506 [INFO][6361] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Sep 13 00:02:03.544222 containerd[1571]: 2025-09-13 00:02:03.506 [INFO][6361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Sep 13 00:02:03.544222 containerd[1571]: 2025-09-13 00:02:03.529 [INFO][6370] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" HandleID="k8s-pod-network.b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Workload="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:02:03.544222 containerd[1571]: 2025-09-13 00:02:03.529 [INFO][6370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:03.544222 containerd[1571]: 2025-09-13 00:02:03.529 [INFO][6370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:03.544222 containerd[1571]: 2025-09-13 00:02:03.536 [WARNING][6370] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" HandleID="k8s-pod-network.b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Workload="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:02:03.544222 containerd[1571]: 2025-09-13 00:02:03.536 [INFO][6370] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" HandleID="k8s-pod-network.b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Workload="localhost-k8s-calico--apiserver--57d7979648--xww86-eth0" Sep 13 00:02:03.544222 containerd[1571]: 2025-09-13 00:02:03.537 [INFO][6370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:03.544222 containerd[1571]: 2025-09-13 00:02:03.541 [INFO][6361] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c" Sep 13 00:02:03.544724 containerd[1571]: time="2025-09-13T00:02:03.544281034Z" level=info msg="TearDown network for sandbox \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\" successfully" Sep 13 00:02:03.549015 containerd[1571]: time="2025-09-13T00:02:03.548971141Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:03.549216 containerd[1571]: time="2025-09-13T00:02:03.549068464Z" level=info msg="RemovePodSandbox \"b6b437d2a62556402296819fd929d4fcd13b673e01363e01b3fc11dc02b9912c\" returns successfully" Sep 13 00:02:03.549715 containerd[1571]: time="2025-09-13T00:02:03.549684619Z" level=info msg="StopPodSandbox for \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\"" Sep 13 00:02:03.648669 containerd[1571]: 2025-09-13 00:02:03.590 [WARNING][6388] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" WorkloadEndpoint="localhost-k8s-whisker--5b8858cf9b--mrc6w-eth0" Sep 13 00:02:03.648669 containerd[1571]: 2025-09-13 00:02:03.590 [INFO][6388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Sep 13 00:02:03.648669 containerd[1571]: 2025-09-13 00:02:03.590 [INFO][6388] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" iface="eth0" netns="" Sep 13 00:02:03.648669 containerd[1571]: 2025-09-13 00:02:03.590 [INFO][6388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Sep 13 00:02:03.648669 containerd[1571]: 2025-09-13 00:02:03.590 [INFO][6388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Sep 13 00:02:03.648669 containerd[1571]: 2025-09-13 00:02:03.629 [INFO][6397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" HandleID="k8s-pod-network.990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Workload="localhost-k8s-whisker--5b8858cf9b--mrc6w-eth0" Sep 13 00:02:03.648669 containerd[1571]: 2025-09-13 00:02:03.629 [INFO][6397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:03.648669 containerd[1571]: 2025-09-13 00:02:03.629 [INFO][6397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:03.648669 containerd[1571]: 2025-09-13 00:02:03.638 [WARNING][6397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" HandleID="k8s-pod-network.990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Workload="localhost-k8s-whisker--5b8858cf9b--mrc6w-eth0" Sep 13 00:02:03.648669 containerd[1571]: 2025-09-13 00:02:03.639 [INFO][6397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" HandleID="k8s-pod-network.990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Workload="localhost-k8s-whisker--5b8858cf9b--mrc6w-eth0" Sep 13 00:02:03.648669 containerd[1571]: 2025-09-13 00:02:03.641 [INFO][6397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:03.648669 containerd[1571]: 2025-09-13 00:02:03.644 [INFO][6388] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Sep 13 00:02:03.649675 containerd[1571]: time="2025-09-13T00:02:03.649241456Z" level=info msg="TearDown network for sandbox \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\" successfully" Sep 13 00:02:03.649675 containerd[1571]: time="2025-09-13T00:02:03.649279508Z" level=info msg="StopPodSandbox for \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\" returns successfully" Sep 13 00:02:03.650367 containerd[1571]: time="2025-09-13T00:02:03.649992286Z" level=info msg="RemovePodSandbox for \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\"" Sep 13 00:02:03.650367 containerd[1571]: time="2025-09-13T00:02:03.650019767Z" level=info msg="Forcibly stopping sandbox \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\"" Sep 13 00:02:03.746280 containerd[1571]: 2025-09-13 00:02:03.696 [WARNING][6414] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" WorkloadEndpoint="localhost-k8s-whisker--5b8858cf9b--mrc6w-eth0" Sep 13 00:02:03.746280 containerd[1571]: 2025-09-13 00:02:03.696 [INFO][6414] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Sep 13 00:02:03.746280 containerd[1571]: 2025-09-13 00:02:03.696 [INFO][6414] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" iface="eth0" netns="" Sep 13 00:02:03.746280 containerd[1571]: 2025-09-13 00:02:03.696 [INFO][6414] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Sep 13 00:02:03.746280 containerd[1571]: 2025-09-13 00:02:03.696 [INFO][6414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Sep 13 00:02:03.746280 containerd[1571]: 2025-09-13 00:02:03.724 [INFO][6422] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" HandleID="k8s-pod-network.990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Workload="localhost-k8s-whisker--5b8858cf9b--mrc6w-eth0" Sep 13 00:02:03.746280 containerd[1571]: 2025-09-13 00:02:03.724 [INFO][6422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:03.746280 containerd[1571]: 2025-09-13 00:02:03.724 [INFO][6422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:03.746280 containerd[1571]: 2025-09-13 00:02:03.734 [WARNING][6422] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" HandleID="k8s-pod-network.990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Workload="localhost-k8s-whisker--5b8858cf9b--mrc6w-eth0" Sep 13 00:02:03.746280 containerd[1571]: 2025-09-13 00:02:03.735 [INFO][6422] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" HandleID="k8s-pod-network.990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Workload="localhost-k8s-whisker--5b8858cf9b--mrc6w-eth0" Sep 13 00:02:03.746280 containerd[1571]: 2025-09-13 00:02:03.737 [INFO][6422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:03.746280 containerd[1571]: 2025-09-13 00:02:03.742 [INFO][6414] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f" Sep 13 00:02:03.747139 containerd[1571]: time="2025-09-13T00:02:03.746334306Z" level=info msg="TearDown network for sandbox \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\" successfully" Sep 13 00:02:03.751548 containerd[1571]: time="2025-09-13T00:02:03.751445477Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:03.751623 containerd[1571]: time="2025-09-13T00:02:03.751604508Z" level=info msg="RemovePodSandbox \"990d10e36ba006b83a318fc8d2466bf52c9e34ded938ccd64a89e932f4af886f\" returns successfully" Sep 13 00:02:03.752424 containerd[1571]: time="2025-09-13T00:02:03.752373000Z" level=info msg="StopPodSandbox for \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\"" Sep 13 00:02:03.901787 containerd[1571]: 2025-09-13 00:02:03.812 [WARNING][6441] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0", GenerateName:"calico-kube-controllers-9cb7ff6b6-", Namespace:"calico-system", SelfLink:"", UID:"d3c5674c-deb9-482b-8272-20808de1f68c", ResourceVersion:"1387", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9cb7ff6b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa", Pod:"calico-kube-controllers-9cb7ff6b6-87ccw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa9704769f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:03.901787 containerd[1571]: 2025-09-13 00:02:03.812 [INFO][6441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Sep 13 00:02:03.901787 containerd[1571]: 2025-09-13 00:02:03.813 [INFO][6441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" iface="eth0" netns="" Sep 13 00:02:03.901787 containerd[1571]: 2025-09-13 00:02:03.813 [INFO][6441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Sep 13 00:02:03.901787 containerd[1571]: 2025-09-13 00:02:03.813 [INFO][6441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Sep 13 00:02:03.901787 containerd[1571]: 2025-09-13 00:02:03.883 [INFO][6449] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" HandleID="k8s-pod-network.378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Workload="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:02:03.901787 containerd[1571]: 2025-09-13 00:02:03.883 [INFO][6449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:03.901787 containerd[1571]: 2025-09-13 00:02:03.883 [INFO][6449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:03.901787 containerd[1571]: 2025-09-13 00:02:03.890 [WARNING][6449] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" HandleID="k8s-pod-network.378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Workload="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:02:03.901787 containerd[1571]: 2025-09-13 00:02:03.890 [INFO][6449] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" HandleID="k8s-pod-network.378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Workload="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:02:03.901787 containerd[1571]: 2025-09-13 00:02:03.893 [INFO][6449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:03.901787 containerd[1571]: 2025-09-13 00:02:03.897 [INFO][6441] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Sep 13 00:02:03.901787 containerd[1571]: time="2025-09-13T00:02:03.901526412Z" level=info msg="TearDown network for sandbox \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\" successfully" Sep 13 00:02:03.901787 containerd[1571]: time="2025-09-13T00:02:03.901570486Z" level=info msg="StopPodSandbox for \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\" returns successfully" Sep 13 00:02:03.902449 containerd[1571]: time="2025-09-13T00:02:03.902177673Z" level=info msg="RemovePodSandbox for \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\"" Sep 13 00:02:03.902449 containerd[1571]: time="2025-09-13T00:02:03.902213221Z" level=info msg="Forcibly stopping sandbox \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\"" Sep 13 00:02:04.212415 containerd[1571]: 2025-09-13 00:02:03.969 [WARNING][6467] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0", GenerateName:"calico-kube-controllers-9cb7ff6b6-", Namespace:"calico-system", SelfLink:"", UID:"d3c5674c-deb9-482b-8272-20808de1f68c", ResourceVersion:"1387", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9cb7ff6b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2cfafe279d1967cbc26e9a82d8c80522f34467878a414fafeea10bdd757a5efa", Pod:"calico-kube-controllers-9cb7ff6b6-87ccw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa9704769f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:04.212415 containerd[1571]: 2025-09-13 00:02:03.970 [INFO][6467] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Sep 13 00:02:04.212415 containerd[1571]: 2025-09-13 00:02:03.970 [INFO][6467] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" iface="eth0" netns="" Sep 13 00:02:04.212415 containerd[1571]: 2025-09-13 00:02:03.970 [INFO][6467] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Sep 13 00:02:04.212415 containerd[1571]: 2025-09-13 00:02:03.970 [INFO][6467] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Sep 13 00:02:04.212415 containerd[1571]: 2025-09-13 00:02:03.998 [INFO][6475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" HandleID="k8s-pod-network.378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Workload="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:02:04.212415 containerd[1571]: 2025-09-13 00:02:03.999 [INFO][6475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.212415 containerd[1571]: 2025-09-13 00:02:03.999 [INFO][6475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:04.212415 containerd[1571]: 2025-09-13 00:02:04.202 [WARNING][6475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" HandleID="k8s-pod-network.378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Workload="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:02:04.212415 containerd[1571]: 2025-09-13 00:02:04.202 [INFO][6475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" HandleID="k8s-pod-network.378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Workload="localhost-k8s-calico--kube--controllers--9cb7ff6b6--87ccw-eth0" Sep 13 00:02:04.212415 containerd[1571]: 2025-09-13 00:02:04.204 [INFO][6475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.212415 containerd[1571]: 2025-09-13 00:02:04.207 [INFO][6467] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b" Sep 13 00:02:04.213206 containerd[1571]: time="2025-09-13T00:02:04.212489479Z" level=info msg="TearDown network for sandbox \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\" successfully" Sep 13 00:02:04.240781 containerd[1571]: time="2025-09-13T00:02:04.240727001Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:04.240941 containerd[1571]: time="2025-09-13T00:02:04.240818284Z" level=info msg="RemovePodSandbox \"378ccc894d772389e53d9ad8770e3e5bf80b4153676f4e22fdadfb6991a1502b\" returns successfully" Sep 13 00:02:04.241364 containerd[1571]: time="2025-09-13T00:02:04.241335912Z" level=info msg="StopPodSandbox for \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\"" Sep 13 00:02:04.341430 containerd[1571]: 2025-09-13 00:02:04.284 [WARNING][6492] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d4868fae-b2aa-4106-8bb0-b85ce9739178", ResourceVersion:"1269", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18", Pod:"coredns-7c65d6cfc9-7tb5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283a82838fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:04.341430 containerd[1571]: 2025-09-13 00:02:04.285 [INFO][6492] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Sep 13 00:02:04.341430 containerd[1571]: 2025-09-13 00:02:04.285 [INFO][6492] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" iface="eth0" netns="" Sep 13 00:02:04.341430 containerd[1571]: 2025-09-13 00:02:04.285 [INFO][6492] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Sep 13 00:02:04.341430 containerd[1571]: 2025-09-13 00:02:04.285 [INFO][6492] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Sep 13 00:02:04.341430 containerd[1571]: 2025-09-13 00:02:04.324 [INFO][6500] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" HandleID="k8s-pod-network.73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Workload="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:02:04.341430 containerd[1571]: 2025-09-13 00:02:04.324 [INFO][6500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.341430 containerd[1571]: 2025-09-13 00:02:04.325 [INFO][6500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:04.341430 containerd[1571]: 2025-09-13 00:02:04.331 [WARNING][6500] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" HandleID="k8s-pod-network.73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Workload="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:02:04.341430 containerd[1571]: 2025-09-13 00:02:04.331 [INFO][6500] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" HandleID="k8s-pod-network.73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Workload="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:02:04.341430 containerd[1571]: 2025-09-13 00:02:04.333 [INFO][6500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.341430 containerd[1571]: 2025-09-13 00:02:04.337 [INFO][6492] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Sep 13 00:02:04.342730 containerd[1571]: time="2025-09-13T00:02:04.341444739Z" level=info msg="TearDown network for sandbox \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\" successfully" Sep 13 00:02:04.342730 containerd[1571]: time="2025-09-13T00:02:04.341480205Z" level=info msg="StopPodSandbox for \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\" returns successfully" Sep 13 00:02:04.342730 containerd[1571]: time="2025-09-13T00:02:04.342146364Z" level=info msg="RemovePodSandbox for \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\"" Sep 13 00:02:04.342730 containerd[1571]: time="2025-09-13T00:02:04.342182874Z" level=info msg="Forcibly stopping sandbox \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\"" Sep 13 00:02:04.428460 containerd[1571]: 2025-09-13 00:02:04.382 [WARNING][6518] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d4868fae-b2aa-4106-8bb0-b85ce9739178", ResourceVersion:"1269", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b569d609bcc30f508c3aa14a31202111bc3c7095640f8e17b440eef913a4bd18", Pod:"coredns-7c65d6cfc9-7tb5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali283a82838fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:04.428460 containerd[1571]: 2025-09-13 00:02:04.383 [INFO][6518] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Sep 13 00:02:04.428460 containerd[1571]: 2025-09-13 00:02:04.383 [INFO][6518] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" iface="eth0" netns="" Sep 13 00:02:04.428460 containerd[1571]: 2025-09-13 00:02:04.383 [INFO][6518] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Sep 13 00:02:04.428460 containerd[1571]: 2025-09-13 00:02:04.383 [INFO][6518] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Sep 13 00:02:04.428460 containerd[1571]: 2025-09-13 00:02:04.411 [INFO][6526] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" HandleID="k8s-pod-network.73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Workload="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:02:04.428460 containerd[1571]: 2025-09-13 00:02:04.411 [INFO][6526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.428460 containerd[1571]: 2025-09-13 00:02:04.411 [INFO][6526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:04.428460 containerd[1571]: 2025-09-13 00:02:04.418 [WARNING][6526] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" HandleID="k8s-pod-network.73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Workload="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:02:04.428460 containerd[1571]: 2025-09-13 00:02:04.418 [INFO][6526] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" HandleID="k8s-pod-network.73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Workload="localhost-k8s-coredns--7c65d6cfc9--7tb5v-eth0" Sep 13 00:02:04.428460 containerd[1571]: 2025-09-13 00:02:04.420 [INFO][6526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.428460 containerd[1571]: 2025-09-13 00:02:04.425 [INFO][6518] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437" Sep 13 00:02:04.429376 containerd[1571]: time="2025-09-13T00:02:04.428522294Z" level=info msg="TearDown network for sandbox \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\" successfully" Sep 13 00:02:04.433429 containerd[1571]: time="2025-09-13T00:02:04.433392920Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:04.433511 containerd[1571]: time="2025-09-13T00:02:04.433459756Z" level=info msg="RemovePodSandbox \"73b64521e2b1779b2e452771577498f920b6ade0cad8efd8073dfc61819ee437\" returns successfully" Sep 13 00:02:04.433991 containerd[1571]: time="2025-09-13T00:02:04.433965492Z" level=info msg="StopPodSandbox for \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\"" Sep 13 00:02:04.515306 containerd[1571]: 2025-09-13 00:02:04.472 [WARNING][6544] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fqbfr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba4c914c-84cd-4650-afa0-e3d23d56f99f", ResourceVersion:"1431", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b", Pod:"csi-node-driver-fqbfr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califae63bc47b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:04.515306 containerd[1571]: 2025-09-13 00:02:04.473 [INFO][6544] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Sep 13 00:02:04.515306 containerd[1571]: 2025-09-13 00:02:04.473 [INFO][6544] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" iface="eth0" netns="" Sep 13 00:02:04.515306 containerd[1571]: 2025-09-13 00:02:04.473 [INFO][6544] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Sep 13 00:02:04.515306 containerd[1571]: 2025-09-13 00:02:04.473 [INFO][6544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Sep 13 00:02:04.515306 containerd[1571]: 2025-09-13 00:02:04.498 [INFO][6553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" HandleID="k8s-pod-network.b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Workload="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:02:04.515306 containerd[1571]: 2025-09-13 00:02:04.498 [INFO][6553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.515306 containerd[1571]: 2025-09-13 00:02:04.498 [INFO][6553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:04.515306 containerd[1571]: 2025-09-13 00:02:04.505 [WARNING][6553] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" HandleID="k8s-pod-network.b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Workload="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:02:04.515306 containerd[1571]: 2025-09-13 00:02:04.505 [INFO][6553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" HandleID="k8s-pod-network.b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Workload="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:02:04.515306 containerd[1571]: 2025-09-13 00:02:04.508 [INFO][6553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.515306 containerd[1571]: 2025-09-13 00:02:04.511 [INFO][6544] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Sep 13 00:02:04.515306 containerd[1571]: time="2025-09-13T00:02:04.515262873Z" level=info msg="TearDown network for sandbox \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\" successfully" Sep 13 00:02:04.515306 containerd[1571]: time="2025-09-13T00:02:04.515300434Z" level=info msg="StopPodSandbox for \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\" returns successfully" Sep 13 00:02:04.516065 containerd[1571]: time="2025-09-13T00:02:04.515992422Z" level=info msg="RemovePodSandbox for \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\"" Sep 13 00:02:04.516065 containerd[1571]: time="2025-09-13T00:02:04.516160781Z" level=info msg="Forcibly stopping sandbox \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\"" Sep 13 00:02:04.682276 containerd[1571]: 2025-09-13 00:02:04.573 [WARNING][6571] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fqbfr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba4c914c-84cd-4650-afa0-e3d23d56f99f", ResourceVersion:"1431", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4713edd730dccee1918f7368ded3703936c78bc49fa64dc404f0a153869e478b", Pod:"csi-node-driver-fqbfr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califae63bc47b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:04.682276 containerd[1571]: 2025-09-13 00:02:04.580 [INFO][6571] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Sep 13 00:02:04.682276 containerd[1571]: 2025-09-13 00:02:04.580 [INFO][6571] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" iface="eth0" netns="" Sep 13 00:02:04.682276 containerd[1571]: 2025-09-13 00:02:04.580 [INFO][6571] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Sep 13 00:02:04.682276 containerd[1571]: 2025-09-13 00:02:04.580 [INFO][6571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Sep 13 00:02:04.682276 containerd[1571]: 2025-09-13 00:02:04.661 [INFO][6580] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" HandleID="k8s-pod-network.b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Workload="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:02:04.682276 containerd[1571]: 2025-09-13 00:02:04.661 [INFO][6580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.682276 containerd[1571]: 2025-09-13 00:02:04.661 [INFO][6580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:04.682276 containerd[1571]: 2025-09-13 00:02:04.671 [WARNING][6580] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" HandleID="k8s-pod-network.b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Workload="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:02:04.682276 containerd[1571]: 2025-09-13 00:02:04.671 [INFO][6580] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" HandleID="k8s-pod-network.b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Workload="localhost-k8s-csi--node--driver--fqbfr-eth0" Sep 13 00:02:04.682276 containerd[1571]: 2025-09-13 00:02:04.674 [INFO][6580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.682276 containerd[1571]: 2025-09-13 00:02:04.678 [INFO][6571] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db" Sep 13 00:02:04.683089 containerd[1571]: time="2025-09-13T00:02:04.683033236Z" level=info msg="TearDown network for sandbox \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\" successfully" Sep 13 00:02:04.696575 containerd[1571]: time="2025-09-13T00:02:04.696514097Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:04.696749 containerd[1571]: time="2025-09-13T00:02:04.696603416Z" level=info msg="RemovePodSandbox \"b1107d67b657ebbf52513d68f583b111a37095f7b6f5a29b3dbcf62afd1f23db\" returns successfully" Sep 13 00:02:04.697616 containerd[1571]: time="2025-09-13T00:02:04.697590702Z" level=info msg="StopPodSandbox for \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\"" Sep 13 00:02:04.785479 containerd[1571]: 2025-09-13 00:02:04.741 [WARNING][6598] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4e3bb006-8c83-4e24-90c2-984bdd355c97", ResourceVersion:"1290", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d", Pod:"coredns-7c65d6cfc9-98zzs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93f5ba53e98", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:04.785479 containerd[1571]: 2025-09-13 00:02:04.741 [INFO][6598] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Sep 13 00:02:04.785479 containerd[1571]: 2025-09-13 00:02:04.741 [INFO][6598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" iface="eth0" netns="" Sep 13 00:02:04.785479 containerd[1571]: 2025-09-13 00:02:04.741 [INFO][6598] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Sep 13 00:02:04.785479 containerd[1571]: 2025-09-13 00:02:04.741 [INFO][6598] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Sep 13 00:02:04.785479 containerd[1571]: 2025-09-13 00:02:04.767 [INFO][6608] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" HandleID="k8s-pod-network.936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Workload="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:02:04.785479 containerd[1571]: 2025-09-13 00:02:04.767 [INFO][6608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.785479 containerd[1571]: 2025-09-13 00:02:04.767 [INFO][6608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:04.785479 containerd[1571]: 2025-09-13 00:02:04.773 [WARNING][6608] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" HandleID="k8s-pod-network.936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Workload="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:02:04.785479 containerd[1571]: 2025-09-13 00:02:04.773 [INFO][6608] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" HandleID="k8s-pod-network.936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Workload="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:02:04.785479 containerd[1571]: 2025-09-13 00:02:04.775 [INFO][6608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.785479 containerd[1571]: 2025-09-13 00:02:04.780 [INFO][6598] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Sep 13 00:02:04.785479 containerd[1571]: time="2025-09-13T00:02:04.785232374Z" level=info msg="TearDown network for sandbox \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\" successfully" Sep 13 00:02:04.785479 containerd[1571]: time="2025-09-13T00:02:04.785260427Z" level=info msg="StopPodSandbox for \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\" returns successfully" Sep 13 00:02:04.787360 containerd[1571]: time="2025-09-13T00:02:04.787317885Z" level=info msg="RemovePodSandbox for \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\"" Sep 13 00:02:04.787360 containerd[1571]: time="2025-09-13T00:02:04.787356679Z" level=info msg="Forcibly stopping sandbox \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\"" Sep 13 00:02:04.895809 containerd[1571]: 2025-09-13 00:02:04.839 [WARNING][6625] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4e3bb006-8c83-4e24-90c2-984bdd355c97", ResourceVersion:"1290", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba9d899a09b100694fb801a8acd44dca422d60c13919ecbb2e2e3d0b5018ec3d", Pod:"coredns-7c65d6cfc9-98zzs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93f5ba53e98", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:04.895809 containerd[1571]: 2025-09-13 00:02:04.839 [INFO][6625] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Sep 13 00:02:04.895809 containerd[1571]: 2025-09-13 00:02:04.839 [INFO][6625] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" iface="eth0" netns="" Sep 13 00:02:04.895809 containerd[1571]: 2025-09-13 00:02:04.839 [INFO][6625] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Sep 13 00:02:04.895809 containerd[1571]: 2025-09-13 00:02:04.839 [INFO][6625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Sep 13 00:02:04.895809 containerd[1571]: 2025-09-13 00:02:04.879 [INFO][6634] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" HandleID="k8s-pod-network.936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Workload="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:02:04.895809 containerd[1571]: 2025-09-13 00:02:04.879 [INFO][6634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.895809 containerd[1571]: 2025-09-13 00:02:04.879 [INFO][6634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:04.895809 containerd[1571]: 2025-09-13 00:02:04.886 [WARNING][6634] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" HandleID="k8s-pod-network.936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Workload="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:02:04.895809 containerd[1571]: 2025-09-13 00:02:04.886 [INFO][6634] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" HandleID="k8s-pod-network.936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Workload="localhost-k8s-coredns--7c65d6cfc9--98zzs-eth0" Sep 13 00:02:04.895809 containerd[1571]: 2025-09-13 00:02:04.888 [INFO][6634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.895809 containerd[1571]: 2025-09-13 00:02:04.892 [INFO][6625] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13" Sep 13 00:02:04.895809 containerd[1571]: time="2025-09-13T00:02:04.895739755Z" level=info msg="TearDown network for sandbox \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\" successfully" Sep 13 00:02:04.905366 containerd[1571]: time="2025-09-13T00:02:04.905309713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:04.905493 containerd[1571]: time="2025-09-13T00:02:04.905438246Z" level=info msg="RemovePodSandbox \"936cd273b6e769b8daa0a38ec0853dbfcde505faa5f09df1bf4500c677912d13\" returns successfully" Sep 13 00:02:04.906192 containerd[1571]: time="2025-09-13T00:02:04.906162615Z" level=info msg="StopPodSandbox for \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\"" Sep 13 00:02:04.988096 containerd[1571]: 2025-09-13 00:02:04.950 [WARNING][6651] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--wc4bn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7f317c9d-be74-45c8-9172-3b8f509591a6", ResourceVersion:"1451", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679", Pod:"goldmane-7988f88666-wc4bn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali63cb5088a27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:04.988096 containerd[1571]: 2025-09-13 00:02:04.951 [INFO][6651] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Sep 13 00:02:04.988096 containerd[1571]: 2025-09-13 00:02:04.951 [INFO][6651] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" iface="eth0" netns="" Sep 13 00:02:04.988096 containerd[1571]: 2025-09-13 00:02:04.951 [INFO][6651] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Sep 13 00:02:04.988096 containerd[1571]: 2025-09-13 00:02:04.951 [INFO][6651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Sep 13 00:02:04.988096 containerd[1571]: 2025-09-13 00:02:04.974 [INFO][6660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" HandleID="k8s-pod-network.30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Workload="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:02:04.988096 containerd[1571]: 2025-09-13 00:02:04.974 [INFO][6660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.988096 containerd[1571]: 2025-09-13 00:02:04.974 [INFO][6660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:04.988096 containerd[1571]: 2025-09-13 00:02:04.980 [WARNING][6660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" HandleID="k8s-pod-network.30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Workload="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:02:04.988096 containerd[1571]: 2025-09-13 00:02:04.980 [INFO][6660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" HandleID="k8s-pod-network.30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Workload="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:02:04.988096 containerd[1571]: 2025-09-13 00:02:04.981 [INFO][6660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.988096 containerd[1571]: 2025-09-13 00:02:04.984 [INFO][6651] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Sep 13 00:02:04.988510 containerd[1571]: time="2025-09-13T00:02:04.988175038Z" level=info msg="TearDown network for sandbox \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\" successfully" Sep 13 00:02:04.988510 containerd[1571]: time="2025-09-13T00:02:04.988231113Z" level=info msg="StopPodSandbox for \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\" returns successfully" Sep 13 00:02:04.988890 containerd[1571]: time="2025-09-13T00:02:04.988870632Z" level=info msg="RemovePodSandbox for \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\"" Sep 13 00:02:04.988952 containerd[1571]: time="2025-09-13T00:02:04.988896821Z" level=info msg="Forcibly stopping sandbox \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\"" Sep 13 00:02:05.062728 containerd[1571]: 2025-09-13 00:02:05.024 [WARNING][6677] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--wc4bn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7f317c9d-be74-45c8-9172-3b8f509591a6", ResourceVersion:"1451", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e87193152598e0bb3c20f8d9ccd4bdbe6fa7cfc482e3f912800b8dfb8984679", Pod:"goldmane-7988f88666-wc4bn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali63cb5088a27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:05.062728 containerd[1571]: 2025-09-13 00:02:05.024 [INFO][6677] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Sep 13 00:02:05.062728 containerd[1571]: 2025-09-13 00:02:05.024 [INFO][6677] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" iface="eth0" netns="" Sep 13 00:02:05.062728 containerd[1571]: 2025-09-13 00:02:05.024 [INFO][6677] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Sep 13 00:02:05.062728 containerd[1571]: 2025-09-13 00:02:05.024 [INFO][6677] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Sep 13 00:02:05.062728 containerd[1571]: 2025-09-13 00:02:05.046 [INFO][6685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" HandleID="k8s-pod-network.30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Workload="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:02:05.062728 containerd[1571]: 2025-09-13 00:02:05.047 [INFO][6685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:05.062728 containerd[1571]: 2025-09-13 00:02:05.047 [INFO][6685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:05.062728 containerd[1571]: 2025-09-13 00:02:05.054 [WARNING][6685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" HandleID="k8s-pod-network.30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Workload="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:02:05.062728 containerd[1571]: 2025-09-13 00:02:05.054 [INFO][6685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" HandleID="k8s-pod-network.30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Workload="localhost-k8s-goldmane--7988f88666--wc4bn-eth0" Sep 13 00:02:05.062728 containerd[1571]: 2025-09-13 00:02:05.055 [INFO][6685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:05.062728 containerd[1571]: 2025-09-13 00:02:05.059 [INFO][6677] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06" Sep 13 00:02:05.062728 containerd[1571]: time="2025-09-13T00:02:05.062629991Z" level=info msg="TearDown network for sandbox \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\" successfully" Sep 13 00:02:05.067699 containerd[1571]: time="2025-09-13T00:02:05.067624910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:05.067794 containerd[1571]: time="2025-09-13T00:02:05.067748374Z" level=info msg="RemovePodSandbox \"30225f3ab7b1cc8140d2b604ee313878ec8253d578698612b8bef411a1ba7f06\" returns successfully" Sep 13 00:02:06.421942 systemd[1]: Started sshd@28-10.0.0.22:22-10.0.0.1:39136.service - OpenSSH per-connection server daemon (10.0.0.1:39136). Sep 13 00:02:06.500030 sshd[6692]: Accepted publickey for core from 10.0.0.1 port 39136 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:02:06.502811 sshd[6692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:06.509097 systemd-logind[1545]: New session 29 of user core. Sep 13 00:02:06.522582 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 13 00:02:06.915794 sshd[6692]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:06.921554 systemd[1]: sshd@28-10.0.0.22:22-10.0.0.1:39136.service: Deactivated successfully. Sep 13 00:02:06.927211 systemd-logind[1545]: Session 29 logged out. Waiting for processes to exit. Sep 13 00:02:06.927996 systemd[1]: session-29.scope: Deactivated successfully. Sep 13 00:02:06.930399 systemd-logind[1545]: Removed session 29.